Azure Application Part 2: Access Azure Table Storage
This is part 2 in this series where I am building an Azure shopping cart application from the ground up. In this post, I will create a simplified ASP.NET version of the wine catalog. We will create a table in developer storage (the local version of Azure Storage) to store our wines and write two web pages: one to view all wines and another to add a wine. We will then access the same table in the cloud in Azure Table Storage.
(In part 1 I prepared my environment by setting up my development environment, creating a Windows Azure Account and creating a Hosted Service account. I then created a Web Cloud Service project in Visual Studio and wrote a very simple “Hello, World” example. I ran this sample locally and debugged it. I then deployed it to the cloud.)
Watch The Screencast
Add The 2 ASP.NET Web Pages
Let’s get started by adding the 2 ASP.NET web pages we are going to be working with in this post. We want a page to view all wines (we will work with paging in a subsequent post) and a page to add a wine. I don’t plan on illustrating editing a wine, but if I receive any feedback – hint, hint – I can change my plans.
- Right-Click on the AzureStore_WebRole project > Add > New Item
- Choose Web > Web Form > Name: ViewWines.aspx
- Add a GridView (WineGridView) and a Button (AddWineButton)
- Follow the same process as above to create a web form named AddWine.aspx
- Add the following controls using whatever layout you wish (I know – tables are so 2002. By the way, do you know when it is actually good practice to use a table? When you want a table.)
- Download the images folder that contains the wine images from here
- Add the Images folder (and its contents) to the AzureStore_WebRole project
We’ll get back to these pages after we have our table set up…
A Brief Overview of Azure Tables
Azure Storage Tables are table-like structured storage. I say table-like because they do not conform to the rows / columns structure we are used to for tables. Azure Tables store Entities which are analogous to a row. Entities are comprised of name-value pairs called Properties. It is important to understand that Azure Tables do not enforce schemas. Therefore, a table can store entities with wholly different structures. However, the developer can choose to enforce schemas via code.
The Storage Account acts as the parent namespace for Table Storage, therefore you will not have any naming collisions with others that have similarly named tables. Here is the best visual I could come up with to illustrate the structure:
You may have noticed that, unlike some other cloud storage systems, the properties are typed. The supported types are Binary, Bool, DateTime, Double, GUID, Int, Int64 and String.
There are 3 properties that every entity must have! They are:
- PartitionKey (string) – Tables are partitioned. The goal of Table Storage is to support massive scalability and availability, so entities are load balanced across storage nodes. The partition key tells the infrastructure that entities with the same partition key need to be stored together. The partition key forms half of the entity’s primary key.
- RowKey (string) – The RowKey is the other half of the dual key.
- Timestamp (DateTime) – This property is used to store the last modified date and is used for things like optimistic concurrency.
Setting Up For Azure Tables
In a previous post, I discussed ad nauseam the REST API for Azure Table Storage, the role of ADO.NET Data Services, as well as the purpose of the StorageClient application so I am not going to go into much detail here. To summarize that post bullet style in less than 500 words:
- ADO.NET Data Services is a framework that allows one to easily expose and consume data services.
- The framework includes a server library that exposes data securely as RESTful services, as well as client libraries to enable consuming these services easily. One such library is the .NET Client Library.
- Azure Table Storage implements a REST API that is compliant with the ADO.NET Data Services API, with some minor differences (for a listing of those differences, see this documentation).
- One of the notable differences is the Shared Key (Lite) Authentication extension. This is how requests to Azure Table Storage are authenticated.
- The Windows Azure SDK shipped with a StorageClient sample application that extends the .NET Client Library, adding functionality such as signing the requests.
We are going to take advantage of the StorageClient sample application, using it as our API. The nice thing about StorageClient and the .NET Client Library is that they completely abstract away the complexity of making these RESTful calls. They hide the HTTP calls, the serialization and deserialization, as well as the message signing. We just work with objects. Let’s get started coding by creating the object that will represent our entity.
Create The Object That Represents Our Entity
We need to start by creating a class that represents our entity. When fetching an entity from Table Storage, it will be deserialized into this type. We will also use this entity for actions. Let’s get started.
- Add a reference to System.Data.Services.Client
- Set a reference to the StorageClient library. You can either add the sample project to the solution and add a project reference or you can add a reference to the compiled dll. I am going to add the StorageClient project to the solution and add a project reference.
- Add a new class named Wine.cs to the AzureStore_WebRole project
- Add 5 properties to the class.
- string ShortWineName (can be auto-implemented)
- string WineLabelUri (can be auto-implemented)
- string Vintage (can be auto-implemented)
- double BottlePrice (can be auto-implemented)
- int WineID (do not make it auto-implemented)
- Derive the class from TableStorageEntity in StorageClient.
- You will have to import the Microsoft.Samples.ServiceHosting.StorageClient namespace
- Remember that every entity in Table Storage requires 3 properties: PartitionKey, RowKey and Timestamp. The TableStorageEntity base class provides these properties.
- Every entity in ADO.NET DataServices needs to have a unique key. ADO.NET DataServices uses a DataServiceKeyAttribute to annotate what the unique key is. As pointed out earlier, the PartitionKey and the RowKey form a dual key.
- Add a default constructor (this will be called upon deserialization), as well as a convenience constructor. In the overloaded constructor we can call the base class constructor, passing the PartitionKey and the RowKey. The base constructor simply sets those properties. In this example I am not partitioning wines, so I set the PartitionKey to be the same for all wines. I may address partitioning in a later post.
- If the convenience constructor is called, our entity is set up correctly and can be added to Table Storage. If the default constructor is called, you will note that the PartitionKey is set, but the RowKey is not. EVERY entity in Table Storage requires a PartitionKey, RowKey and Timestamp. The infrastructure handles the Timestamp, so we only need to set the other two properties. It should now be clear why I didn’t use an auto-implemented property for WineID: we need to set the RowKey in the setter.
- That is it for the Wine Class. Here is the complete code:
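A sketch of the completed class is below. It assumes the TableStorageEntity base constructor (partitionKey, rowKey) and the DataServiceKey attribute shapes from the SDK’s StorageClient sample, so adjust the names if your copy of the sample differs:

```csharp
using System.Data.Services.Common;
using Microsoft.Samples.ServiceHosting.StorageClient;

namespace AzureStore_WebRole
{
    // PartitionKey + RowKey form the dual key that ADO.NET Data Services
    // uses to identify an entity. TableStorageEntity supplies the
    // PartitionKey, RowKey and Timestamp properties.
    [DataServiceKey("PartitionKey", "RowKey")]
    public class Wine : TableStorageEntity
    {
        private int _wineID;

        // Default constructor: called by the client library on deserialization.
        public Wine()
        {
            PartitionKey = "Wines"; // single partition for all wines in this sample
        }

        // Convenience constructor: the base constructor sets
        // both the PartitionKey and the RowKey for us.
        public Wine(int wineID)
            : base("Wines", wineID.ToString())
        {
            _wineID = wineID;
        }

        public string ShortWineName { get; set; }
        public string WineLabelUri { get; set; }
        public string Vintage { get; set; }
        public double BottlePrice { get; set; }

        // Not auto-implemented: the setter must also set the RowKey
        // so an entity created via the default constructor is valid.
        public int WineID
        {
            get { return _wineID; }
            set
            {
                _wineID = value;
                RowKey = value.ToString();
            }
        }
    }
}
```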
Create Our DataServiceContext Class
In a previous post detailing the REST API for Azure Table Storage I discussed the role of the DataServiceContext, as well as what the StorageClient’s TableStorageDataServiceContext class brought to the table. To summarize, the DataServiceContext (part of the ADO.NET Data Services .NET Client Library) maintains state between interactions in order to support features such as identity resolution and optimistic concurrency. The TableStorageDataServiceContext subclasses the DataServiceContext, adding functionality such as digitally signing the requests for authentication purposes. Here we will create our TableStorageDataServiceContext:
- Add a class named CohoContext to the AzureStore_WebRole project
- Derive the class from TableStorageDataServiceContext. You will need to import the Microsoft.Samples.ServiceHosting.StorageClient namespace.
- We now need to add a DataServiceQuery, exposing Wines, to the class. Quoting Mike Flasko and Elisa Flasko from Expose And Consume Data in A Web Services World, “The DataServiceQuery object represents a specific query against the store defined using the URI syntax”.
- That’s it for the DataServiceContext Class
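The whole context class fits in a few lines. This sketch assumes the StorageClient sample’s TableStorageDataServiceContext constructor that takes a StorageAccountInfo:

```csharp
using System.Data.Services.Client;
using Microsoft.Samples.ServiceHosting.StorageClient;

namespace AzureStore_WebRole
{
    public class CohoContext : TableStorageDataServiceContext
    {
        public CohoContext(StorageAccountInfo accountInfo)
            : base(accountInfo)
        {
        }

        // Exposes the Wines table as a query against the store.
        // CreateQuery<T> comes from the base DataServiceContext.
        public DataServiceQuery<Wine> Wines
        {
            get { return CreateQuery<Wine>("Wines"); }
        }
    }
}
```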
Add Configuration Settings
We need to store the Table Storage configuration settings (our AccountName, the AccountSharedKey used to sign the requests, and the TableStorageEndpoint, which is the URI to our Table Storage). Remember that we may have multiple instances of our roles, so we store the configuration settings in the cloud service project. Specifically, we add the configuration definitions in a csdef file, with the actual values in a cscfg file. Let’s add our settings.
- Open the ServiceDefinition.csdef file from the AzureStore cloud service project.
- Add a ConfigurationSettings node just below the closing InputEndpoints element.
- Add 3 Settings: AccountName, AccountSharedKey and TableStorageEndpoint
- Open the ServiceConfiguration.cscfg file from the AzureStore cloud service project.
- Add the Settings with the development store values. The development store values are the same for all machines. Here is the full key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
- We are done for now. We will update these settings later when we create our Table Storage project in the cloud. For now, we will work with the local development environment.
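For reference, the two files end up looking roughly like this (the AccountName and TableStorageEndpoint values shown are the well-known development storage defaults; the element nesting may differ slightly between SDK versions):

```xml
<!-- ServiceDefinition.csdef: inside the WebRole element,
     below the closing InputEndpoints element -->
<ConfigurationSettings>
  <Setting name="AccountName" />
  <Setting name="AccountSharedKey" />
  <Setting name="TableStorageEndpoint" />
</ConfigurationSettings>

<!-- ServiceConfiguration.cscfg: the matching values
     for local development storage -->
<ConfigurationSettings>
  <Setting name="AccountName" value="devstoreaccount1" />
  <Setting name="AccountSharedKey"
           value="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" />
  <Setting name="TableStorageEndpoint" value="http://127.0.0.1:10002/" />
</ConfigurationSettings>
```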
Create The Tables In The Development Store
The TableStorage class from the StorageClient sample has a static method named CreateTablesFromModel. You pass it your context type and your account information for authentication, and it will infer a list of tables from every property in the context class that is a DataServiceQuery. It will create those tables if they do not exist. However, this does not work for development storage. Not to worry, our friend Visual Studio knows how to do the same thing for us.
If you didn’t know, the developer storage actually uses SQL Server to store the data locally. Visual Studio can infer the list of tables as well, and it will create the local tables for you. Let’s do it:
- Save and build your project
- Right-Click on the cloud service project (AzureStore) and choose ‘Create Test Storage Tables’
- Wait for the confirmation
- (Optional) Open SQL Server Management Studio and look at the new database (AzureStorage) and table (Wine)
That is it. We are ready to write some code, accessing the data. Let’s get to it.
Build A Class To Encapsulate The Data Calls
- Add a class named CohoDB to the AzureStore_WebRole project
- Add GetWines, AddWine and InitializeTables static methods to the class. We will call InitializeTables when working with the cloud storage.
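A sketch of the class follows. It assumes the StorageClient sample’s StorageAccountInfo.GetDefaultTableStorageAccountFromConfiguration and TableStorage.CreateTablesFromModel helpers, which read the three settings we added above:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Samples.ServiceHosting.StorageClient;

namespace AzureStore_WebRole
{
    public static class CohoDB
    {
        // Builds account info from AccountName, AccountSharedKey and
        // TableStorageEndpoint in the service configuration.
        private static StorageAccountInfo AccountInfo
        {
            get
            {
                return StorageAccountInfo
                    .GetDefaultTableStorageAccountFromConfiguration();
            }
        }

        public static List<Wine> GetWines()
        {
            CohoContext context = new CohoContext(AccountInfo);
            return context.Wines.ToList();
        }

        public static void AddWine(Wine wine)
        {
            CohoContext context = new CohoContext(AccountInfo);
            context.AddObject("Wines", wine);
            context.SaveChanges();
        }

        // Creates the tables in cloud storage if they do not already exist.
        // We will call this when working with the cloud; development
        // storage tables are created by Visual Studio instead.
        public static void InitializeTables()
        {
            TableStorage.CreateTablesFromModel(typeof(CohoContext), AccountInfo);
        }
    }
}
```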
Update Our Web Pages
Now it is time to consume our calls. Let’s start with ViewWines.aspx.
- Update the Page_Load with the following Code
- (Optional) Format the GridView. You can watch the screencast to see how I formatted the grid.
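In the ViewWines.aspx code-behind, the Page_Load (plus a click handler for the AddWineButton to navigate to the add page) might look like this sketch:

```csharp
// ViewWines.aspx.cs
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Fetch all wines from Table Storage and bind them to the grid.
        WineGridView.DataSource = CohoDB.GetWines();
        WineGridView.DataBind();
    }
}

protected void AddWineButton_Click(object sender, EventArgs e)
{
    Response.Redirect("AddWine.aspx");
}
```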
Now let’s update AddWine.aspx
- Wire up an event handler for the SaveButton Click event
- Implement the event handler. Create the wine, call our AddWine method and redirect to the ViewWines.aspx page.
- Set ViewWines as your start page and run the application (F5)
- No wines exist so the grid will be empty
- Click the AddWines button, add a wine and click Save
- Repeat a few times
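The Save handler described above builds the entity, persists it and redirects. The TextBox names (WineIDTextBox and so on) are assumptions here, since they depend on the layout you chose for AddWine.aspx:

```csharp
// AddWine.aspx.cs
protected void SaveButton_Click(object sender, EventArgs e)
{
    // The convenience constructor sets the PartitionKey and RowKey,
    // so the entity is valid for Table Storage as soon as it is built.
    Wine wine = new Wine(int.Parse(WineIDTextBox.Text))
    {
        ShortWineName = ShortWineNameTextBox.Text,
        WineLabelUri = WineLabelUriTextBox.Text,
        Vintage = VintageTextBox.Text,
        BottlePrice = double.Parse(BottlePriceTextBox.Text)
    };

    CohoDB.AddWine(wine);
    Response.Redirect("ViewWines.aspx");
}
```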
Here is what you should see (if you formatted the grid like me):
I fully realize that this is still somewhat ugly. We will fix that in subsequent posts where we convert this to a Silverlight application. The goal of this post was to introduce how to access table storage and make use of it. The only thing we have left to do is to update the application to actually use the cloud and not the developer storage.
Consume Table Storage In The Cloud
This is easier than you might think. If you created a Windows Azure Account, received your invitation token and redeemed it, all you need to do is create a Storage Account.
- Open the Azure Services Developer Portal
- Click on Add Project
- Click on Storage Account
- Add a label, description and a name for your account. This name must be globally unique. Here is what you will see after you have created the project
- Open the ServiceConfiguration.cscfg file from the AzureStore cloud service project.
- Update the AccountName setting to the account name you just created
- Update the AccountSharedKey to the Primary Access Key you are given for your account
- Update the TableStorageEndpoint to “http://table.core.windows.net/”. DO NOT INCLUDE THE ACCOUNT NAME IN THIS SETTING. In my case, I might be tempted to set the TableStorageEndpoint setting to “http://bagby.table.core.windows.net”, but THAT WOULD BE WRONG and I will be punished for it.
Once I have made those changes, I can test it again – sort of. We have to do one small thing first: our Wine table was not created in the cloud. I need to make a call to TableStorage.CreateTablesFromModel in my code. It is good practice to treat this method call similarly to how you treat a CREATE DATABASE call in traditional applications. Many examples you will see look like the code below, with the call in a Get or some other action.
While this works, it is inefficient. I will keep it for this example, as I <insert some semi-valid excuse to hide the fact that I am lazy and this is the end of a long post>.
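A sketch of that lazy approach, with the table-creation call inlined into the read path (again, inefficient – it issues the create check on every read rather than once at deployment):

```csharp
public static List<Wine> GetWines()
{
    StorageAccountInfo accountInfo = StorageAccountInfo
        .GetDefaultTableStorageAccountFromConfiguration();

    // Ensure the Wines table exists before querying it.
    // Better: call this once at startup, the way you would
    // treat a CREATE DATABASE statement.
    TableStorage.CreateTablesFromModel(typeof(CohoContext), accountInfo);

    CohoContext context = new CohoContext(accountInfo);
    return context.Wines.ToList();
}
```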
Now we can access our data in the cloud. After you have added a few wines like you did in the local storage demo, open up your favorite HTTP sniffer. Mine is Fiddler. Take a look at the traffic to prove to yourself that you are accessing the cloud. Here is what it looks like for me:
In my next post, I will illustrate how you can expose your own custom services in Windows Azure. Catch you soon…