Demystifying The Code

Azure Table Storage, the REST and ADO.NET Data Services Story

Azure Table Storage is exposed via ADO.NET Data Services.  What the heck does that mean?  In this post, I will do my best to answer that question.  I will start with a brief overview of REST, the architectural style underlying ADO.NET Data Services.  As part of that topic, I will illustrate that all we are doing when accessing Table Storage is issuing REST calls.  I will then discuss the role ADO.NET Data Services plays, as well as the role of the StorageClient sample application.

 

REST Overview

REST is an architectural style that embraces the standards of the web such as HTTP and URI.  Because it is an “architectural style” there are endless debates as to what is RESTful and just what REST means.  I only intend to provide a general overview of REST here, as addressing that debate is beyond the scope of this post.  However, if you are interested in an in-depth understanding of REST, please read/view my 10+ part series of posts and screencasts on REST in WCF.

In general, exposing services using a RESTful architectural style simply means embracing and adhering to the standards of the web.  What the heck does that mean?  Here are some examples:

  • Treat everything as a resource and use URIs to uniquely identify resources.  My customers may be represented as http://www.robbagby.com/customers, whereas a single customer named fred may be represented as http://www.robbagby.com/customers/fred.  Finally, my customers that have not paid may be represented as http://www.robbagby.com/deadbeatcustomers.
  • Use the appropriate HTTP Verbs (GET, PUT, POST, DELETE) for the appropriate action.  Issue an HTTP GET to http://www.robbagby.com/customers/fred to fetch a representation of fred; issue an HTTP DELETE to the same URI to delete that resource.
  • Where appropriate, use standard representation formats (ATOM, XHTML, Microformats, etc.) to represent resources. 
  • Use HTTP Headers appropriately
    • Use HTTP Status Codes to alert the caller as to the result of the call.
    • The Accept HTTP Header can be used to alert the server to the representation format you would prefer.
    • Use the Authorization header for authorization credentials.

The above list is certainly not exhaustive, but it should give you a general idea of what REST is.
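To make these principles concrete, here is a minimal C# sketch (against the hypothetical customer URI above): it identifies the resource with a URI, expresses the action with the GET verb, states a preferred representation via the Accept header, and reads the status code off the response.

using System;
using System.IO;
using System.Net;

class RestGetExample
{
    static void Main()
    {
        // The URI identifies the resource; the verb expresses the action.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.robbagby.com/customers/fred");
        request.Method = "GET";
        request.Accept = "application/xml";   // the representation format we would prefer

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The status code tells us the result of the call; the body carries the representation.
            Console.WriteLine("Status: " + response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}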

 

Accessing Azure Table Storage via Fiddler

If Azure Table Storage is exposed via REST, then we should be able to access it via Fiddler, right?  Well, we can.  The task is made a bit complex due to the need for a valid authorization header.  When you create your Azure Storage project, a primary access key is created.  This is the key that you will use to sign your request.  The resulting signature gets passed in the Authorization HTTP header.  The server will use it to authenticate your request.  If you have only used the StorageClient sample application as your API into Table Storage, then the authorization has been hidden (rightfully so) from you. 

In order to illustrate my point, I wrote a very simplistic application that will generate the URI to my table, an appropriate authorization header and the other required HTTP headers.  The following is a screenshot.

[Screenshot: the sample application showing the generated URI, date and HTTP headers]

Here is how I did it: 

Some constants

[Screenshot: the constants]

I start by gathering the input.

[Screenshot: the code that gathers the input]

Next, I calculate some values:

  • The URI
  • The date, formatted according to RFC 1123
  • The string to sign
  • The signature

[Screenshot: the code that calculates the URI, date, string to sign and signature]

Lastly, I list the HTTP headers, including the Authorization header containing the signature.

[Screenshot: the code that lists the HTTP headers, including the Authorization header]
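Here is a rough console sketch of the same calculation.  It assumes the SharedKeyLite authentication scheme for the table service; the account name and key below are placeholders, and the details of my little app may differ slightly.

using System;
using System.Security.Cryptography;
using System.Text;

class TableRequestSigner
{
    static void Main()
    {
        // Inputs (placeholders, not real credentials).
        string accountName = "bagby";
        string sharedKey   = "<your primary access key (Base64) goes here>";
        string tableName   = "Wines";

        // The URI to the table.
        string uri = string.Format("http://{0}.table.core.windows.net/{1}", accountName, tableName);

        // The date, formatted according to RFC 1123.
        string date = DateTime.UtcNow.ToString("R");

        // The string to sign: for SharedKeyLite against the table service, this is the
        // date plus the canonicalized resource (/account/table).
        string stringToSign = date + "\n" + string.Format("/{0}/{1}", accountName, tableName);

        // Sign the string: HMAC-SHA256 keyed with the Base64-decoded access key.
        string signature;
        using (HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(sharedKey)))
        {
            signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }

        // The request and the headers to paste into Fiddler.
        Console.WriteLine("GET " + uri);
        Console.WriteLine("x-ms-date: " + date);
        Console.WriteLine("Authorization: SharedKeyLite {0}:{1}", accountName, signature);
        Console.WriteLine("Accept: application/atom+xml");
    }
}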

That’s it.  We can use the output of our little sample application to access the data in my Wines table.  I will be using the Request Builder in Fiddler to issue my HTTP GET to the URI I calculated (http://bagby.table.core.windows.net/Wines), passing the HTTP headers I calculated, as well.  Here is a screenshot of the Request:

[Screenshot: the request in Fiddler's Request Builder]

Here is a screenshot of (part of) the response:

[Screenshot: part of the response]

As you can see, under the hood, all we are doing is making basic REST calls to Azure Table Storage.  While this knowledge may seem useless, I assure you that it is not.  I just finished writing a sample application illustrating how you can consume Azure Table Storage from PHP (that will be the subject of another blog post), and this knowledge was indeed helpful.  Well, now that we know what is going on at the lowest level, let’s explore the role ADO.NET Data Services plays.

 

ADO.NET Data Services

The goal of ADO.NET Data Services is to provide a framework to easily expose and consume data services.  The framework includes a server library that exposes data securely as RESTful services, as well as client libraries to enable consuming these services easily.  With regards to Azure Table Storage, the Azure team took care of implementing the server piece.  They chose to expose Azure Table Storage data with ADO.NET Data Services.  Armed with this knowledge, we can consume these services in a couple of ways: 1) consume them by hand or 2) take advantage of client libraries.

Consuming ADO.NET Data Services By Hand

Because ADO.NET Data Services is REST-based, any stack that is able to issue an HTTP request can consume these services.  In order to do so, you need to know the URI addressing scheme, which HTTP verbs to use and what the representation format of the response is.  Let’s examine each a bit further.

ADO.NET Data Services has a well documented URI addressing scheme.  All URIs are made up of three sections: service root, resource path, and query options.  The service root is simply the location of the root of the ADO.NET Data Service (e.g. www.robbagby.com/CustomerService.svc).   The resource path section is the part of the URI that identifies a resource.  Lastly, the query options section is where you can “represent directives that ADO.NET Data Services use to control the response to a request”.  The details of each of these are well documented here.  Someone calling a service by hand has all of the information they need to manually construct the URI.
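For example, a request against a hypothetical customer service might break down like this:

  • Service root: http://www.robbagby.com/CustomerService.svc
  • Resource path: /Customers('fred')
  • Query options: ?$top=10

Put together, the URI would be http://www.robbagby.com/CustomerService.svc/Customers('fred')?$top=10.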

The framework essentially uses HTTP as the API.  As such, the verbs like POST, GET, PUT, and DELETE are used for basic CRUD operations.  For instance, issuing an HTTP GET request to a URI will fetch that resource, whereas issuing a DELETE to the same URI will perform a delete action on that same resource.

ADO.NET Data Services supports standard representation formats such as ATOM and JSON.  This makes it easier to work with the response.  These are well documented formats, and many stacks have serializers that understand how to work with them.

To summarize, it is clearly possible to consume ADO.NET Data Services manually.  In fact, we illustrated that in the demo above.  The process begins with constructing the URI to the appropriate resource.  The next step is to issue the HTTP Request with the appropriate HTTP headers and HTTP method.  The final step is to handle the response by parsing it or taking advantage of a serializer (we skipped this step above).

Taking Advantage of Client Libraries

While it is possible to consume these services by hand, it is certainly not easy.  For that reason, ADO.NET Data Services provides client libraries that allow differing technologies to access these services easily.  These libraries abstract away the complexities of issuing HTTP requests, as well as provide the ability to handle the response as objects.  Currently, there are libraries for .NET, Silverlight and AJAX clients.

The following excerpt from Expose And Consume Data in A Web Services World by Mike Flasko and Elisa Flasko does a good job of providing a high-level overview of the Silverlight and .NET client libraries for ADO.NET Data Services:

“The .NET Framework and Silverlight client libraries consist of two primary types, the DataServiceContext class and the DataServiceQuery class. DataServiceContext represents the runtime context for a given data service. Data services themselves are stateless; however, the context in which the developer interacts is not, and state on the client is maintained between interactions in order to support features such as identity resolution and optimistic concurrency. The DataServiceQuery object represents a specific query against the store defined using the URI syntax. To execute a query and obtain the results in the form of .NET objects, you simply enumerate over the query object. You can do so using the standard "foreach" construct in C# or "For Each" in Visual Basic®, for example.”

The following example illustrates the ease with which one can consume ADO.NET Data Services data with the .NET client library.  Please note that this example is consuming an ADO.NET Data Service exposing an Entity Framework Entity Data Model, not Azure Table Storage data (we’ll get into that later).

[Screenshot: code consuming the service via the .NET client library]
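The code looks roughly like the following sketch.  CohoEntities, Wine and WineType stand in for the generated proxy types, and WineName is an illustrative property.

using System;
using System.Linq;

class ClientLibraryExample
{
    static void Main()
    {
        // CohoEntities is the generated DataServiceContext; Wines is a DataServiceQuery<Wine>.
        CohoEntities context = new CohoEntities(new Uri("http://localhost/AstoriaSample/Coho.svc"));

        // This LINQ query is translated into the URI shown below; nothing goes over the wire yet.
        var reds = from wine in context.Wines.Expand("WineType")
                   where wine.WineType.WineTypeName == "Reds"
                   select wine;

        // Enumerating the query issues the HTTP GET and deserializes the response
        // into Wine and WineType objects.
        foreach (var wine in reds)
        {
            Console.WriteLine("{0} ({1})", wine.WineName, wine.WineType.WineTypeName);
        }
    }
}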

We started by creating an instance of the client library type DataServiceContext.  Our context happened to expose a DataServiceQuery named Wines.  We simply wrote a nice LINQ query, targeting that DataServiceQuery.  Under the hood, the LINQ query was evaluated into the appropriate URI.  The URI looks something like this:

http://localhost/AstoriaSample/Coho.svc/Wines()?$filter=WineType/WineTypeName eq 'Reds'&$expand=WineType

When we began to iterate over the results, an HTTP GET was issued to that URI.  The response was automatically deserialized into client-side Wine and WineType proxy classes for us.

So where did we get these client proxy classes (the DataServiceContext, the Wine class, etc.)?   We got them either by calling DataSvcUtil.exe directly or by right-clicking on our project and choosing “Add Service Reference”, passing the service root URI.  Either way, all of the proxy classes are generated automatically.  While, in many ways, these are really just implementation details that are (and should be) abstracted away from you, having a high-level understanding of what they are goes a long way in understanding the role of the StorageClient Azure sample.
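For example, generating the proxies from the command line looks something like this (the service URI is the one from the earlier example, and the output file name is arbitrary):

datasvcutil /uri:http://localhost/AstoriaSample/Coho.svc /out:CohoProxies.cs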

 

The StorageClient Sample Application

The Windows Azure SDK shipped with a sample called StorageClient.  At the time of this writing, this sample is really the API you will typically use to consume Azure Table Storage if you are coding in .NET.  That probably seems confusing, as I just finished explaining that we already have a great .NET client library for ADO.NET Data Services.  Don’t worry, the sample still uses that library.  However, the StorageClient adds some Azure-specific logic, the most important of which is the logic to digitally sign the requests.

As you can see below, the StorageClient includes a TableStorageDataServiceContext that derives from the .NET client library’s DataServiceContext.  This subclass wires up an event handler for the SendingRequest event.  In that handler resides the signing logic.  All of this is abstracted neatly away from the consumer.

[Screenshot: the TableStorageDataServiceContext class wiring up the SendingRequest event]

[Screenshot: the SendingRequest handler containing the signing logic]
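To give you a feel for the shape of that code, here is a simplified sketch.  It is not the actual sample source; the member names and signing details are illustrative (and follow the SharedKeyLite calculation from the Fiddler walkthrough above).

using System;
using System.Data.Services.Client;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public class TableStorageDataServiceContext : DataServiceContext
{
    private readonly string accountName;
    private readonly string sharedKey;

    public TableStorageDataServiceContext(Uri baseAddress, string accountName, string sharedKey)
        : base(baseAddress)
    {
        this.accountName = accountName;
        this.sharedKey = sharedKey;

        // Every request the context sends passes through this handler before it goes out.
        this.SendingRequest += new EventHandler<SendingRequestEventArgs>(OnSendingRequest);
    }

    private void OnSendingRequest(object sender, SendingRequestEventArgs e)
    {
        HttpWebRequest request = (HttpWebRequest)e.Request;

        // Add the date header and the signature, just as in the Fiddler walkthrough.
        string date = DateTime.UtcNow.ToString("R");
        request.Headers.Add("x-ms-date", date);

        string stringToSign = date + "\n" + "/" + accountName + request.RequestUri.AbsolutePath;
        string signature;
        using (HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(sharedKey)))
        {
            signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }

        request.Headers.Add("Authorization",
            string.Format("SharedKeyLite {0}:{1}", accountName, signature));
    }
}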

 

Consuming Azure Table Storage via the StorageClient

Below is some code that targets Azure Table Storage via the StorageClient.  You will notice that the pattern is roughly the same as before.  The caller need not be aware of the complexity of the call or of signing the message.  All they need to do is set up their configuration correctly (three settings) and have at it.

[Screenshot: code consuming Azure Table Storage via the StorageClient]
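The calling pattern looks roughly like this sketch.  The StorageAccountInfo helper, the TableStorageEntity base class and the configuration-driven account come from the SDK sample; the Wine entity and its WineName property are illustrative, and the exact type names may differ between SDK versions.

using System;
using Microsoft.Samples.ServiceHosting.StorageClient;

// An illustrative entity; TableStorageEntity supplies PartitionKey, RowKey and Timestamp.
public class Wine : TableStorageEntity
{
    public string WineName { get; set; }
}

class StorageClientExample
{
    static void Main()
    {
        // Reads the account name, shared key and table storage endpoint from configuration.
        StorageAccountInfo account = StorageAccountInfo.GetDefaultTableStorageAccountFromConfiguration();

        // Derives from DataServiceContext and signs every outgoing request for us.
        TableStorageDataServiceContext context = new TableStorageDataServiceContext(account);

        foreach (Wine wine in context.CreateQuery<Wine>("Wines"))
        {
            Console.WriteLine(wine.WineName);
        }
    }
}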

There is certainly more to the StorageClient than I have shown, but I hope that this post has helped you understand a little about how Azure Table Storage is exposed RESTfully, as well as the role of ADO.NET Data Services and the StorageClient sample application.
