Introducing Storage Client Library 2.1 RC for .NET and Windows Phone 8

We are pleased to announce the public availability of the 2.1 Release Candidate (RC) build of the storage client library for .NET and Windows Phone 8. The 2.1 release includes expanded feature support, which this blog will detail.

Why RC?

We have spent significant effort in releasing the storage clients on a more frequent cadence as well as becoming more responsive to client feedback. As we continue that effort, we wanted to provide an RC of our next release, so that you can provide us feedback that we might be able to address prior to the “official” release. Getting your feedback is the goal of this release candidate, so please let us know what you think.

What’s New?

This release includes a number of new features, many of which came directly from client feedback (so please keep it coming). They are detailed below.

Async Task Methods

Each public API now exposes an Async method that returns a task for a given operation. Additionally, these methods support pre-emptive cancellation via an overload which accepts a CancellationToken. If you are running under .NET 4.5, or using the Async Targeting Pack for .NET 4.0, you can easily leverage the async / await pattern when writing your applications against storage.
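
As a quick, minimal sketch (the connection string and container name here are placeholders, and the snippet assumes it runs inside an async method), a container create call can be awaited directly and cancelled pre-emptively via a token:

 CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
 CloudBlobClient blobClient = account.CreateCloudBlobClient();
 CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");

 // Task-returning methods compose naturally with async / await.
 await container.CreateIfNotExistsAsync();

 // Overloads accepting a CancellationToken allow pre-emptive cancellation.
 CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
 await container.CreateIfNotExistsAsync(cts.Token);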

Table IQueryable

In 2.1 we are adding IQueryable support for the Table Service layer on desktop and phone. This allows users to construct and execute queries via LINQ, similar to WCF Data Services; however, this implementation has been specifically optimized for Windows Azure Tables and NoSQL concepts. The snippet below illustrates constructing a query via the new IQueryable implementation:

 

 var query = from ent in currentTable.CreateQuery<CustomerEntity>()
             where ent.PartitionKey == "users" && ent.RowKey == "joe"
             select ent;

 

The IQueryable implementation transparently handles continuations, and has support for adding RequestOptions, OperationContext, and client-side EntityResolvers directly into the expression tree. To begin using it, add a using directive for the Microsoft.WindowsAzure.Storage.Table.Queryable namespace and construct a query via the CloudTable.CreateQuery<T>() method. Additionally, since this builds on the existing infrastructure, optimizations such as IBufferManager, Compiled Serializers, and Logging are fully supported.
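
For instance, simply enumerating the query below executes it lazily, following continuation tokens behind the scenes until all matching entities have been returned (CustomerEntity and currentTable are the same illustrative names used in the snippet above):

 // Requires: using Microsoft.WindowsAzure.Storage.Table.Queryable;
 var customers = from ent in currentTable.CreateQuery<CustomerEntity>()
                 where ent.PartitionKey == "users"
                 select ent;

 // Enumeration transparently follows continuation tokens.
 foreach (CustomerEntity customer in customers)
 {
     Console.WriteLine(customer.RowKey);
 }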

Buffer Pooling

For high scale applications, Buffer Pooling is a great strategy to allow clients to re-use existing buffers across many operations. In a managed environment such as .NET, this can dramatically reduce the number of cycles spent allocating and subsequently garbage collecting semi-long lived buffers.

To address this scenario, each service client now exposes a BufferManager property of type IBufferManager. This property allows clients to leverage a given buffer pool with any objects associated with that service client instance. For example, all CloudTable objects created via CloudTableClient.GetTableReference() would make use of the associated service client's BufferManager. The IBufferManager is patterned after the BufferManager in System.ServiceModel.dll to allow desktop clients to easily leverage an existing implementation provided by the framework. (Clients running on other platforms such as WinRT or Windows Phone may implement a pool against the IBufferManager interface.)
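
The sketch below shows one way to wire a pool into a table client by adapting System.ServiceModel's BufferManager. The adapter type name, buffer sizes, and the account variable are illustrative, and the sketch assumes IBufferManager exposes TakeBuffer, ReturnBuffer, and GetDefaultBufferSize members:

 // Illustrative adapter over System.ServiceModel.Channels.BufferManager.
 public class WCFBufferManagerAdapter : IBufferManager
 {
     private readonly BufferManager manager;
     private readonly int defaultBufferSize;

     public WCFBufferManagerAdapter(BufferManager manager, int defaultBufferSize)
     {
         this.manager = manager;
         this.defaultBufferSize = defaultBufferSize;
     }

     public byte[] TakeBuffer(int bufferSize)
     {
         return this.manager.TakeBuffer(bufferSize);
     }

     public void ReturnBuffer(byte[] buffer)
     {
         this.manager.ReturnBuffer(buffer);
     }

     public int GetDefaultBufferSize()
     {
         return this.defaultBufferSize;
     }
 }

 // Associate the pool with a service client; tables created from this client
 // will draw their buffers from the shared pool. 'account' is assumed to be
 // an existing CloudStorageAccount.
 CloudTableClient tableClient = account.CreateCloudTableClient();
 tableClient.BufferManager = new WCFBufferManagerAdapter(
     BufferManager.CreateBufferManager(32 * 1024 * 1024, 64 * 1024), 64 * 1024);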

Multi-Buffer Memory Stream

During the course of our performance investigations we uncovered a few performance issues with the MemoryStream class provided in the BCL (specifically regarding async operations, dynamic length behavior, and single-byte operations). To address these issues we have implemented a new multi-buffer memory stream which provides consistent performance even when the length of the data is unknown. This class leverages the IBufferManager, if one is provided by the client, to draw additional buffers from the buffer pool. As a result, any operation on any service that potentially buffers data (blob streams, table operations, etc.) now consumes less CPU and makes optimal use of a shared memory pool.

.NET MD5 is now default

Our performance testing highlighted a slight performance degradation when utilizing the FISMA-compliant native MD5 implementation compared to the built-in .NET implementation. As such, for this release the .NET MD5 is now used by default; any clients requiring FISMA compliance can re-enable the native implementation as shown below:

 

 CloudStorageAccount.UseV1MD5 = false;

 

Client Tracing

The 2.1 release implements .NET tracing, allowing users to log information regarding request execution and REST requests (see below for a table of what information is logged). Additionally, Windows Azure Diagnostics provides a trace listener that can redirect client trace messages to the WADLogsTable if users wish to persist these traces to the cloud.

To enable tracing in .NET, you must add a trace source for the storage client to app.config and set the verbosity:

 

  <system.diagnostics>
    <sources>
      <source name="Microsoft.WindowsAzure.Storage">
        <listeners>
          <add name="myListener"/>
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="Microsoft.WindowsAzure.Storage" value="Verbose" />
    </switches>
…

 

 

The application is now set to log all trace messages created by the storage client up to the Verbose level. However, if a client wishes to enable logging only for specific clients or requests, they can further configure the default logging level in their application by setting OperationContext.DefaultLogLevel and then opt in any specific requests via the OperationContext object:

 

 // Disable default logging
 OperationContext.DefaultLogLevel = LogLevel.Off;

 // Configure a context to track my upload and set its logging level to verbose
 OperationContext myContext = new OperationContext() { LogLevel = LogLevel.Verbose };

 blobRef.UploadFromStream(stream, null /* accessCondition */, null /* options */, myContext);

 

New Blob APIs

In 2.1 we have added Blob Text, File, and Byte Array APIs based on feedback from clients. Additionally, Blob Streams can now be opened, flushed, and committed asynchronously via new Blob Stream APIs.
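
A brief sketch of how these convenience APIs might be used; the member names shown (UploadText, DownloadText, OpenWriteAsync, CommitAsync) are based on the description above and may differ slightly in the final build, and 'container' is the illustrative container from the earlier snippet:

 CloudBlockBlob blob = container.GetBlockBlobReference("notes.txt");

 // Text convenience methods (names assumed from the release notes).
 blob.UploadText("Hello, Windows Azure Storage!");
 string contents = blob.DownloadText();

 // Blob streams can now be opened, written, and committed asynchronously.
 CloudBlobStream writeStream = await blob.OpenWriteAsync();
 byte[] payload = Encoding.UTF8.GetBytes(contents);
 await writeStream.WriteAsync(payload, 0, payload.Length);
 await writeStream.CommitAsync();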

New Range Based Overloads

In 2.1, Blob upload APIs include overloads which allow clients to upload only a given range of a byte array or stream to the blob. This feature allows clients to avoid pre-buffering data prior to uploading it to the storage service. Additionally, there are new download range APIs for both streams and byte arrays that allow efficient, fault-tolerant range downloads without the need to buffer any data on the client side.
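
For example, a client holding a large buffer can upload just a slice of it and pull back a specific byte range of a blob without any intermediate copies. The parameter order shown (index and count on upload; target index, blob offset, and length on download) is our reading of the new overloads, and GetLargeBuffer is a hypothetical helper:

 byte[] buffer = GetLargeBuffer(); // hypothetical helper returning a 4 MB buffer

 // Upload only the first 1 MB of the array (index, count).
 blob.UploadFromByteArray(buffer, 0, 1024 * 1024);

 // Download a 1 MB range of the blob directly into a byte array
 // (target index, blob offset, length).
 byte[] target = new byte[1024 * 1024];
 blob.DownloadRangeToByteArray(target, 0, 0, 1024 * 1024);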

IgnorePropertyAttribute

When persisting POCO objects to Windows Azure Tables, clients may in some cases wish to omit certain client-only properties. In this release we are introducing the IgnorePropertyAttribute, which gives clients an easy way to ignore a given property during serialization and de-serialization of an entity. The following snippet illustrates how to ignore the FirstName property of an entity via the IgnorePropertyAttribute:

 

 public class Customer : TableEntity
{
   [IgnoreProperty]
   public string FirstName { get; set; }
}

 

Compiled Serializers

When working with POCO types, previous releases of the SDK relied on reflection to discover all applicable properties for serialization / de-serialization at runtime. This process was both repetitive and computationally expensive. In 2.1 we are introducing support for Compiled Expressions, which allow the client to dynamically generate a LINQ expression at runtime for a given type. This allows the client to perform the reflection process once and then compile a lambda at runtime that handles all future reads and writes of a given entity type. In performance micro-benchmarks this approach is roughly 40x faster than the reflection-based approach.

All compiled expressions for read and write are held in static concurrent dictionaries on TableEntity. If you wish to disable this feature, simply set TableEntity.DisableCompiledSerializers = true;

Easily Serialize 3rd Party Objects

In some cases clients wish to serialize objects whose source they do not control, for example framework objects or objects from 3rd party libraries. In previous releases clients were required to write custom serialization logic for each type they wished to serialize. In the 2.1 release we are exposing the core serialization and de-serialization logic for any CLR type via the static TableEntity.[Read|Write]UserObject methods. This allows clients to easily persist and read back entity objects for types that do not derive from TableEntity or implement the ITableEntity interface. This pattern can also be especially useful when exposing DTO types via a service, as the client will no longer be required to maintain two entity types and marshal between them.
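
A hedged sketch of round-tripping a type that does not derive from TableEntity; the WriteUserObject / ReadUserObject shapes shown (producing and consuming an IDictionary<string, EntityProperty>) reflect our reading of the new surface, and OrderDto is a made-up example type:

 // A framework or 3rd party type we do not control.
 public class OrderDto
 {
     public string CustomerName { get; set; }
     public double Total { get; set; }
 }

 // Serialize: flatten the object's properties into EntityProperty values.
 OrderDto order = new OrderDto { CustomerName = "joe", Total = 99.95 };
 IDictionary<string, EntityProperty> properties =
     TableEntity.WriteUserObject(order, new OperationContext());

 // Persist via a DynamicTableEntity, then read it back into a fresh instance.
 DynamicTableEntity row = new DynamicTableEntity("orders", "1") { Properties = properties };
 currentTable.Execute(TableOperation.Insert(row));

 OrderDto roundTripped = new OrderDto();
 TableEntity.ReadUserObject(roundTripped, row.Properties, new OperationContext());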

Numerous Performance Improvements

As part of our ongoing focus on performance we have included numerous performance improvements across the APIs including parallel blob upload, table service layer, blob write streams, and more. We will provide more detailed analysis of the performance improvements in an upcoming blog post.

Windows Phone

The Windows Phone client is based on the same source code as the desktop client; however, there are two key differences due to platform limitations. The first is that the Windows Phone library does not expose synchronous methods, in order to keep applications fast and fluid. Additionally, the Windows Phone library does not provide MD5 support, as the platform does not expose an implementation of MD5; if your scenario requires it, you must validate the MD5 at the application layer. The Windows Phone library is currently in testing and will be published in the coming weeks. Please note that it will only be compatible with Windows Phone 8, not 7.x.

Summary

We have spent considerable effort in improving the storage client libraries in this release. We welcome any feedback you may have in the comments section below, the forums, or GitHub.

Joe Giardino

Resources

Getting the most out of Windows Azure Storage – TechEd NA ‘13

Nuget

Github

2.1 Complete Changelog

Comments

  • Anonymous
    July 13, 2013
    Where is the cors support?

  • Anonymous
    July 14, 2013
@Luiz, CORS will be released sometime this Fall.

  • Anonymous
    July 26, 2013
    I get an ArgumentNullException whenever I try to deserialize a 3rd party object with ReadUserObject if that object has something like a List<> in it. It serializes just fine, ignoring the property types that aren't supported. But you cannot deserialize.

  • Anonymous
    July 29, 2013
    @ Keith Murray Thanks for your comment. We have identified this issue and resolved it in the refresh release which will be available later this week.

  • Anonymous
    August 06, 2013
    The comment has been removed

  • Anonymous
    August 13, 2013
    Can I use IQueryable with DynamicTableEntity and complex conditions such as .Where(x => x.PartitionKey.StartsWith("something" && x.SomeProperty.TrimEnd(';').EndsWith("1234")))

  • Anonymous
    August 13, 2013
    The comment has been removed

  • Anonymous
    August 14, 2013
@Deyan / Ryan I am sorry for the delay in the Windows Phone 8 library. The blog noted it would be published in the coming weeks, but that has been slightly delayed. We are now targeting releasing it as part of the RC refresh for 2.1 in the same package. During this time frame ODataLib has been refactored to include a portable library for Windows Phone, so we have decided to move to that and restart testing (see ODataLib 5.6.0-rc1 on NuGet for more details). We will update this blog when the library is available publicly.

  • Anonymous
    August 14, 2013
@Amit Yes, you absolutely can do complex queries with DynamicTableEntity; however, your example uses a few methods that are not supported, such as StartsWith, TrimEnd, EndsWith, etc. The way to correctly do a "starts with" query is to use a lexically bounded query that specifies both upper and lower boundaries. For example, I would use PartitionKey gte "something" && PartitionKey lt "something{" (note '{' is 1 + 'z' in the ASCII table). The way to do these comparisons is via the String.CompareTo method. Additionally, when using a query of DynamicTableEntity type and addressing properties which are not included in ITableEntity (PartitionKey, RowKey, Timestamp), you need to access the property via the dictionary and use the correctly typed getter. The library will be able to correctly construct the filter string by analyzing the property name argument used for the dictionary accessor and the type of the getter. For example:
    var query = from ent in table.CreateQuery<DynamicTableEntity>()
                where ent.Properties["customerid"].StringValue == "customer_1"
                select ent;
    We have a draft of a more in-depth blog post that will detail specific examples of the IQueryable optimizations in the library.

  • Anonymous
    August 16, 2013
Are there any plans to introduce batching for Retrieve operations? (That would be a big performance win.) Are there plans to introduce/support compression? (Especially for moving large amounts of data.) Increasing the batch count would be huge too... at least support up to 4 MB.

  • Anonymous
    August 19, 2013
    Is there any sample code on how to handle continuations?

  • Anonymous
    August 20, 2013
    Hi. Please see the following Microsoft connect bug report: connect.microsoft.com/.../azure-sdk-compute-emulator-controller-actions-hit-multiple-times-in-asp-net-mvc-4-app Thanks.

  • Anonymous
    August 21, 2013
    The comment has been removed

  • Anonymous
    August 25, 2013
    System.Spatial seems to install OK with version 5.6.0 on WP8. So why are you forcing version 5.2.0 in the RC nugetpackage?

  • Anonymous
    August 26, 2013
Nothing has been announced. That being said, the JSON support coming by the end of CY13 will dramatically reduce payload sizes to and from the server, which will have a dramatic effect on latencies, especially on slower network connections. In some cases payloads are reduced 45%-65%, depending on the scenario and whether type metadata is returned by the server.

  • Anonymous
    August 28, 2013
Can someone provide a scenario where I would use the BufferManager? I understand the concept, but it seems strange that it is attached to one of the storage client classes. Also, will the JSON bits actually be out this fall?

  • Anonymous
    August 28, 2013
    The comment has been removed

  • Anonymous
    August 28, 2013
@Libin Below is a quick sample that will execute a query in segmented fashion and aggregate the results in a list:
    TableQuery<DynamicTableEntity> query = (from ent in testTable.CreateQuery<DynamicTableEntity>()
                                            where ent.PartitionKey == partitionKey
                                            select ent).AsTableQuery();
    TableContinuationToken continuationToken = null;
    List<DynamicTableEntity> entities = new List<DynamicTableEntity>();
    do
    {
        TableQuerySegment<DynamicTableEntity> segment = await query.ExecuteSegmentedAsync(continuationToken);
        continuationToken = segment.ContinuationToken;
        entities.AddRange(segment.Results);
    }
    while (continuationToken != null);
    Hope this helps. Note that you may also specify a maxResults value in the ExecuteSegmented method to use smaller pages depending on your scenario. joe

  • Anonymous
    August 28, 2013
@jgauffin 2.1 will utilize OData 5.2 due to constraints in the broader Azure SDK. The next release (2.2) will snap to the latest release in order to support newer features such as JSON, etc. For WP specifically, the OData team has moved to shipping a portable library that we have decided to snap to, as this will be the supported model going forward. This reset has caused some delays, but we hope to have the CTP of the WP library out shortly.

  • Anonymous
    August 28, 2013
@mike The BufferManager is there to provide reuse of buffers across any operations that need to potentially buffer data. For example, BlobWriteStream (available by calling a Cloud[Block|Page]Blob.OpenWrite method) will pre-buffer one block / page range at a time prior to executing a request to the server. This feature also helps support parallelism by allowing N parallel operations to happen on pre-buffered chunks of data. In this case the BufferManager can be used to create / maintain a much smaller number of buffers than simply newing / GC'ing them constantly. Table requests also require the ContentLength to be known, but this is unavailable prior to serialization as we cannot predict the size of the resulting OData formatted payload. As such, this data has to be pre-buffered and then sent for all requests; the BufferManager can help in this situation as well. The reason the IBufferManager is on the client objects is that the client reference is held by all associated objects (blobs, containers, tables, queues, etc.) and can therefore be leveraged by a wide number of objects in a convenient and centralized fashion. With the BufferManager in place, time in GC is dramatically reduced as not only do we create and GC fewer objects, but the buffers themselves are long lived and get promoted to a higher generation, causing subsequent GCs to be even quicker. I would recommend anyone doing heavy Blob upload or Tables traffic to enable this feature, but it is optional. Also note, a given BufferManager implementation does not have to guarantee that buffers are zeroed out, so please take this into consideration when handling sensitive information, as pieces of it may remain in memory for some time after the actual request is executed. (If this is a concern you can simply zero out the buffer in the IBufferManager implementation yourself.) joe

  • Anonymous
    September 03, 2013
    The comment has been removed

  • Anonymous
    September 29, 2013
    Where is the right place to post bugs? I'm currently experiencing some very peculiar behavior. Over a WiFi network, my Windows Phone 8 app behaves properly. However on my AT&T network, I receive "not found" errors when trying to call the CreateIfNotExistsAsync() method on a CloudQueue instance. Any help would be greatly appreciated.

  • Anonymous
    September 30, 2013
    @Brenton For bug tracking please raise an issue at the GitHub repo : github.com/.../issues We will look into this issue further and update.

  • Anonymous
    October 09, 2013
Hi, all. Do you have a release date for the next version of the Azure Storage Library? Hopefully this version will fully support Windows Phone 8. For the moment I get the same frustrating package installation errors as others have mentioned in the comments section, due to dependencies that are incompatible with the WP8 platform: Install-Package : Could not install package 'System.Spatial 5.2.0'. You are trying to install this package into a project that targets 'WindowsPhone,Version=v8.0'. Does anyone have any suggestions/workarounds that can help me until the next version comes around? Regards, Palchris

  • Anonymous
    October 10, 2013
    @palchris and @ctarmor Please install the package at www.nuget.org/.../WindowsAzure.Storage-Preview which can be used in a Windows Phone 8 or a Windows Store app project.

  • Anonymous
    October 13, 2013
    Thanks Serdar! The nuget package manager now installed the packages correctly into my wp8 project!

  • Anonymous
    October 28, 2013
    The comment has been removed

  • Anonymous
    October 29, 2013
    @IsakAvis These samples are now posted @ code.msdn.microsoft.com/Windows-Azure-Storage-675fe55b.