Deep dive on the offline support in the managed client SDK

Last week we announced a new feature in the Azure Mobile Services SDK (managed code only for now): support for offline handling of data. Previously, all table operations required an active internet connection; with this new support the application can store table operations in a local data store and, when connected, push the changes to the mobile service (and also pull changes from the service into the local table). The local data store is defined by an interface, so you can use whatever implementation you want, but we've also released a new NuGet package with a SQLite-based implementation of the store to help you get started quickly.

New table types: IMobileServiceSyncTable and IMobileServiceSyncTable<T>

One decision we made when implementing the offline support is that we wanted developers to be explicit about which data is offline and which is online. We considered implementing the support transparently, synchronizing data to the server whenever a connection could be established, but in most cases the "magic" we would implement would be wrong for somebody. Instead, we released an alpha version of the SDK with offline capabilities (meaning that when managing the NuGet packages you'll need to select the "Include prerelease" option in the combo box above the package names), and based on the feedback we get from all of you we'll decide the direction to go next (which could be implementing a full "auto-sync" framework).

In practice, that means that to use the offline feature you'll need to use a different kind of table. There are two new methods in the MobileServiceClient class, GetSyncTable(string tableName) and GetSyncTable<T>(), which return instances of the IMobileServiceSyncTable and IMobileServiceSyncTable<T> interfaces, respectively. They're in most ways very similar to the IMobileServiceTable and IMobileServiceTable<T> interfaces, exposing similar methods. The biggest difference is that the local (sync) tables have no overloads of the CRUD operations taking additional query string parameters (there's no HTTP request going on for operations on those tables). Other differences include the fact that sync tables only work with entities with string ids (entities with integer ids are not supported locally) and the shape of the response for insert and update operations (on regular tables, the responses can be any JSON value, including arrays and primitives; for sync tables they need to be objects).
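
For reference, the TodoItem class used in the snippets throughout this post can be as simple as the sketch below. The property-to-column mapping via Json.NET attributes is an assumption; the important part is that the id is a string (a Version property used for conflict handling is added near the end of the post):

using Newtonsoft.Json;

public class TodoItem
{
    // Sync tables require string ids; if left unset, an id is generated on insert.
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "text")]
    public string Text { get; set; }

    [JsonProperty(PropertyName = "complete")]
    public bool Complete { get; set; }
}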

One more thing which I think is important – a few people have asked me why the interface was named sync table rather than local table. The reason is that the interface name tells us something about its behavior. We're not talking simply about a local table; it's an object which can synchronize the state of the remote table (on Azure) with the table in the local store. We'll get into more details on the synchronization later.

Let’s get coding. To start, let’s try to insert an item into a sync table to see what happens…

var client = new MobileServiceClient(ApplicationUri, ApplicationKey);

var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);

var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);

When we run that, it doesn’t work. Instead, we get the following exception:

System.InvalidOperationException: SyncContext is not yet initialized.

The problem is that the SDK has no idea where to store the local data. Before using any of the sync operations, we first need to initialize the synchronization context of the client so that those operations can start working.

The synchronization context

In addition to the two new methods to get sync table instances from the client, there's a new property, SyncContext, which needs to be initialized with an instance of the actual local store where the data will be saved. Before you can use any of the local operations, the sync context in the client needs to be initialized with an IMobileServiceLocalStore object. That means you can define whatever mechanism you want to store the local data, but the large majority of developers don't need to go into that level of detail, so we've also released an implementation of that interface based on a SQLite database. To access that store implementation (in the class MobileServiceSQLiteStore) you'll need a new NuGet package, Azure Mobile Services SQLiteStore. On to fixing the code above. When we instantiate the store, we need to define the tables which will be used to store data locally. There are two ways to define the tables that the SDK will use: you can either pass a JObject instance containing the properties which will be stored to the DefineTable method (a sketch of that variant appears after the next example), or, even easier, you can use the DefineTable<T> method where you just pass the type as the generic parameter and the SDK will figure out the columns.

var client = new MobileServiceClient(ApplicationUri, ApplicationKey);

var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");

await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");

var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);

var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);

Now, if you run the code above it will work – a TodoItem table will be created in the local store and an item will be added to it.
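
If you'd rather not use the typed DefineTable<T> call, the table shape can also be described with a JObject, as mentioned above. Here's a minimal sketch; the column names mirror the TodoItem class, and the sample values only tell the store which column types to create:

var store = new MobileServiceSQLiteStore(StoreFileName);

// Describe the TodoItem table: property names become columns, and the
// sample values determine the column types.
store.DefineTable("TodoItem", new JObject
{
    { "id", string.Empty },
    { "text", string.Empty },
    { "complete", false }
});

await client.SyncContext.InitializeAsync(store);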

The synchronization context is mainly responsible for, well, synchronizing the data between the local database (represented by the local store) and the remote database (accessed via the Azure Mobile Service). This synchronization is done via an explicit push / pull mechanism which must be invoked by the developer – at this point we don’t have any “auto-sync” framework which will handle those calls automatically, but this feature may be implemented in a future version of the SDK.

So back to the synchronization context. Once we start calling operations on the local tables, those operations get queued up by the sync context. Those operations become "pending" and are persisted locally, so that even if the application is closed and reopened, the list of pending operations is retained. You can check the number of pending operations by looking at the PendingOperations property in the synchronization context. As more operations are executed on the local tables, the queue grows until there is a synchronization event (which we'll talk about more in the next section). Let's look at that property a little closer by expanding the example above and performing additional operations on the local table.

var client = new MobileServiceClient(ApplicationUri, ApplicationKey);

var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");

await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");

AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);

var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);

var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);

AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);

item = new TodoItem { Text = "Buy milk", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted another item into the local store: {0}", item.Id);

AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);

var thingsToDo = await table.Where(t => !t.Complete).Select(t => t.Text).ToListAsync();
AddToDebug("Things to do {0}", string.Join(", ", thingsToDo));

AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);

item.Complete = true;
await table.UpdateAsync(item);
AddToDebug("Updated item: {0}", item.Id);

AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);

As I mentioned before, the sync tables have basically the same API as "regular" (remote) tables, so queries, updates and inserts (as well as deletes, not shown above) look just like they do on regular tables. When we run the code above (assuming that the local store was empty) we'll get an output similar to the one shown below. It looks as expected – when we insert the first item the pending operation count goes to one; when we insert another item the count is incremented once more; when we read from the local table the count is not incremented – read operations are not synchronized. But there's one interesting aspect when we update one of the items we had just inserted: the number of operations in the queue does not change. The current implementation of the synchronization context "merges" pending operations for the same item, so that during synchronization only one operation (in this case, an insert) is sent to the server, carrying the most recent value of the item.

Defined table in the store
Initialized the sync context
Pending operations in the sync context queue: 0
Table: TodoItem
Inserted into local store: 9e61196d-55df-4869-8b30-4a4a6eb792f2
Pending operations in the sync context queue: 1
Inserted another item into the local store: c349b30d-d603-48db-8509-b2fd170f4499
Pending operations in the sync context queue: 2
Things to do Buy bread, Buy milk
Pending operations in the sync context queue: 2
Updated item: c349b30d-d603-48db-8509-b2fd170f4499
Pending operations in the sync context queue: 2

I talked about synchronization operations without introducing them. Let’s look at them now.

Push / pull / purge

There are three basic operations which can trigger a synchronization. The simplest of all is the PushAsync method on the synchronization context. Once that method is called, the changes which were performed on the local tables are sent over to the server. In the example below, there will be one more item on the server once the call to PushAsync completes (or maybe more, if there were other insert operations pending in the synchronization queue).

var localTable = client.GetSyncTable<TodoItem>();
var remoteTable = client.GetTable<TodoItem>();

var remoteItems = await remoteTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server: {0}", string.Join(", ", remoteItems));

var item = new TodoItem { Text = "Buy bread", Complete = false };
await localTable.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);

await client.SyncContext.PushAsync();
AddToDebug("Pushed the local changes to the server");

remoteItems = await remoteTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server: {0}", string.Join(", ", remoteItems));

Push is executed on the whole context, not on specific tables. It's implemented this way to support relationships between entities on the client side. For example, if you have an "Order" and an "OrderItem" table, you can insert an item into the first table and, with the id of that entity, insert the child items with the appropriate foreign key. When the operations are sent to the server, they are sent in order, so any FK relationships in the database will be satisfied. A sketch of that pattern is shown below.
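
Here's a minimal sketch of that scenario. The Order and OrderItem classes, their properties, and the foreign-key column are all assumptions made for illustration; the point is simply that both inserts are queued locally and sent, in order, by a single PushAsync call:

// Hypothetical classes for illustration; not part of the SDK or this post's sample app.
public class Order
{
    public string Id { get; set; }
    public string Customer { get; set; }
}

public class OrderItem
{
    public string Id { get; set; }
    public string OrderId { get; set; }   // assumed foreign key to the Order table
    public string Product { get; set; }
}

// Inside an async method, after DefineTable<Order>() / DefineTable<OrderItem>()
// and InitializeAsync have been called:
var orders = client.GetSyncTable<Order>();
var orderItems = client.GetSyncTable<OrderItem>();

var order = new Order { Customer = "Contoso" };
await orders.InsertAsync(order);   // the local insert assigns an id to the entity

await orderItems.InsertAsync(new OrderItem { OrderId = order.Id, Product = "Bread" });
await orderItems.InsertAsync(new OrderItem { OrderId = order.Id, Product = "Milk" });

// A single push sends the queued operations in order: the Order insert first,
// then the OrderItem inserts, so the FK relationship on the server is satisfied.
await client.SyncContext.PushAsync();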

The other operation which triggers a synchronization is a call to PullAsync on the local table. That call can either pull all items from the remote table or just a subset of them. Pulling only some items is often advisable, since stuffing everything from the (potentially large) remote database table into the (memory-constrained) local table can have bad performance implications. You can pass an OData-formatted query string to select which items to pull from the server (sketched after the next example), or you can use the (friendlier) LINQ expressions to build the query of items to be pulled.

var localTable = client.GetSyncTable<TodoItem>();
var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);

var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));
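
The same filter can be expressed as an OData query string instead of a LINQ expression. This is a sketch only, assuming the string overload of PullAsync available in this alpha release:

var localTable = client.GetSyncTable<TodoItem>();

// Pull only the incomplete items, expressed as an OData $filter instead of LINQ.
await localTable.PullAsync("$filter=complete eq false");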

One important thing to notice regarding pull operations – if there are items in the pending synchronization queue, those items are first pushed to the server, and only then does the pull operation take place. That prevents a scenario where an update is made to a local item but a pull overwrites the change locally, potentially leaving the data in an inconsistent state. That's the first synchronization rule: a pull triggers a push. In the example below, the insert operation for the "Buy milk" item will first be pushed to the server, then the items will be pulled into the local table.

await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");

var localTable = client.GetSyncTable<TodoItem>();
var item = new TodoItem { Text = "Buy milk", Complete = false };
await localTable.InsertAsync(item);

var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);

var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));

Another operation which triggers a synchronization event is a call to PurgeAsync on the local table. Often we want to clear the local cache to remove data which the application doesn't need anymore. For example, in the canonical TODO app we only display the items which are not complete, so there's no need to keep items which have already been completed in the local store. A call to purge such items can be done as shown below.

var localTable = client.GetSyncTable<TodoItem>();
await localTable.PurgeAsync(localTable.Where(t => t.Complete));

var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);

var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));

Notice that, just like in the pull case, a call to purge will first send any pending operations to the server (the second synchronization rule: a purge also triggers a push). This way, if we had marked an item as complete locally, we make sure that this information reaches the server before the item is removed from the local store.

Handling conflict errors

Until now we've looked at synchronization scenarios where everything works fine. There are cases, however, where errors happen. If there are multiple sources changing a single entity (such as a row in the database), you may get conflicts when a second update is attempted, since the version of the item will have changed (for more information see this document on the optimistic concurrency implementation on the server). In this case, a push operation fails. Take the code below: the item is updated in the remote table, but when we try to push the update made to the local item, the versions of the items will not match, so the push operation fails and a MobileServicePushFailedException is thrown. The exception has a list of all errors which happened for the individual elements in the synchronization queue (a sketch of inspecting them follows the example).

var localTable = client.GetSyncTable<TodoItem>();
var remoteTable = client.GetTable<TodoItem>();
await localTable.PullAsync();

var firstItem = (await localTable.Take(1).ToEnumerableAsync()).FirstOrDefault();
var firstItemCopy = new TodoItem
{
    Id = firstItem.Id,
    Version = firstItem.Version,
    Text = firstItem.Text,
    Complete = firstItem.Complete
};

firstItemCopy.Text = "Modified";
await remoteTable.UpdateAsync(firstItemCopy);
AddToDebug("Updated the item on the server");

firstItem.Text = "Modified locally";
await localTable.UpdateAsync(firstItem);
AddToDebug("Updated the same item in the local table");

AddToDebug("Number of pending operations: {0}", client.SyncContext.PendingOperations);
await client.SyncContext.PushAsync();
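
To inspect those per-item errors you can wrap the push in a try/catch. Here's a minimal sketch; the PushResult property used below is an assumption about how the exception exposes the completion result in this release:

try
{
    await client.SyncContext.PushAsync();
}
catch (MobileServicePushFailedException ex)
{
    // The completion result carries the overall push status plus one error
    // entry per operation which could not be sent to the server.
    AddToDebug("Push failed: {0}", ex.PushResult.Status);
    foreach (var error in ex.PushResult.Errors)
    {
        AddToDebug("  Error for one operation: {0}", error.Status);
    }
}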

There are scenarios where you want to catch and deal with the synchronization conflicts on the client. You can control all the synchronization operations by implementing the IMobileServiceSyncHandler interface and passing an instance of it when initializing the context. For example, here is an implementation of a sync handler which traces all the operations as they happen.

class MySyncHandler : IMobileServiceSyncHandler
{
    MainPage page;

    public MySyncHandler(MainPage page)
    {
        this.page = page;
    }

    public Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        page.AddToDebug("Executing operation '{0}' for table '{1}'", operation.Kind, operation.Table.Name);
        return operation.ExecuteAsync();
    }

    public Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        page.AddToDebug("Push result: {0}", result.Status);
        foreach (var error in result.Errors)
        {
            page.AddToDebug(" Push error: {0}", error.Status);
        }

        return Task.FromResult(0);
    }
}

And we can use this synchronization handler by passing it to the overload of InitializeAsync in the sync context, as shown below:

var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");

var syncHandler = new MySyncHandler(this);
await client.SyncContext.InitializeAsync(store, syncHandler);
AddToDebug("Initialized the sync context");

This handler implementation doesn't do much, but we can catch the exception which is thrown by the client when the server returns a Precondition Failed response (HTTP status code 412) and retry the call after updating the version on the client.

class MySyncHandler : IMobileServiceSyncHandler
{
    MainPage page;
    IMobileServiceClient client;

    public MySyncHandler(IMobileServiceClient client, MainPage page)
    {
        this.client = client;
        this.page = page;
    }

    public async Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        JObject result = null;
        MobileServicePreconditionFailedException conflictError = null;
        do
        {
            // Reset the error before each attempt so the loop exits once a retry succeeds
            conflictError = null;
            try
            {
                result = await operation.ExecuteAsync();
            }
            catch (MobileServicePreconditionFailedException e)
            {
                conflictError = e;
            }

            if (conflictError != null)
            {
                // There was a conflict on the server. Let's "fix" it by
                // forcing the client entity
                JObject serverItem = conflictError.Value;

                // In most cases, the server will return the server item in the request body
                // when a Precondition Failed is returned, but it's not guaranteed for all
                // backend types.
                if (serverItem == null)
                {
                    serverItem = (JObject)(await operation.Table.LookupAsync((string)operation.Item[MobileServiceSystemColumns.Id]));
                }

                // Now update the local item with the server version
                operation.Item[MobileServiceSystemColumns.Version] = serverItem[MobileServiceSystemColumns.Version];
            }
        } while (conflictError != null);

        return result;
    }

    public Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        return Task.FromResult(0);
    }
}

And this is how we can resolve conflicts on the client. This sample shows another conflict handling policy (letting the user choose which version to keep), but the structure is similar to the one above. A final note about resolving synchronization conflicts: to use the optimistic concurrency feature (which prevents one client from overriding another's modifications to the same row), you'll need to define a version column in the class used on the client, as shown below.
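
For reference, this is roughly what that looks like on the TodoItem class used above – a sketch which maps a Version property to the __version system column via Json.NET (the property name itself is up to you):

// Added to the TodoItem class shown earlier: a property mapped to the
// __version system column, which is what the server uses to detect
// conflicting updates (optimistic concurrency).
[JsonProperty(PropertyName = "__version")]
public string Version { get; set; }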

Advanced features

Here's some additional information about this release which I think is interesting. Unlike remote tables, local tables can store arbitrary types, including complex ones (for example, a "Person" class can have an "Address" property); when stored in the local table, the complex property is stored as a JSON-serialized version of its value. You won't be able to query on those types (for example, list all people whose "Address.City" property is "Springfield"), but they can be stored and retrieved without any extra code, as sketched below.
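
A minimal sketch of that scenario – the Person and Address classes are hypothetical, used only to illustrate that the complex Address property round-trips through the local store as serialized JSON:

// Hypothetical classes for illustration.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class Person
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Address Address { get; set; }   // stored locally as a JSON string
}

// After DefineTable<Person>() and InitializeAsync:
var people = client.GetSyncTable<Person>();
await people.InsertAsync(new Person
{
    Name = "Homer",
    Address = new Address { Street = "742 Evergreen Terrace", City = "Springfield" }
});

// The item can be read back with the Address rehydrated from its JSON value,
// but you cannot filter on Address.City in a local query.
var stored = await people.Where(p => p.Name == "Homer").ToListAsync();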

When writing a sync handler (like the one used in the previous section to resolve conflicts) you can also abort the whole push operation if you hit an error for which you don't want to continue. In that case, you can call the AbortPush method on the IMobileServiceTableOperation instance passed to the ExecuteTableOperationAsync method, as in the sketch below.
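
For example, a handler could stop pushing as soon as it sees an authentication failure. This is a sketch only – the handler shape matches the earlier examples, but the choice of error and the rethrow after AbortPush are assumptions for illustration:

public async Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
{
    try
    {
        return await operation.ExecuteAsync();
    }
    catch (MobileServiceInvalidOperationException ex)
    {
        if (ex.Response != null && ex.Response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            // No point in sending the remaining operations while the user isn't
            // authenticated; stop the whole push. The queued operations remain
            // pending and can be pushed again after a successful login.
            operation.AbortPush();
        }

        throw;
    }
}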

One more thing: the implementation of the local store uses the SQLite database, which ships as x86/ARM-only binaries. The project cannot be configured as "AnyCPU"; it needs to target a specific architecture.

Wrapping up

In this release we introduced offline capabilities in the .NET SDK for Mobile Services. We released it as an alpha NuGet package so you can try it and give us feedback on what works, what doesn't and what we can improve. Please let us know in the comments, in our forum or via Twitter @AzureMobile.

Comments

  • Anonymous
    April 08, 2014
    Is there anything I need to setup or configure on SQL Azure or on the Mobile service in order for offline data to work?
  • Anonymous
    April 09, 2014
    This looks great. I ended up rolling my own offline capabilities when I started my Win 8 app in 2012 using SQLite and Tim Heuer's SQLite-NET library, but it will be nice to have a more streamlined (and standardized) approach like this for newer projects.
  • Anonymous
    April 10, 2014
    Michael, no, the tables you create in the mobile service should work just fine. The offline data story requires tables with string ids (i.e., "old" tables with integer ids will not work), but that's the default in the portal.
  • Anonymous
    April 10, 2014
    Thank you Carlos. Just one more question. Is disconnected data available on Windows Phone too?
  • Anonymous
    April 11, 2014
    Yes, it also works for windows phone. You need to have SQLite installed for Windows Phone (sqlite.org/download.html) to get it to work, though.
  • Anonymous
    April 17, 2014
    Thank you Carlos. I am using WAMS as a backend for Sencha ExtJS applications. Will it be available for JavaScript in the next weeks? What do you suggest for using offline data in a Sencha application? Thanks for all.
  • Anonymous
    May 03, 2014
    What about scenarios where you have a single azure mobile service serving a large number of mobile app users, each with separate client data. I am building an app with SQLite locally and azure mobile services remotely, with user authentication and server scripts that limit data to a users own data. Will this very useful offline feature work in this scenario, and if so, are there any special considerations
  • Anonymous
    May 06, 2014
    Excellent posting - thanks. I am trying to get  this to work with Windows phone 8.1 but I cannot find a version of  SQLitePCL which will install for a WindowsPhoneApp 8.1. Do you know if this is available anywhere yet?
  • Anonymous
    May 22, 2014
    Hi Dean, I'm about to build an app with exactly the same scenario: one AMS, a .NET backend service, authenticated users, UserId stored on server tables. I want to synchronize only the user's elements into SQLite for offline use, but also be able to access the full table when online. To do that, on the server side, I was thinking about creating one table like CustomElements which holds all the data and another like MyCustomElements which is always empty. Then I plan to modify the MyCustomElementsController so that it gets user-filtered data from the CustomElements table. On the client side, I'll create only the MyCustomElements table, which synchronizes with the table of the same name on the server. Well, I haven't tried it yet, but I think it should work. I don't know if there's another solution, like creating a custom (non-table) controller and synchronizing a local app table with it... Does anyone know how to deal with this real world synchronization scenario?
  • Anonymous
    June 15, 2014
    Hi Carlos, thank you for the detailed explanation of the new features. Looking forward to the final release. Are there any plans for supporting SterlingDB? If so, an estimated date?
  • Anonymous
    March 30, 2015
    Is there a way to understand how much data we're downloading in order to have progress bar info?