Windows Azure Storage Logging: Using Logs to Track Storage Requests

Windows Azure Storage Logging provides a trace of the executed requests against your storage account (Blobs, Tables and Queues). It allows you to monitor requests to your storage accounts, understand the performance of individual requests, analyze usage of specific containers and blobs, and debug storage APIs at a request level.

What is logged?

You control what types of requests are logged for your account. We categorize requests into three kinds of operations: READ, WRITE and DELETE. You can set the logging properties for each service to indicate the types of operations you are interested in. For example, opting to have delete requests logged for the blob service will result in blob or container deletes being recorded.  Similarly, logging reads and writes will capture reads/writes on properties or on the actual objects in the service of interest.  Each of these options must be explicitly set to true (data is captured) or false (no data is captured).

What requests are logged?

The following authenticated and anonymous requests are logged:

  • Authenticated Requests:
    • Signed requests to user data. This includes failed requests, such as those that were throttled or encountered network exceptions.
    • SAS (Shared Access Signature) requests whose authentication is validated
    • User requests to analytics data
  • Anonymous Requests:
    • Successful anonymous requests.
    • If an anonymous request results in one of the following errors, it will be logged:
      • Server errors – typically seen as InternalServerError or ServerBusy
      • Client and Server Timeout errors (seen as InternalServerError with StorageErrorCode = Timeout)
      • GET requests that fail with 304 (Not Modified)

The following requests are not logged:

  • Authenticated Requests:
    • System requests to write and delete the analytics data
  • Anonymous Requests:
    • Failed anonymous requests (other than those listed above) are not logged so that the logging system is not spammed with invalid anonymous requests.

During normal operation all requests are logged, but it is important to note that logging is provided on a best-effort basis. We do not guarantee that every request will be logged, because the log data is buffered in memory at the storage front-ends before being written out; if a front-end role is restarted, its buffer of logs is lost.

What data do my logs contain and in what format?

Each request is represented by one or more log records. A log record is a single line in the log blob, and the fields in the log record are ‘;’ separated. We opted for a ‘;’ separated log file rather than a custom format or XML so that existing tools like LogParser and Excel can be easily extended or used to analyze the logs. Any field that may contain ‘;’ is enclosed in quotes and HTML encoded (using the HtmlEncode method); a minimal parsing sketch is shown after the field list below.

Each record will contain the following fields:

  1. Log Version (string): The log version. We currently use “1.0”.
  2. Transaction Start Time (timestamp): The UTC time at which the request was received by our service.
  3. REST Operation Type (string): One of the REST APIs. See the “Operation Type: What APIs are logged?” section below for more details.
  4. Request Status (string): The status of the request operation. See the Transaction Status section below for more details.
  5. HTTP Status Code (string): E.g. “200”. This can also be “Unknown” in cases where communication with the client is interrupted before we can set the status code.
  6. E2E Latency (duration): The total time in milliseconds taken to complete a request in the Windows Azure Storage Service. This value includes the required processing time within Windows Azure Storage to read the request, send the response, and receive acknowledgement of the response.
  7. Server Latency (duration): The total processing time in milliseconds taken by the Windows Azure Storage Service to process a request. This value does not include the network latency specified in E2E Latency.
  8. Authentication Type (string): authenticated, SAS, or anonymous.
  9. Requestor Account Name (string): The account making the request. For anonymous and SAS requests this will be left empty.
  10. Owner Account Name (string): The owner of the object(s) being accessed.
  11. Service Type (string): The service the request was for (blob, queue, table).
  12. Request URL (string): The full request URL that came into the service – e.g. “PUT https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=block&blockId=”. This is always quoted.
  13. Object Key (string): This is the key of the object the request is operating on. E.g. for Blob: “/myaccount/mycontainer/myblob”. E.g. for Queue: “/myaccount/myqueue”. E.g. For Table: “/myaccount/mytable/partitionKey/rowKey”. Note: If custom domain names are used, we still have the actual account name in this key, not the domain name. This field is always quoted.
  14. Request ID (guid): The x-ms-request-id of the request. This is the request id assigned by the service.
  15. Operation Number (int): A unique number for each operation executed for the request that is logged. Most requests have a single operation numbered “0” (see examples below), but some operations produce multiple entries for a single request:
    1. Copy Blob has 3 entries in total, and the operation number distinguishes them. The log entry for the copy itself has operation number “0”; the source read and destination write have “1” and “2” respectively.
    2. Table Batch command. For example, for a batch command with two Inserts, the batch has “0” for the operation number, the first Insert has “1”, and the second Insert has “2”.
  16. Client IP (string): The client IP from which the request came.
  17. Request Version (string): The x-ms-version used to execute the request. This is the same x-ms-version response header that we return.
  18. Request Header Size (long): The size of the request header.
  19. Request Packet Size (long): The size of the request payload read by the service.
  20. Response Header Size (long): The size of the response header.
  21. Response Packet Size (long): The size of the response payload written by the service.
    NOTE: The above request and response sizes may not be filled in if a request fails.
  22. Request Content Length (long): The value of the Content-Length header. This should be the same as the Request Packet Size (except in error scenarios) and helps confirm the content length sent by clients.
  23. Request MD5 (string): The value of Content-MD5 (or x-ms-content-md5) header passed in the request. This is the MD5 that represents the content transmitted over the wire. For PutBlockList operation, this means that the value stored is for the content of the PutBlockList and not the blob itself.
  24. Server MD5 (string): The MD5 value evaluated on the server. For PutBlob, PutPage, and PutBlock, we store the server-side evaluated Content-MD5 in this field even if an MD5 was not sent in the request.
  25. ETag (string): For objects for which an ETag is returned, the ETag of the object is logged. Please note that we will not log this for operations that can return multiple objects. This field is always quoted.
  26. Last Modified Time (DateTime): For objects where a Last Modified Time (LMT) is returned, it will be logged. If the LMT is not returned, this field will be empty. Please note that we will not log this for operations that can return multiple objects.
  27. ConditionsUsed (string): Semicolon-separated list of ConditionName=value pairs. This field is always quoted. ConditionName can be one of the following:
    1. If-Modified-Since
    2. If-Unmodified-Since
    3. If-Match
    4. If-None-Match
  28. User Agent (string): The User-Agent header.  
  29. Referrer (string): The “Referrer” header. We log up to the first 1 KB of characters.
  30. Client Request ID (string): Custom logging information that can be passed by the user via the x-ms-client-request-id header (see below for more details). This is opaque to the service and has a limit of 1 KB. This field is always quoted.
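
To give a concrete idea of how these records can be consumed, here is a minimal C# parsing sketch (sample code, not an official parser). It splits a log line on ‘;’ while respecting quoted fields and HTML-decodes each field as described above; fields with embedded quotes (such as ETags) may need sturdier handling. Per the versioning notes later in this post, read only the columns you expect and ignore any extras.

 using System;
using System.Collections.Generic;
using System.Text;
using System.Web; // reference System.Web.dll for HttpUtility.HtmlDecode

public static class LogLineParser
{
    // Splits a single log line into its fields, honoring '"' quoting and
    // HTML-decoding each field.
    public static List<string> Split(string line)
    {
        List<string> fields = new List<string>();
        StringBuilder current = new StringBuilder();
        bool inQuotes = false;

        foreach (char c in line)
        {
            if (c == '"')
            {
                inQuotes = !inQuotes; // toggle quoted-field state
            }
            else if (c == ';' && !inQuotes)
            {
                fields.Add(HttpUtility.HtmlDecode(current.ToString()));
                current.Length = 0;
            }
            else
            {
                current.Append(c);
            }
        }

        fields.Add(HttpUtility.HtmlDecode(current.ToString()));
        return fields;
    }
}

For example, applied to the Put Blob line in the examples below, Split yields “1.0” as field 0 (log version) and “PutBlob” as field 2 (operation type).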

 

Examples

Put Blob
1.0;2011-07-28T18:02:40.6271789Z;PutBlob;Success;201;28;21;authenticated;sally;sally;blob;"https://sally.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/sally/thumbnails/lake.jpg";fb658ee6-6123-41f5-81e2-4bfdc178fea3;0;201.9.10.20;2009-09-19;438;100;223;0;100;;"66CbMXKirxDeTr82SXBKbg==";"0x8CE1B67AD25AA05";Thursday, 28-Jul-11 18:02:40 GMT;;;;"7/28/2011 6:02:40 PM ab970a57-4a49-45c4-baa9-20b687941e32"

Anonymous Get Blob
1.0;2011-07-28T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10;anonymous;;sally;blob;"https://sally.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/sally/thumbnails/lake.jpg";a84aa705-8a85-48c5-b064-b43bd22979c3;0;123.100.2.10;2009-09-19;252;0;265;100;0;;;"0x8CE1B6EA95033D5";Thursday, 28-Jul-11 18:52:40 GMT;;;;"7/28/2011 6:52:40 PM ba98eb12-700b-4d53-9230-33a3330571fc"

Copy Blob
Copy Blob results in 3 log lines. The request id will be the same for all three, but the operation number (the field after the request id in the examples below) is incremented. The line with operation number 0 represents the entire copy blob operation. Operation number 1 represents the retrieval of the source blob for the copy; this operation is called CopyBlobSource. Operation number 2 represents the destination blob information; this operation is called CopyBlobDestination. The CopyBlobSource entry will not have information like request size or response size set, and is meant only to provide information such as the name and the conditions used for the source blob.

1.0;2011-07-28T18:02:40.6526789Z;CopyBlob;Success;201;28;28;authenticated;account8ce1b67a9e80b35;sally;blob;"https://sally.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/sally/thumbnails/lakebck.jpg";85ba10a5-b7e2-495e-8033-588e08628c5d;0;268.20.203.21;2009-09-19;505;0;188;0;0;;;"0x8CE1B67AD473BC5";Thursday, 28-Jul-11 18:02:40 GMT;;;;"7/28/2011 6:02:40 PM 683803d3-538f-4ba8-bc7c-24c83aca5b1a"

1.0;2011-07-28T18:02:40.6526789Z;CopyBlobSource;Success;201;28;28;authenticated;sally;sally;blob;"https://sally.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/sally/thumbnails/d1c77aca-cf65-4166-9401-0fd3df7cf754";85ba10a5-b7e2-495e-8033-588e08628c5d;1;268.20.203.21;2009-09-19;505;0;188;0;0;;;;;;;;"7/28/2011 6:02:40 PM 683803d3-538f-4ba8-bc7c-24c83aca5b1a"

1.0;2011-07-28T18:02:40.6526789Z;CopyBlobDestination;Success;201;28;28;authenticated;sally;sally;blob;"https://sally.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/sally/thumbnails/lakebck.jpg";85ba10a5-b7e2-495e-8033-588e08628c5d;2;268.20.203.21;2009-09-19;505;0;188;0;0;;;;;;;;"7/28/2011 6:02:40 PM 683803d3-538f-4ba8-bc7c-24c83aca5b1a"

Table Batch Request (Entity Group Transaction) with 2 inserts
The first record represents the batch request, and the latter two represent the actual inserts. NOTE: the key for the batch is the key of the first command in the batch. The operation number is used to identify the individual commands in the batch.

1.0;2011-07-28T18:02:40.9701789Z;EntityGroupTransaction;Success;202;104;61;authenticated;sally;sally;table;"https://sally.table.core.windows.net/$batch";"/sally";b59c0c76-dc04-48b7-9235-80124f0066db;0;268.20.203.21;2009-09-19;520;100918;235;189150;100918;;;;;;;;"7/28/2011 6:02:40 PM 70b9cb5c-8e2f-4e8a-bf52-7b79fcdc56e7"

1.0;2011-07-28T18:02:40.9701789Z;InsertEntity;Success;202;104;61;authenticated;sally;sally;table;"https://sally.table.core.windows.net/$batch";"/sally/orders/5f1a7147-28df-4398-acf9-7d587c828df9/a9225cef-04aa-4bbb-bf3b-b0391e3c436e";b59c0c76-dc04-48b7-9235-80124f0066db;1;268.20.203.21;2009-09-19;520;100918;235;189150;100918;;;"W/"datetime'2011-07-28T18%3A02%3A41.0086789Z'"";Thursday, 28-Jul-11 18:02:41 GMT;;;;"7/28/2011 6:02:40 PM 70b9cb5c-8e2f-4e8a-bf52-7b79fcdc56e7"

1.0;2011-07-28T18:02:40.9701789Z;InsertEntity;Success;202;104;61;authenticated;sally;sally;table;"https://sally.table.core.windows.net/$batch";"/sally/orders/5f1a7147-28df-4398-acf9-7d587c828df9/a9225cef-04aa-4bbb-bf3b-b0391e3c435d";b59c0c76-dc04-48b7-9235-80124f0066db;2;268.20.203.21;2009-09-19;520;100918;235;189150;100918;;;"W/"datetime'2011-07-28T18%3A02%3A41.0086789Z'"";Thursday, 28-Jul-11 18:02:41 GMT;;;;"7/28/2011 6:02:40 PM 70b9cb5c-8e2f-4e8a-bf52-7b79fcdc56e7"

 

Operation Type: What APIs are logged?

The APIs recorded in the logs are listed by service below and match the REST APIs for Windows Azure Storage. Note that for requests that execute multiple operations (e.g. Batch), the log will contain multiple records for the request: a main record (with operation number “0”) and an individual record for each sub-operation.

Blob Service APIs
  • AcquireLease
  • BreakLease
  • ClearPage
  • CopyBlob
    • CopyBlobSource: Information about the source that was copied and is used only for logs
    • CopyBlobDestination: Information about the destination and is used only for logs
  • CreateContainer
  • DeleteBlob
  • DeleteContainer
  • GetBlob
  • GetBlobMetadata
  • GetBlobProperties
  • GetBlockList
  • GetContainerACL
  • GetContainerMetadata
  • GetContainerProperties
  • GetLeaseInfo
  • GetPageRegions
  • LeaseBlob
  • ListBlobs
  • ListContainers
  • PutBlob
  • PutBlockList
  • PutBlock
  • PutPage
  • ReleaseLease
  • RenewLease
  • SetBlobMetadata
  • SetBlobProperties
  • SetContainerACL
  • SetContainerMetadata
  • SnapshotBlob
  • SetBlobServiceProperties
  • GetBlobServiceProperties

Queue Service APIs
  • ClearMessages
  • CreateQueue
  • DeleteQueue
  • DeleteMessage
  • GetQueueMetadata
  • GetQueue
  • GetMessage
  • GetMessages
  • ListQueues
  • PeekMessage
  • PeekMessages
  • PutMessage
  • SetQueueMetadata
  • SetQueueServiceProperties
  • GetQueueServiceProperties

Table Service APIs

  • EntityGroupTransaction
  • CreateTable
  • DeleteTable
  • DeleteEntity
  • InsertEntity
  • QueryEntity
  • QueryEntities
  • QueryTable
  • QueryTables
  • UpdateEntity
  • MergeEntity
  • SetTableServiceProperties
  • GetTableServiceProperties

 

Transaction Status

The following are the different statuses (string values) that can be written to your log.

  • Success: Indicates the request was successful.
  • AnonymousSuccess: Indicates that the anonymous request was successful.
  • SASSuccess: Indicates that the SAS request was successful.
  • ThrottlingError: Indicates that the request was throttled with HTTP status code 503 (ServerBusy). The user is expected to back off on throttling errors.
  • AnonymousThrottlingError: Indicates that the anonymous request was throttled with HTTP status code 503 (ServerBusy). The user is expected to back off on throttling errors.
  • SASThrottlingError: Indicates that the SAS request was throttled with HTTP status code 503 (ServerBusy). The user is expected to back off on throttling errors.
  • ClientTimeoutError: Indicates that the authenticated request timed out. These are seen as timeout errors or HTTP status code 500 with storage error code “Timeout”. A timeout is marked as a client-side timeout error when the user’s network IO is slow and the timeout provided was less than the expected value for a slow network. In other words, if a request spends most of its time reading from or writing to the client and does not allow sufficient time for the server to process it, we term the timeout a ClientTimeoutError. To determine the minimum time required by a server to process a request, please refer to the times posted here. Any other timeout will be deemed a ServerTimeoutError.
  • AnonymousClientTimeoutError: Indicates that the anonymous request timed out, under the same client-side timeout conditions described for ClientTimeoutError. Any other timeout will be deemed an AnonymousServerTimeoutError.
  • SASClientTimeoutError: Indicates that the SAS request timed out, under the same client-side timeout conditions described for ClientTimeoutError. Any other timeout will be deemed a SASServerTimeoutError.
  • ServerTimeoutError: Indicates that the request timed out because of a problem on the server end. These are seen as timeout errors or HTTP status code 500 with storage error code “Timeout”. Users are expected to back off on these errors.
  • AnonymousServerTimeoutError: Indicates that the anonymous request timed out because of a problem on the server end. These are seen as timeout errors or HTTP status code 500 with storage error code “Timeout”. Users are expected to back off on these errors.
  • SASServerTimeoutError: Indicates that the SAS request timed out because of a problem on the server end. These are seen as timeout errors or HTTP status code 500 with storage error code “Timeout”. Users are expected to back off on these errors.
  • ClientOtherError: Indicates that the authenticated request failed as expected. Examples are “resource already exists” and “resource not found”.
  • SASClientOtherError: Indicates that the SAS request failed as expected. Examples are “resource already exists” and “resource not found”.
  • AnonymousClientOtherError: Indicates that the anonymous request failed a precondition check (like If-Modified-Since) for a GET request. Example: a conditional GET request that fails the check.
  • ServerOtherError: Indicates that the authenticated request failed due to an unknown server error. These are typically HTTP status code 500 with a storage error code other than Timeout.
  • AnonymousServerOtherError: Indicates that the anonymous request failed due to an unknown server error. These are typically HTTP status code 500 with a storage error code other than Timeout.
  • SASServerOtherError: Indicates that the SAS request failed due to an unknown server error. These are typically HTTP status code 500 with a storage error code other than Timeout.
  • AuthorizationError: Indicates that the authenticated request failed an authorization check. Example: write requests from users to logs under $logs are treated as an authorization error.
  • SASAuthorizationError: Indicates that the SAS request failed an authorization check. Example: a write request using a SAS that grants only read access is treated as an authorization error.
  • NetworkError: Indicates that the authenticated request failed because of a network error. Example: network errors occur when a user prematurely closes the connection before the timeout expires, or when there are problems in an intermediate switch.
  • AnonymousNetworkError: Indicates that the anonymous request failed because of a network error. Example: network errors occur when a user prematurely closes the connection before the timeout expires, or when there are problems in an intermediate switch.
  • SASNetworkError: Indicates that the SAS request failed because of a network error. Example: network errors occur when a user prematurely closes the connection before the timeout expires, or when there are problems in an intermediate switch.

 

Client Request Id

One of the logged fields is called the Client Request Id. A client can choose to pass this client-assigned id, up to 1 KB in size, as the HTTP header “x-ms-client-request-id” with every request, and it will be logged for the request. Note that if you use the optional client request id header, it is included in the canonicalized headers used for signing the request, since all headers starting with “x-ms” are part of the canonicalization.

Since this is treated as an id that a client associates with a request, it is very helpful for investigating requests that failed due to network or timeout errors. For example, you can search the log for a given client request id to see if the request timed out, and check whether the E2E latency indicates a slow network connection. As noted above, some requests may not get logged due to node failures.
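
As an illustration, here is a minimal sketch of attaching the header using the same raw HttpWebRequest pattern as the extension code later in this post (the account, container, and blob names and the credentials variable are placeholder assumptions):

 // credentials is assumed to be a StorageCredentialsAccountAndKey for the account.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
    "https://myaccount.blob.core.windows.net/mycontainer/myblob");
request.Method = "GET";
request.Headers.Add("x-ms-version", "2009-09-19");

// The client request id is opaque to the service and will appear in the log record.
// Since it is an x-ms header, add it before signing so it is part of the signature.
request.Headers.Add("x-ms-client-request-id", Guid.NewGuid().ToString());

credentials.SignRequest(request);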

Where can I find the logs and how does the system store them?

The analytics logging data is stored as block blobs in a container called $logs in the blob namespace of the storage account being logged. The $logs container is created automatically when you turn logging on; once created, this container cannot be deleted, though the blobs stored in it can be deleted. The $logs container can be accessed via the URL “https://<accountname>.blob.core.windows.net/$logs”. Note that normal container listing operations will not list the $logs container, but you can list the blobs inside the $logs container itself. Logs are organized in the $logs namespace per hour by service type (Blob, Table and Queue). Log entries are written only if there are eligible requests to the service and the operation type matches a type opted into for logging. For example, if there was no table activity on an account for an hour but there was blob activity, no logs would be created for the table service but there would be some for the blob service. If in the next hour there is table activity, table logs would be created for that hour.

We use block blobs to store the log entries, since block blobs represent files better than page blobs. In addition, the two-phase write semantics of block blobs allow our system to write a set of log entries as a block in the block blob. Log entries are accumulated and written to a block when the size reaches 4 MB or when up to 5 minutes have elapsed since the entries were last flushed to a block. The system commits the block blob if the size of the uncommitted blob reaches 150 MB or if up to 5 minutes have elapsed since the first block was uploaded, whichever is reached first. Once a blob is committed, it is not updated with any more blocks of log entries. Since a block blob is available to read only after the commit operation, you can be assured that once a log blob is committed it will never be updated again.

NOTE: Applications should not take any dependency on the above-mentioned size and time triggers for flushing a log entry or committing the blob, as they can change without notice.

What is the naming convention used for logs?

Logging in a distributed system is a challenging problem. What makes it challenging is that many servers can process requests for a single account, and hence all of them can be sources of log entries. Logs from the various sources need to be combined into a single log. Moreover, clock skew cannot be ruled out, and the number of log entries produced for a single account in a distributed system can easily run into thousands per second. To ensure that these logs can be processed efficiently despite these challenges, we use a naming scheme and store additional metadata for the logs that allow easy log processing.

The log name under the $logs container will have the following format:

<service name>/YYYY/MM/DD/hhmm/<Counter>.log

  • Service Name: “blob”, “table”, “queue”
  • YYYY – The four digit year for the log
  • MM – The two digit month for the log
  • DD – The two digit day for the log
  • hh – The two digit hour (24 hour format) representing the starting hour for all the logs. All timestamps are in UTC.
  • mm – The two digit number representing the starting minute for all the logs. This is always 00 in this version and is kept for future use.
  • Counter – A zero-based counter, since there can be multiple logs generated for a single hour. The counter is padded to 6 digits. The counter progresses based on the last log’s counter value within a given hour; hence, if you delete the last log blob, the same blob name may repeat. It is therefore recommended not to delete log blobs right away.

The following are properties of the logs:

  • A request is logged based on when it ends. For example, if a request starts at 13:57 and lasts for 5 minutes, it will make it into the log corresponding to the time it ended, and will therefore appear in a log with hhmm=1400.
  • Log entries can be recorded out of order, which implies that inspecting just the first and last log entries is not sufficient to determine whether the log contains the time range you are interested in.

To aid log analysis we store the following metadata for each of the log blobs:

  • LogType = The types of log entries that the log contains, described as a combination of read, write and delete. The types are comma separated. This allows you to download only the blobs that contain the operation types you are interested in.
  • StartTime = The minimum time of a log entry in the log, of the form YYYY-MM-DDThh:mm:ssZ. This represents the minimum start time of a request logged in the blob.
  • EndTime = The maximum time of a log entry in the log, of the form YYYY-MM-DDThh:mm:ssZ. This represents the maximum start time of a request logged in the blob.
  • LogVersion = The version of the log format.  Currently 1.0. This can be used to process a given blob as all entries in the blob will conform to this version.

With the above, you can list all of the blobs with the “include=metadata” option to quickly see which log blobs cover the time range you want to process.
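
As a sketch of that pattern (sample code assuming the same v1.x StorageClient library used later in this post; the account variable is a placeholder), the following lists one hour of blob-service logs with their metadata and prints the ones whose [StartTime, EndTime] range overlaps a window of interest:

 CloudBlobClient blobClient = account.CreateCloudBlobClient();

BlobRequestOptions options = new BlobRequestOptions
{
    UseFlatBlobListing = true,
    // the equivalent of "include=metadata" on the REST List Blobs call
    BlobListingDetails = BlobListingDetails.Metadata
};

DateTime windowStart = new DateTime(2011, 3, 24, 18, 0, 0, DateTimeKind.Utc);
DateTime windowEnd = windowStart.AddHours(1);

foreach (IListBlobItem item in
    blobClient.ListBlobsWithPrefix("$logs/blob/2011/03/24/1800/", options))
{
    CloudBlob logBlob = (CloudBlob)item;
    DateTime logStart = DateTime.Parse(logBlob.Metadata["StartTime"]).ToUniversalTime();
    DateTime logEnd = DateTime.Parse(logBlob.Metadata["EndTime"]).ToUniversalTime();

    // keep only blobs whose request time range overlaps the window of interest
    if (logStart < windowEnd && logEnd >= windowStart)
    {
        Console.WriteLine("{0} [{1}]", logBlob.Uri, logBlob.Metadata["LogType"]);
    }
}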

For example, assume we have a blob log containing write events that was generated at 05:10:00 UTC on 2011/03/04 and contains requests that started between 05:02:30.0000001Z and 05:08:05.1000000Z. The log name will be $logs/blob/2011/03/04/0500/000000.log, and the metadata will contain the following properties:

  • LogType=write
  • StartTime=2011-03-04T05:02:30.0000001Z
  • EndTime=2011-03-04T05:08:05.1000000Z
  • LogVersion=1.0

Note, duplicate log records may exist in logs generated for the same hour. They can be detected by checking for duplicate Request ID and Operation Number combinations.
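
A quick LINQ sketch of that de-duplication (assuming records have been parsed into a hypothetical LogRecord type with RequestId and OperationNumber properties, and that System.Linq is imported):

 // records is an IEnumerable<LogRecord>; keep the first record per (RequestId, OperationNumber).
var distinctRecords = records
    .GroupBy(r => new { r.RequestId, r.OperationNumber })
    .Select(g => g.First());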

Operations on $logs container and manipulating log data

As we mentioned above, once you enable logging, your log data is stored as block blobs in a container called $logs in the blob namespace of your account. You can access your $logs container using https://<accountname>.blob.core.windows.net/$logs. To list your logs you can use the list blobs API. The logs are stored organized by service type (blob, table and queue) and are sorted by generation date/time within each service type.  The log name under the $logs container will have the following format: <service name>/YYYY/MM/DD/hhmm/<Counter>.log

The following operations are the operations allowed on the $logs container:

  • List blobs in the $logs container. (Note that $logs will not be displayed in the results of listing all containers in the account namespace.)
  • Read committed blobs under $logs.
  • Delete specific logs. Note: logs can be deleted but the container itself cannot be deleted.
  • Get metadata/properties for the log.

An example request for each type of operation can be found below:

To make enumerating the logs more efficient, you can pass a prefix when using the list blobs API. For example, to filter blobs by the date/time at which the logs were generated, you can pass a date/time prefix (such as blob/2011/04/24/) to the list blobs API.

Listing all logs generated in March 2011 for the Blob service would be done as follows: https://myaccount.blob.core.windows.net/$logs?restype=container&comp=list&prefix=blob/2011/03

You can filter down to a specific hour using the prefix. For example, if you want to scan logs generated for March 24th 2011 at 6 PM (18:00), use
https://myaccount.blob.core.windows.net/$logs?restype=container&comp=list&prefix=blob/2011/03/24/1800/


What is the versioning story?

The following describes the versioning for logs:

  • The version is stored in the blob metadata and in each log entry as the first field.
  • All records within a single blob will have the same version.
  • When new fields are added, they may be added to the end; in that case they will not incur a version change. Therefore, applications responsible for processing the logs should be designed to interpret only the first set of columns they are expecting and ignore any extra columns in a log entry.
  • Examples of when a version change could occur:
    • The representation needs to change for a particular field (example – data type changes).
    • A field needs to be removed.

Capacity and scale for your analytics account is separate from your ‘regular’ account. There is a separate 20 TB allocated for analytics data (this includes both metrics and log data); this is not included as part of the 100 TB limit for an individual account. In addition, $logs is kept in a separate part of the namespace for the storage account, so it is throttled separately from the storage account, and requests issued by Windows Azure Storage to generate or delete these logs do not affect the per-partition or per-account scale targets described in the Storage Scalability Targets blog post.

How do I cleanup my logs?

To ease the management of your logs, we provide a retention policy that automatically cleans up ‘old’ logs without you being charged for the cleanup. It is recommended that you set a retention policy for logs so that your analytics data stays within the 20 TB limit allowed for analytics data (logs and metrics combined), as described above.

A maximum of 365 days is allowed for the retention policy. Once a retention policy is set, the system deletes logs as they age beyond the number of days set in the policy. This deletion is done lazily in the background. The retention policy can be turned off at any time, but if set, it is enforced even if logging is turned off. For example: if you set the retention policy for blob service logging to 10 days, then all blob service logs older than 10 days will be deleted. If you do not set a retention policy, you can manage your data by manually deleting the log blobs whenever you wish to do so.

What charges occur due to logging?

The billable operations listed below are charged at the same rates applicable to all Windows Azure Storage operations. For more information on how these transactions are billed, see Understanding Windows Azure Storage Billing - Bandwidth, Transactions, and Capacity.

The capacity used by $logs is billable and the following actions performed by Windows Azure are billable:

  • Requests to create blobs for logging

The following actions performed by a client are billable:

  • Read and delete requests to $logs

If you have configured a data retention policy, you are not charged when Windows Azure Storage deletes old logging data. However, if you delete $logs data, your account is charged for the delete operations.

Turning Logging On

A REST API call, as shown below, is used to turn on Logging. In this example, logging is turned on for deletes and writes, but not for reads. The logging retention policy is enabled and set to seven days, and the metrics retention policy to ten days - so the analytics service will take care of deleting data older than that for you at no additional cost. Note that you need to turn on logging separately for blobs, tables, and queues. The example below demonstrates how to turn logging on for blobs.

 Request:
PUT https://sally.blob.core.windows.net/?restype=service&comp=properties HTTP/1.1
x-ms-version: 2009-09-19
x-ms-date: Fri, 25 Mar 2011 23:13:08 GMT
Authorization: SharedKey sally:zvfPm5cm8CiVP2VA9oJGYpHAPKxQb1HD44IWmubot0A=
Host: sally.blob.core.windows.net

<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>

    <Logging>
        <Version>1.0</Version>
        <Delete>true</Delete>
        <Read>false</Read>
        <Write>true</Write>
        <RetentionPolicy>
            <Enabled>true</Enabled>
            <Days>7</Days>
        </RetentionPolicy>
    </Logging>

    <Metrics>
        <Version>1.0</Version>
        <Enabled>true</Enabled>
        <IncludeAPIs>true</IncludeAPIs>
        <RetentionPolicy>
            <Enabled>true</Enabled>
            <Days>10</Days>
        </RetentionPolicy>
    </Metrics>

</StorageServiceProperties>

Response:  
HTTP/1.1 202 
Content-Length: 0
Server: Windows-Azure-Metrics/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 4d02712a-a68a-4763-8f9b-c2526625be68
x-ms-version: 2009-09-19
Date: Fri, 25 Mar 2011 23:13:08 GMT

The above logging and metrics sections show how to configure what you want to track in your analytics logs and metrics data. The logging configuration values are described below:

  • Version - The version of Analytics Logging used to record the entry.
  • Delete (Boolean) – set to true if you want to track delete requests and false if you do not
  • Read (Boolean) – set to true if you want to track read requests and false if you do not
  • Write (Boolean) – set to true if you want to track write requests and false if you do not
  • Retention policy – this is where you set the retention policy to help you manage the size of your analytics data
    • Enabled (Boolean) – set to true if you want to enable a retention policy. We recommend that you do this.
    • Days (int) – the number of days you want to keep your analytics logging data. This can be a max of 365 days and a min of 1 day.

For more information, please see the MSDN documentation.

To turn on analytics programmatically, here are extensions to the StorageClient library’s CloudBlobClient, CloudTableClient and CloudQueueClient. The extension methods and utility methods that accomplish this are just sample code, from which error handling has been removed for brevity.

Let us start with sample code that uses these extensions:

 AnalyticsSettings settings = new AnalyticsSettings()
    {
        LogType = LoggingLevel.Delete | LoggingLevel.Read | LoggingLevel.Write,
        IsLogRetentionPolicyEnabled = true,
        LogRetentionInDays = 1,
        IsMetricsRetentionPolicyEnabled = true,
        MetricsRetentionInDays = 7,
        MetricsType = MetricsType.All
    };

// set the settings for each service
blobClient.SetServiceSettings(settings);
queueClient.SetServiceSettings(account.QueueEndpoint, settings);
tableClient.SetServiceSettings(settings);

// get the settings from each service
AnalyticsSettings blobSettings = blobClient.GetServiceSettings();
AnalyticsSettings tableSettings = tableClient.GetServiceSettings();
AnalyticsSettings queueSettings = queueClient.GetServiceSettings(account.QueueEndpoint);

We have added a new settings class called AnalyticsSettings to contain the settings that can be set/retrieved. Each setting listed above is represented by a property on this class.

 using System;

namespace AnalyticsSamples
{

    [Flags]
    public enum LoggingLevel
    {
        None = 0,
        Delete = 2,
        Write = 4,
        Read = 8,
    }

    [Flags]
    public enum MetricsType
    {
        None = 0x0,
        ServiceSummary = 0x1,
        ApiSummary = 0x2,
        All = ServiceSummary | ApiSummary,
    }

    /// <summary>
    /// The analytics settings that can be set/retrieved
    /// </summary>
    public class AnalyticsSettings
    {
        public static string Version = "1.0";

        public AnalyticsSettings()
        {
            this.LogType = LoggingLevel.None;
            this.LogVersion = AnalyticsSettings.Version;
            this.IsLogRetentionPolicyEnabled = false;
            this.LogRetentionInDays = 0;

            this.MetricsType = MetricsType.None;
            this.MetricsVersion = AnalyticsSettings.Version;
            this.IsMetricsRetentionPolicyEnabled = false;
            this.MetricsRetentionInDays = 0;
        }

        /// <summary>
        /// The type of logs subscribed for
        /// </summary>
        public LoggingLevel LogType { get; set; }

        /// <summary>
        /// The version of the logs
        /// </summary>
        public string LogVersion { get; set; }

        /// <summary>
        /// Flag indicating if retention policy is set for logs in $logs
        /// </summary>
        public bool IsLogRetentionPolicyEnabled { get; set; }

        /// <summary>
        /// The number of days to retain logs for under $logs container
        /// </summary>
        public int LogRetentionInDays { get; set; }

        /// <summary>
        /// The metrics version
        /// </summary>
        public string MetricsVersion { get; set; }

        /// <summary>
        /// A flag indicating if retention policy is enabled for metrics
        /// </summary>
        public bool IsMetricsRetentionPolicyEnabled { get; set; }

        /// <summary>
        /// The number of days to retain metrics data
        /// </summary>
        public int MetricsRetentionInDays { get; set; }

        private MetricsType metricsType = MetricsType.None;

        /// <summary>
        /// The type of metrics subscribed for
        /// </summary>
        public MetricsType MetricsType
        {
            get
            {
                return metricsType;
            }

            set
            {
                if (value == MetricsType.ApiSummary)
                {
                    throw new ArgumentException("Including just ApiSummary is invalid.");
                }

                this.metricsType = value;
            }
        }
    }
}

Now that we have covered the settings class, let us go over the extension class that provides the ability to set/get settings. This class provides extension methods SetServiceSettings and GetServiceSettings on each of the client objects. Each extension method takes the settings and calls a single shared method that dispatches the settings by serializing/deserializing the settings class.

NOTE: Because CloudQueueClient does not expose the BaseUri property, the extension takes the base Uri explicitly.

 using System;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;
using System.Globalization;
using System.Net;
using Microsoft.WindowsAzure;
using System.IO;
using System.Xml;

namespace AnalyticsSamples
{
    public static class AnalyticsSettingsExtension
    {
        static string RequestIdHeaderName = "x-ms-request-id";
        static string VersionHeaderName = "x-ms-version";
        static string Sep2009Version = "2009-09-19";
        static TimeSpan DefaultTimeout = TimeSpan.FromSeconds(30);

        #region Analytics
        /// <summary>
        /// Set blob analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <param name="settings"></param>
        public static void SetServiceSettings(this CloudBlobClient client, AnalyticsSettings settings)
        {
            SetSettings(client.BaseUri, client.Credentials, settings, false /* useSharedKeyLite */);
        }

        /// <summary>
        /// Set queue analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <param name="baseUri"></param>
        /// <param name="settings"></param>
        public static void SetServiceSettings(this CloudQueueClient client, Uri baseUri, AnalyticsSettings settings)
        {
            SetSettings(baseUri, client.Credentials, settings, false /* useSharedKeyLite */);
        }

        /// <summary>
        /// Set table analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <param name="settings"></param>
        public static void SetServiceSettings(this CloudTableClient client, AnalyticsSettings settings)
        {
            SetSettings(client.BaseUri, client.Credentials, settings, true /* useSharedKeyLite */);
        }

        /// <summary>
        /// Set analytics settings
        /// </summary>
        /// <param name="baseUri"></param>
        /// <param name="credentials"></param>
        /// <param name="settings"></param>
        /// <param name="useSharedKeyLite"></param>
        public static void SetSettings(Uri baseUri, StorageCredentials credentials, AnalyticsSettings settings, bool useSharedKeyLite)
        {
            UriBuilder builder = new UriBuilder(baseUri);
            builder.Query = string.Format(
                CultureInfo.InvariantCulture,
                "comp=properties&restype=service&timeout={0}", 
                DefaultTimeout.TotalSeconds);

            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(builder.Uri);
            request.Headers.Add(VersionHeaderName, Sep2009Version);
            request.Method = "PUT";

            using (MemoryStream buffer = new MemoryStream())
            {
                XmlTextWriter writer = new XmlTextWriter(buffer, Encoding.UTF8);
                SettingsSerializerHelper.SerializeAnalyticsSettings(writer, settings);
                writer.Flush();
                buffer.Seek(0, SeekOrigin.Begin);
                request.ContentLength = buffer.Length;

                if (useSharedKeyLite)
                {
                    credentials.SignRequestLite(request);
                }
                else
                {
                    credentials.SignRequest(request);
                }

                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(buffer.GetBuffer(), 0, (int)buffer.Length);
                }

                try
                {
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("Response Request Id = {0} Status={1}", response.Headers[RequestIdHeaderName], response.StatusCode);
                        if (HttpStatusCode.Accepted != response.StatusCode)
                        {
                            throw new Exception("Request failed with incorrect response status.");
                        }
                    }
                }
                catch (WebException e)
                {
                    Console.WriteLine(
                        "Response Request Id={0} Status={1}",
                        e.Response != null ? e.Response.Headers[RequestIdHeaderName] : "Response is null",
                        e.Status);
                    throw;
                }

            }
        }

        /// <summary>
        /// Get blob analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <returns></returns>
        public static AnalyticsSettings GetServiceSettings(this CloudBlobClient client)
        {
            return GetSettings(client.BaseUri, client.Credentials, false /* useSharedKeyLite */);
        }

        /// <summary>
        /// Get queue analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <param name="baseUri"></param>
        /// <returns></returns>
        public static AnalyticsSettings GetServiceSettings(this CloudQueueClient client, Uri baseUri)
        {
            return GetSettings(baseUri, client.Credentials, false /* useSharedKeyLite */);
        }

        /// <summary>
        /// Get table analytics settings
        /// </summary>
        /// <param name="client"></param>
        /// <returns></returns>
        public static AnalyticsSettings GetServiceSettings(this CloudTableClient client)
        {
            return GetSettings(client.BaseUri, client.Credentials, true /* useSharedKeyLite */);
        }

        /// <summary>
        /// Get analytics settings
        /// </summary>
        /// <param name="baseUri"></param>
        /// <param name="credentials"></param>
        /// <param name="useSharedKeyLite"></param>
        /// <returns></returns>
        public static AnalyticsSettings GetSettings(Uri baseUri, StorageCredentials credentials, bool useSharedKeyLite)
        {
            UriBuilder builder = new UriBuilder(baseUri);
            builder.Query = string.Format(
                CultureInfo.InvariantCulture,
                "comp=properties&restype=service&timeout={0}",
                DefaultTimeout.TotalSeconds);

            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(builder.Uri);
            request.Headers.Add(VersionHeaderName, Sep2009Version);
            request.Method = "GET";

            if (useSharedKeyLite)
            {
                credentials.SignRequestLite(request);
            }
            else
            {
                credentials.SignRequest(request);
            }

            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Response Request Id={0} Status={1}", response.Headers[RequestIdHeaderName], response.StatusCode);

                    if (HttpStatusCode.OK != response.StatusCode)
                    {
                        throw new Exception("expected HttpStatusCode.OK");
                    }

                    using (Stream stream = response.GetResponseStream())
                    {
                        using (StreamReader streamReader = new StreamReader(stream))
                        {
                            string responseString = streamReader.ReadToEnd();
                            Console.WriteLine(responseString);

                            XmlReader reader = XmlReader.Create(new MemoryStream(Encoding.UTF8.GetBytes(responseString)));
                            return SettingsSerializerHelper.DeserializeAnalyticsSettings(reader);
                        }
                    }
                }
            }
            catch (WebException e)
            {
                Console.WriteLine(
                    "Response Request Id={0} Status={1}",
                    e.Response != null ? e.Response.Headers[RequestIdHeaderName] : "Response is null",
                    e.Status);
                throw;
            }
        }
        #endregion
    }
}

Now to the crux of the code which handles serialization/deserialization. This code provides a SerializeAnalyticsSettings method that serializes AnalyticsSettings class into the format expected by the service and provides DeserializeAnalyticsSettings to reconstruct the AnalyticsSettings class from the response for GET REST method.

 using System.Xml;

namespace AnalyticsSamples
{
    public static class SettingsSerializerHelper
    {
        private const string RootPropertiesElementName = "StorageServiceProperties";
        private const string VersionElementName = "Version";
        private const string RetentionPolicyElementName = "RetentionPolicy";
        private const string RetentionPolicyEnabledElementName = "Enabled";
        private const string RetentionPolicyDaysElementName = "Days";

        private const string LoggingElementName = "Logging";
        private const string ApiTypeDeleteElementName = "Delete";
        private const string ApiTypeReadElementName = "Read";
        private const string ApiTypeWriteElementName = "Write";

        private const string MetricsElementName = "Metrics";
        private const string IncludeApiSummaryElementName = "IncludeAPIs";
        private const string MetricsEnabledElementName = "Enabled";

        private const int MaximumRetentionDays = 365;

        /// <summary>
        /// Reads the analytics settings from the XML reader
        /// </summary>
        /// <param name="xmlReader"></param>
        /// <returns></returns>
        public static AnalyticsSettings DeserializeAnalyticsSettings(XmlReader xmlReader)
        {
            // Read the root and check if it is empty or invalid
            xmlReader.Read();
            xmlReader.ReadStartElement(SettingsSerializerHelper.RootPropertiesElementName);

            AnalyticsSettings settings = new AnalyticsSettings();

            while (true)
            {
                if (xmlReader.IsStartElement(SettingsSerializerHelper.LoggingElementName))
                {
                    DeserializeLoggingElement(xmlReader, settings);
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.MetricsElementName))
                {
                    DeserializeMetricsElement(xmlReader, settings);
                }
                else
                {
                    break;
                }
            }

            xmlReader.ReadEndElement();

            return settings;
        }


        /// <summary>
        /// Writes the settings provided to the XML writer
        /// </summary>
        /// <param name="xmlWriter"></param>
        /// <param name="settings"></param>
        public static void SerializeAnalyticsSettings(XmlWriter xmlWriter, AnalyticsSettings settings)
        {
            xmlWriter.WriteStartDocument();
            xmlWriter.WriteStartElement(SettingsSerializerHelper.RootPropertiesElementName);

            //LOGGING STARTS HERE
            xmlWriter.WriteStartElement(SettingsSerializerHelper.LoggingElementName);

            xmlWriter.WriteStartElement(SettingsSerializerHelper.VersionElementName);
            xmlWriter.WriteValue(settings.LogVersion);
            xmlWriter.WriteEndElement();

            bool isReadEnabled = (settings.LogType & LoggingLevel.Read) != LoggingLevel.None;
            xmlWriter.WriteStartElement(SettingsSerializerHelper.ApiTypeReadElementName);
            xmlWriter.WriteValue(isReadEnabled);
            xmlWriter.WriteEndElement();

            bool isWriteEnabled = (settings.LogType & LoggingLevel.Write) != LoggingLevel.None;
            xmlWriter.WriteStartElement(SettingsSerializerHelper.ApiTypeWriteElementName);
            xmlWriter.WriteValue(isWriteEnabled);
            xmlWriter.WriteEndElement();

            bool isDeleteEnabled = (settings.LogType & LoggingLevel.Delete) != LoggingLevel.None;
            xmlWriter.WriteStartElement(SettingsSerializerHelper.ApiTypeDeleteElementName);
            xmlWriter.WriteValue(isDeleteEnabled);
            xmlWriter.WriteEndElement();

            SerializeRetentionPolicy(xmlWriter, settings.IsLogRetentionPolicyEnabled, settings.LogRetentionInDays);
            xmlWriter.WriteEndElement(); // logging element

            //METRICS STARTS HERE
            xmlWriter.WriteStartElement(SettingsSerializerHelper.MetricsElementName);

            xmlWriter.WriteStartElement(SettingsSerializerHelper.VersionElementName);
            xmlWriter.WriteValue(settings.MetricsVersion);
            xmlWriter.WriteEndElement();

            bool isServiceSummaryEnabled = (settings.MetricsType & MetricsType.ServiceSummary) != MetricsType.None;
            xmlWriter.WriteStartElement(SettingsSerializerHelper.MetricsEnabledElementName);
            xmlWriter.WriteValue(isServiceSummaryEnabled);
            xmlWriter.WriteEndElement();

            if (isServiceSummaryEnabled)
            {
                bool isApiSummaryEnabled = (settings.MetricsType & MetricsType.ApiSummary) != MetricsType.None;
                xmlWriter.WriteStartElement(SettingsSerializerHelper.IncludeApiSummaryElementName);
                xmlWriter.WriteValue(isApiSummaryEnabled);
                xmlWriter.WriteEndElement();
            }

            SerializeRetentionPolicy(
                xmlWriter,
                settings.IsMetricsRetentionPolicyEnabled,
                settings.MetricsRetentionInDays);
            xmlWriter.WriteEndElement(); // metrics 

            xmlWriter.WriteEndElement(); // root element
            xmlWriter.WriteEndDocument();
        }

        private static void SerializeRetentionPolicy(XmlWriter xmlWriter, bool isRetentionEnabled, int days)
        {
            xmlWriter.WriteStartElement(SettingsSerializerHelper.RetentionPolicyElementName);

            xmlWriter.WriteStartElement(SettingsSerializerHelper.RetentionPolicyEnabledElementName);
            xmlWriter.WriteValue(isRetentionEnabled);
            xmlWriter.WriteEndElement();

            if (isRetentionEnabled)
            {
                xmlWriter.WriteStartElement(SettingsSerializerHelper.RetentionPolicyDaysElementName);
                xmlWriter.WriteValue(days);
                xmlWriter.WriteEndElement();
            }

            xmlWriter.WriteEndElement(); // Retention policy for logs
        }

        /// <summary>
        /// Reads the logging element and fills in the values in Analyticssettings instance
        /// </summary>
        /// <param name="xmlReader"></param>
        /// <param name="settings"></param>
        private static void DeserializeLoggingElement(
            XmlReader xmlReader,
            AnalyticsSettings settings)
        {
            // Read logging element
            xmlReader.ReadStartElement(SettingsSerializerHelper.LoggingElementName);

            while (true)
            {
                if (xmlReader.IsStartElement(SettingsSerializerHelper.VersionElementName))
                {
                    settings.LogVersion = xmlReader.ReadElementString(SettingsSerializerHelper.VersionElementName);
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.ApiTypeReadElementName))
                {
                    if (DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.ApiTypeReadElementName))
                    {
                        settings.LogType = settings.LogType | LoggingLevel.Read;
                    }
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.ApiTypeWriteElementName))
                {
                    if (DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.ApiTypeWriteElementName))
                    {
                        settings.LogType = settings.LogType | LoggingLevel.Write;
                    }
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.ApiTypeDeleteElementName))
                {
                    if (DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.ApiTypeDeleteElementName))
                    {
                        settings.LogType = settings.LogType | LoggingLevel.Delete;
                    }
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.RetentionPolicyElementName))
                {
                    // read retention policy for logging
                    bool isRetentionEnabled = false;
                    int retentionDays = 0;
                    DeserializeRetentionPolicy(xmlReader, ref isRetentionEnabled, ref retentionDays);
                    settings.IsLogRetentionPolicyEnabled = isRetentionEnabled;
                    settings.LogRetentionInDays = retentionDays;
                }
                else
                {
                    break;
                }
            }

            xmlReader.ReadEndElement();// end Logging element
        }

        /// <summary>
        /// Reads the metrics element and fills in the values in Analyticssettings instance
        /// </summary>
        /// <param name="xmlReader"></param>
        /// <param name="settings"></param>
        private static void DeserializeMetricsElement(
            XmlReader xmlReader,
            AnalyticsSettings settings)
        {
            bool includeAPIs = false;

            // read the next element - it should be metrics. 
            xmlReader.ReadStartElement(SettingsSerializerHelper.MetricsElementName);

            while (true)
            {
                if (xmlReader.IsStartElement(SettingsSerializerHelper.VersionElementName))
                {
                    settings.MetricsVersion = xmlReader.ReadElementString(SettingsSerializerHelper.VersionElementName);
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.MetricsEnabledElementName))
                {
                    if (DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.MetricsEnabledElementName))
                    {
                        // metrics is enabled; IncludeAPIs is honored later only when this is set
                        settings.MetricsType = settings.MetricsType | MetricsType.ServiceSummary;
                    }
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.IncludeApiSummaryElementName))
                {
                    if (DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.IncludeApiSummaryElementName))
                    {
                        includeAPIs = true;
                    }
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.RetentionPolicyElementName))
                {
                    // read retention policy for metrics
                    bool isRetentionEnabled = false;
                    int retentionDays = 0;
                    DeserializeRetentionPolicy(xmlReader, ref isRetentionEnabled, ref retentionDays);
                    settings.IsMetricsRetentionPolicyEnabled = isRetentionEnabled;
                    settings.MetricsRetentionInDays = retentionDays;
                }
                else
                {
                    break;
                }
            }

            if ((settings.MetricsType & MetricsType.ServiceSummary) != MetricsType.None)
            {
                // IncludeAPIs is only honored when metrics is enabled
                if (includeAPIs)
                {
                    settings.MetricsType = settings.MetricsType | MetricsType.ApiSummary;
                }
            }

            xmlReader.ReadEndElement();// end metrics element
        }


        /// <summary>
        /// Reads the retention policy in logging and metrics elements 
        /// and returns the values via the ref parameters.
        /// </summary>
        /// <param name="xmlReader">The XML reader positioned at the RetentionPolicy element</param>
        /// <param name="isRetentionEnabled">Set to true if the retention policy is enabled</param>
        /// <param name="retentionDays">Set to the number of days data is retained</param>
        private static void DeserializeRetentionPolicy(
            XmlReader xmlReader,
            ref bool isRetentionEnabled,
            ref int retentionDays)
        {
            xmlReader.ReadStartElement(SettingsSerializerHelper.RetentionPolicyElementName);

            while (true)
            {
                if (xmlReader.IsStartElement(SettingsSerializerHelper.RetentionPolicyEnabledElementName))
                {
                    isRetentionEnabled = DeserializeBooleanElementValue(
                        xmlReader,
                        SettingsSerializerHelper.RetentionPolicyEnabledElementName);
                }
                else if (xmlReader.IsStartElement(SettingsSerializerHelper.RetentionPolicyDaysElementName))
                {
                    string intValue = xmlReader.ReadElementString(
                        SettingsSerializerHelper.RetentionPolicyDaysElementName);
                    retentionDays = int.Parse(intValue);
                }
                else
                {
                    break;
                }
            }

            xmlReader.ReadEndElement(); // end reading retention policy
        }

        /// <summary>
        /// Reads a boolean value for an xml element
        /// </summary>
        /// <param name="xmlReader">The XML reader positioned at the element</param>
        /// <param name="elementToRead">The name of the element to read</param>
        /// <returns>The parsed boolean value</returns>
        private static bool DeserializeBooleanElementValue(
            XmlReader xmlReader,
            string elementToRead)
        {
            string boolValue = xmlReader.ReadElementString(elementToRead);
            return bool.Parse(boolValue);
        }
    }
}
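
For reference, the settings payload that the serializer above writes and parses has the following general shape. This is a sketch based on the element names used in the code; the exact names and casing are defined by the SettingsSerializerHelper constants, which are not shown in this excerpt:

<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>
  <Logging>
    <Version>1.0</Version>
    <Read>true</Read>
    <Write>true</Write>
    <Delete>true</Delete>
    <RetentionPolicy>
      <Enabled>true</Enabled>
      <Days>7</Days>
    </RetentionPolicy>
  </Logging>
  <Metrics>
    <Version>1.0</Version>
    <Enabled>true</Enabled>
    <IncludeAPIs>true</IncludeAPIs>
    <RetentionPolicy>
      <Enabled>true</Enabled>
      <Days>7</Days>
    </RetentionPolicy>
  </Metrics>
</StorageServiceProperties>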

Downloading your log data

Since a normal container listing does not return the $logs container, existing tools will not display these logs. In the absence of such tools, we wanted to provide a quick reference application, with source code, to make this data accessible.

The following application takes the service to download the logs for, the start time and end time for log entries, and a file to export to. It then exports all log entries in the selected logs to that file as ';'-separated records, preceded by a header row.

For example, the following command selects all logs that have log entries in the provided time range and downloads those entries into a file called mytablelogs.txt:

DumpLogs table .\mytablelogs.txt "2011-07-26T22:00Z" "2011-07-26T23:30Z"
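
Note: the Main method below reads the storage connection string from the application's configuration file under the key "ConnectionString". A minimal App.config sketch (the key name matches the code; the account name and key are placeholders) would be:

<configuration>
  <appSettings>
    <add key="ConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey" />
  </appSettings>
</configuration>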

const string ConnectionStringKey = "ConnectionString";
const string LogStartTime = "StartTime";
const string LogEndTime = "EndTime";

static void Main(string[] args)
{
    if (args.Length < 3 || args.Length > 4)
    {
        Console.WriteLine("Usage: DumpLogs <service to search - blob|table|queue> <output file name> <Start time in UTC> <Optional End time in UTC>.");
        Console.WriteLine("Example: DumpLogs blob test.txt \"2011-06-26T22:30Z\" \"2011-06-26T22:50Z\"");
        return;
    }

    string connectionString = ConfigurationManager.AppSettings[ConnectionStringKey];

    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient blobClient = account.CreateCloudBlobClient();

    DateTime startTimeOfSearch = DateTime.Parse(args[2]);
    DateTime endTimeOfSearch = DateTime.UtcNow;

    if (args.Length == 4)
    {
        endTimeOfSearch = DateTime.Parse(args[3]);
    }

    List<CloudBlob> blobList = ListLogFiles(blobClient, args[0], startTimeOfSearch.ToUniversalTime(), endTimeOfSearch.ToUniversalTime());
    DumpLogs(blobList, args[1]);
}



/// <summary>
/// Given service name, start time for search and end time for search, creates a prefix that can be used
/// to efficiently get a list of logs that may match the search criteria
/// </summary>
/// <param name="service"></param>
/// <param name="startTime"></param>
/// <param name="endTime"></param>
/// <returns></returns>
static string GetSearchPrefix(string service, DateTime startTime, DateTime endTime)
{
    StringBuilder prefix = new StringBuilder("$logs/");

    prefix.AppendFormat("{0}/", service);

    // if year is same then add the year
    if (startTime.Year == endTime.Year)
    {
        prefix.AppendFormat("{0}/", startTime.Year);
    }
    else
    {
        return prefix.ToString();
    }

    // if month is same then add the month
    if (startTime.Month == endTime.Month)
    {
        prefix.AppendFormat("{0:D2}/", startTime.Month);
    }
    else
    {
        return prefix.ToString();
    }

    // if day is same then add the day
    if (startTime.Day == endTime.Day)
    {
        prefix.AppendFormat("{0:D2}/", startTime.Day);
    }
    else
    {
        return prefix.ToString();
    }

    // if hour is same then add the hour
    if (startTime.Hour == endTime.Hour)
    {
        prefix.AppendFormat("log-{0:D2}00", startTime.Hour);
    }

    return prefix.ToString();
}

/// <summary>
/// Given a service, start time, end time, provide list of log files
/// </summary>
/// <param name="blobClient"></param>
/// <param name="serviceName">The name of the service interested in</param>
/// <param name="startTimeForSearch">Start time for the search</param>
/// <param name="endTimeForSearch">End time for the search</param>
/// <returns></returns>
static List<CloudBlob> ListLogFiles(CloudBlobClient blobClient, string serviceName, DateTime startTimeForSearch, DateTime endTimeForSearch)
{
    List<CloudBlob> selectedLogs = new List<CloudBlob>();

    // form the prefix to search with, based on the common leading parts of the start and end times
    string prefix = GetSearchPrefix(serviceName, startTimeForSearch, endTimeForSearch);

    Console.WriteLine("Prefix used for log listing = {0}", prefix);

    // List the blobs using the prefix
    IEnumerable<IListBlobItem> blobs = blobClient.ListBlobsWithPrefix(
        prefix,
        new BlobRequestOptions()
        {
            UseFlatBlobListing = true,
            BlobListingDetails = BlobListingDetails.Metadata
        });


    // iterate through each blob and read the start and end times from its metadata
    foreach (IListBlobItem item in blobs)
    {
        CloudBlob log = item as CloudBlob;
        if (log != null)
        {
            // exclude the file if it has no log entries in the time range of interest
            DateTime startTime = DateTime.Parse(log.Metadata[LogStartTime]).ToUniversalTime();
            DateTime endTime = DateTime.Parse(log.Metadata[LogEndTime]).ToUniversalTime();

            bool exclude = (startTime > endTimeForSearch || endTime < startTimeForSearch);

            Console.WriteLine("{0} Log {1} Start={2:U} End={3:U}.",
                exclude ? "Ignoring" : "Selected",
                log.Uri.AbsoluteUri,
                startTime,
                endTime);

            if (!exclude)
            {
                selectedLogs.Add(log);
            }
        }
    }

    return selectedLogs;
}


/// <summary>
/// Dump all log entries to file irrespective of the time.
/// </summary>
/// <param name="blobList"></param>
/// <param name="fileName"></param>
static void DumpLogs(List<CloudBlob> blobList, string fileName)
{

    if (blobList.Count > 0)
    {
        Console.WriteLine("Dumping log entries from {0} files to '{1}'", blobList.Count, fileName);
    }
    else
    {
        Console.WriteLine("No logs files have selected.");
    }

    using (StreamWriter writer = new StreamWriter(fileName))
    {
        writer.Write(
            "Log version; Transaction Start Time; REST Operation Type; Transaction Status; HTTP Status; E2E Latency; Server Latency; Authentication Type; Accessing Account; Owner Account; Service Type;");
        writer.Write(
            "Request url; Object Key; RequestId; Operation #; User IP; Request Version; Request Header Size; Request Packet Size; Response Header Size;");
        writer.WriteLine(
            "Response Packet Size; Request Content Length; Request MD5; Server MD5; Etag returned; LMT; Condition Used; User Agent; Referrer; Client Request Id");
        foreach (CloudBlob blob in blobList)
        {
            using (Stream stream = blob.OpenRead())
            {
                using (StreamReader reader = new StreamReader(stream))
                {
                    string logEntry;
                    while ((logEntry = reader.ReadLine()) != null)
                    {
                        writer.WriteLine(logEntry);
                    }
                }
            }
        }
    }
}

 

Case Study

To put the power of log analytics into perspective, we will end this post with a sample application. Below, we cover a very simple console application that provides grep-like search functionality over logs. One can imagine extending this to download the logs, analyze them, and store the results in structured storage such as Windows Azure Tables or SQL Azure for additional querying.

Description: A console program that takes as input the service name to search, the log (operation) types to search for, the start time and end time for the search, and the keyword to look for. The output is the set of log entries that match the search criteria and contain the keyword.

We will start with the method "ListLogFiles". It uses the input arguments to create a prefix for the blob listing operation, calling the utility method GetSearchPrefix to build the prefix from the service name, start time, and end time. The start and end times are compared, and only the leading parts that match are used in the prefix. For example, if the start time is "2011-06-27T02:50Z" and the end time is "2011-06-27T03:08Z", then the prefix for the blob service will be "$logs/blob/2011/06/27/". If the hour had also matched, the prefix would be "$logs/blob/2011/06/27/0200". The listing result contains the metadata for each blob. The start time, end time, and logging level in that metadata are then checked to see whether the log may contain entries that match the search criteria. If it does, we add the log to the list of logs we are interested in downloading; otherwise, we skip the log file.

NOTE: Exception handling and parameter validation is omitted for brevity.

/// <summary>
/// Given a service, start time, end time, and operation types (i.e. READ/WRITE/DELETE) to search for, this method
/// iterates through blob logs and selects the ones that match the service and time range.
/// </summary>
/// <param name="blobClient">The blob client used to list the $logs container</param>
/// <param name="serviceName">The name of the service interested in</param>
/// <param name="startTimeForSearch">Start time for the search</param>
/// <param name="endTimeForSearch">End time for the search</param>
/// <param name="operationTypes">A ',' separated list of operation types, used as the logging level to match</param>
/// <returns>The list of logs that may contain matching entries</returns>
static List<CloudBlob> ListLogFiles(
    CloudBlobClient blobClient,
    string serviceName,
    DateTime startTimeForSearch,
    DateTime endTimeForSearch,
    string operationTypes)
{
    List<CloudBlob> selectedLogs = new List<CloudBlob>();
            
    // convert a ',' separated list of operation types to a flags enum
    LoggingLevel loggingLevelsToFind = GetLoggingLevel(operationTypes);

    // form the prefix to search with, based on the common leading parts of the start and end times
    string prefix = GetSearchPrefix(serviceName, startTimeForSearch, endTimeForSearch);

    Console.WriteLine("Prefix used = {0}", prefix);

    // List the blobs using the prefix
    IEnumerable<IListBlobItem> blobs = blobClient.ListBlobsWithPrefix(
        prefix, 
        new BlobRequestOptions()
        {
            UseFlatBlobListing = true,
            BlobListingDetails = BlobListingDetails.Metadata
        });
            
                           
    // iterate through each blob and read the start and end times from its metadata
    foreach (IListBlobItem item in blobs)
    {                            
        CloudBlob log = item as CloudBlob;
        if (log != null)
        {
            DateTime startTime = DateTime.Parse(log.Metadata[LogStartTime]).ToUniversalTime();
            DateTime endTime = DateTime.Parse(log.Metadata[LogEndTime]).ToUniversalTime();
            string logTypes = log.Metadata[LogEntryTypes].ToUpper();

            LoggingLevel levelsInLog = GetLoggingLevel(logTypes);

            // we will exclude the file if the time range does not match or it does not contain the log type 
            // we are searching for
            bool exclude = (startTime > endTimeForSearch 
                || endTime < startTimeForSearch 
                || (loggingLevelsToFind & levelsInLog) == LoggingLevel.None);

            Console.WriteLine("{0} Log {1} Start={2:U} End={3:U} Types={4}.",
                exclude ? "Ignoring" : "Selected",
                log.Uri.AbsoluteUri,
                startTime,
                endTime,
                logTypes);
                    
            if (!exclude)
            {
                selectedLogs.Add(log);
            }
        }
    }
       
    return selectedLogs;
}

/// <summary>
/// Given service name, start time for search and end time for search, creates a prefix that can be used
/// to efficiently get a list of logs that may match the search criteria
/// </summary>
/// <param name="service"></param>
/// <param name="startTime"></param>
/// <param name="endTime"></param>
/// <returns></returns>
static string GetSearchPrefix(string service, DateTime startTime, DateTime endTime)
{
    StringBuilder prefix = new StringBuilder("$logs/");
            
    prefix.AppendFormat("{0}/", service);
            
    // if year is same then add the year
    if (startTime.Year == endTime.Year)
    {
        prefix.AppendFormat("{0}/", startTime.Year);
    }
    else
    {
        return prefix.ToString();
    }

    // if month is same then add the month
    if (startTime.Month == endTime.Month)
    {
        prefix.AppendFormat("{0:D2}/", startTime.Month);
    }
    else
    {
        return prefix.ToString();
    }

    // if day is same then add the day
    if (startTime.Day == endTime.Day)
    {
        prefix.AppendFormat("{0:D2}/", startTime.Day);
    }
    else
    {
        return prefix.ToString();
    }

    // if hour is same then add the hour
    if (startTime.Hour == endTime.Hour)
    {
        prefix.AppendFormat("{0:D2}00", startTime.Hour);
    }

    return prefix.ToString();
}

 

Once we have the list of logs, we just need to iterate over their entries and output the ones that contain the keyword.

/// <summary>
/// Displays all log entries containing a keyword
/// </summary>
/// <param name="blobList">The list of log blobs to search</param>
/// <param name="keyword">The keyword to search for</param>
static void DisplayLogs(List<CloudBlob> blobList, string keyword)
{
    Console.WriteLine("Log entries matching '{0}' are: ", keyword);

    foreach (CloudBlob blob in blobList)
    {
        using (Stream stream = blob.OpenRead())
        {
            using (StreamReader reader = new StreamReader(stream))
            {
                string logEntry;
                while((logEntry = reader.ReadLine()) != null)
                {
                    if (logEntry.Contains(keyword))
                    {
                        Console.WriteLine(logEntry);
                    }
                }
            }
        }
    }

    Console.WriteLine("End searching for '{0}'", keyword);
}


/// <summary>
/// Given a ',' separated list of log types (aka operation types), returns the logging level
/// </summary>
/// <param name="loggingLevels">A ',' separated list of operation types (READ, WRITE, DELETE)</param>
/// <returns>The corresponding flags value</returns>
static LoggingLevel GetLoggingLevel(string loggingLevels)
{
    string[] loggingLevelList = loggingLevels.ToUpper().Split(',');
    LoggingLevel level = LoggingLevel.None;
    foreach (string logLevel in loggingLevelList)
    {
        if (string.Equals("READ", logLevel))
        {
            level |= LoggingLevel.Read;
        }
        else if (string.Equals("WRITE", logLevel))
        {
            level |= LoggingLevel.Write;
        }
        else if (string.Equals("DELETE", logLevel))
        {
            level |= LoggingLevel.Delete;
        }
    }

    return level;
}

[Flags]
public enum LoggingLevel
{
    None = 0,
    Delete = 2,
    Write = 4,
    Read = 8,
}

The main method then just invokes ListLogFiles and DisplayLogs.

static void Main(string[] args)
{
    if (args.Length < 5)
    {
        Console.WriteLine("Usage: LogFind <service to search - blob|table|queue> <Comma separated list of operations> <Start time in UTC for search> <End time in UTC for search> <search string>.");
        Console.WriteLine("Example: LogFind blob \"WRITE,DELETE\" \"2011-06-26T22:30Z\" \"2011-06-26T22:50Z\" \"DeleteContainer\"");
        return;
    }

    CloudStorageAccount account = CloudStorageAccount.Parse(ConnectionString);
    CloudBlobClient blobClient = account.CreateCloudBlobClient();
           
    DateTime startTimeOfSearch = DateTime.Parse(args[2]).ToUniversalTime();
    DateTime endTimeOfSearch = DateTime.Parse(args[3]).ToUniversalTime();

    List<CloudBlob> blobList = ListLogFiles(blobClient, args[0], startTimeOfSearch, endTimeOfSearch, args[1]);
    DisplayLogs(blobList, args[4]);
}

Given the above methods, to search for DeleteContainer entries between 02:50 and 03:05, the following can be executed. The console output is self-describing: it lists all the logs found under the computed prefix, selects only the ones whose log types include DELETE and whose time range overlaps the search window, downloads the selected logs, and outputs any lines that contain the search keyword:

LogFind blob "DELETE" "2011-06-27T02:50Z" "2011-06-27T03:05Z" "DeleteContainer"

This could give the following example results in the command line window:

Prefix used = $logs/blob/2011/06/27/
Selected Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0200-000000 Start=Monday, June 27, 2011 2:58:06 AM End=Monday, June 27, 2011 2:59:45 AM Types=READ,WRITE,DELETE.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0300-000000 Start=Monday, June 27, 2011 3:00:25 AM End=Monday, June 27, 2011 3:53:42 AM Types=READ,WRITE.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0500-000000 Start=Monday, June 27, 2011 5:03:47 AM End=Monday, June 27, 2011 5:03:47 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0600-000000 Start=Monday, June 27, 2011 6:05:48 AM End=Monday, June 27, 2011 6:13:31 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0600-000001 Start=Monday, June 27, 2011 6:11:55 AM End=Monday, June 27, 2011 6:14:23 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0600-000002 Start=Monday, June 27, 2011 6:40:55 AM End=Monday, June 27, 2011 6:41:56 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0600-000003 Start=Monday, June 27, 2011 6:52:23 AM End=Monday, June 27, 2011 6:58:33 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0700-000000 Start=Monday, June 27, 2011 7:00:44 AM End=Monday, June 27, 2011 7:05:31 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0700-000001 Start=Monday, June 27, 2011 7:06:52 AM End=Monday, June 27, 2011 7:12:56 AM Types=READ.
Ignoring Log https://gameusnorth.blob.core.windows.net/$logs/blob/2011/06/27/0700-000002 Start=Monday, June 27, 2011 7:21:13 AM End=Monday, June 27, 2011 7:26:21 AM Types=READ,WRITE.

Log entries matching 'DeleteContainer' are:

1.0;2011-06-27T03:00:40.4636789Z;DeleteContainer;Success;202;18;18;authenticated;gameusnorth;gameusnorth;blob;"https://gameusnorth.blob.core.windows.net/photos?restype=container&amp;timeout=30000";"/gameusnorth/photos";e09a61de-e47b-40aa-86e0-05fe620f818f;0;268.20.203.21;2009-09-19;364;0;125;0;0;;;;;;;;"ClientID f38d713b-7113-4fea-9173-9e9b00b22f71"


End searching for 'DeleteContainer'
Press any key to continue . . .
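
Each record found this way can then be split on ';' for further analysis. The following is a minimal sketch, not part of the original sample, that keeps quoted fields intact and HTML-decodes them (recall that fields which may contain ';' are quoted and HTML encoded); it assumes .NET 4's System.Net.WebUtility is available for decoding:

/// <summary>
/// Splits a single log record on ';', keeping quoted fields intact
/// and HTML-decoding their contents.
/// </summary>
static List<string> ParseLogRecord(string line)
{
    List<string> fields = new List<string>();
    StringBuilder current = new StringBuilder();
    bool inQuotes = false;

    foreach (char c in line)
    {
        if (c == '"')
        {
            inQuotes = !inQuotes; // toggle quoted-field state; the quotes themselves are dropped
        }
        else if (c == ';' && !inQuotes)
        {
            fields.Add(WebUtility.HtmlDecode(current.ToString()));
            current.Length = 0; // reset for the next field
        }
        else
        {
            current.Append(c);
        }
    }

    fields.Add(WebUtility.HtmlDecode(current.ToString())); // the last field
    return fields;
}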

For more information, please see the MSDN Documentation.

Jai Haridas, Monilee Atkinson, and Brad Calder

Comments

  • Anonymous
    August 13, 2011
    Great feature, and great writeup and sample code to go with it. This blog entry will definitely save me some effort, and I really appreciate it. One question (that I'm guessing I already know the answer to): what happens to logging/metrics when I'm serving a blob through Windows Azure CDN, where Blob Storage only gets hit once-per-TTL? And, if the answer is "yeah, we're not logging CDN hits" then you can guess what my next formal feature request is going to be. :-)  (Not that I'm not incredibly grateful for what you've done here already.) In effect, the logging/metrics feature seems so good that it almost acts as a disincentive to use the CDN, when you're looking to track hits. Thanks! Scott Seattle, WA, USA

  • Anonymous
    August 16, 2011
    @Scott When the Azure CDN access your storage account (e.g., when the TTL expires), that request will be logged inside of the storage account.  The hits at the Azure CDN edges are not logged as part of the storage account (only the requests that directly hit the storage account).   So you won’t see logging/metrics for Azure CDN hits as part of the storage account’s logging/metrics.   Yes, the Azure CDN team is looking at providing logging for requests coming into Azure CDN in the future. Brad

  • Anonymous
    December 14, 2011
    @Brad, very useful post. I have one question: does Storage Logging affect performance? Atul

  • Anonymous
    December 20, 2011
    @Atul, the performance of a user's storage account is not impacted by the system writing the logging data to that account.