Overview of the Semantic Logging Application Block

The Semantic Logging Application Block helps to minimize the development effort required to implement structured event logging in your applications, and reduces the chance of inconsistencies and errors when writing code that follows modern practice for generating logs containing semantically useful, typed information. Logs of this type make automated log parsing and monitoring much easier and more efficient.

This topic discusses the following:

  • What is semantic logging?
  • What does the Semantic Logging Application Block do?
  • How does the Semantic Logging Application Block work?
  • When should I use the Semantic Logging Application Block?
  • What are the limitations of the block?
  • Next steps

What is semantic logging?

Most applications and services will need to generate log messages in response to unexpected conditions and events such as an application exception, or failure to connect to a database. Logging is also useful for tracing application flow through components during the execution of an application use case or scenario, and for auditing operations. In addition, applications often need to write information locally and over a network, and it may be necessary to collate events from multiple sources into a single location.

Semantic logging refers specifically to the use of strongly typed events and a consistent structure for log messages. For example, a log message may contain a mixture of text and numeric values, such as "Role SALES2 failed while processing order #36741 for 24 items with total value $185.46. Item BR37A with discount 20% was not found in table PRICES." While this message is useful to a human reader, it is very difficult to handle in automated monitoring systems or to parse in code.

When using a semantic logging approach, the event may still expose the formatted message, but it will also include a payload containing the individual variables as typed values that match a pre-defined schema. When routed to a suitable destination, the event’s payload is written as discrete elements, which makes it much easier to parse, filter, and obtain useful information from the logs. Even when the payload elements need to be flattened into a string, it is still possible to render the payload in a way that makes the events easy to consume.
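
For example, the order-processing message above, when routed through a sink or formatter that preserves the payload, might be rendered along the following lines. This fragment is purely illustrative: the property names, event name, and layout are assumptions, not the block's exact output.

{
  "EventName": "OrderDiscountProcessingFailed",
  "Level": "Error",
  "Payload": {
    "roleName": "SALES2",
    "orderNumber": 36741,
    "itemCount": 24,
    "totalValue": 185.46,
    "productId": "BR37A",
    "discount": 0.20,
    "tableName": "PRICES"
  }
}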

What does the Semantic Logging Application Block do?

The Semantic Logging Application Block is a framework for capturing and manipulating events raised by applications, and storing the typed and structured information they contain in log files or other logging stores. It takes advantage of features of the .NET Framework (version 4.5 and above) and Event Tracing for Windows (ETW). ETW is a fast, lightweight, strongly typed, extensible logging system that is built into the Windows operating system.

In version 4.5 and higher of the .NET Framework, a base class named EventSource (in the System.Diagnostics.Tracing namespace) makes it easy to use the capabilities of ETW to create custom event sources that generate semantically structured log messages. The Semantic Logging Application Block consumes events raised by event sources, and provides features to help you sample, filter, correlate, format, and store these events in a wide range of target logging stores.

As an example of using a custom event source, your application might define an event named OrderDiscountProcessingFailed that accepts parameters for the role name, order number, item count, total value, product ID, discount, and database table name. The order processing code simply calls this method with the appropriate values for the parameters.

MyCompanyEventSource.Log.OrderDiscountProcessingFailed(request.RoleName,
      order.OrderNumber, order.ItemCount, order.TotalValue,
      orderItem.ProductId, orderItem.Discount, context.TableName);

The application code does not need to assemble the message, and it does not need to specify other details of the event such as the ID, severity level, or verbosity because the method in the event source defines these properties for this specific event. The event source method assembles the event, including all of the required information to match the pre-defined schema, and publishes it to any subscribed listeners.
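
To show how this works, the event source behind that call might look something like the following sketch. The event ID, level, and message template are illustrative assumptions (this topic does not define them), and double is used for the monetary values because ETW payloads support only a limited set of primitive types.

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany")]
public sealed class MyCompanyEventSource : EventSource
{
    public static readonly MyCompanyEventSource Log = new MyCompanyEventSource();

    // The Event attribute fixes the ID, severity level, and message template for this
    // event, so callers only supply the strongly typed payload values.
    [Event(1400, Level = EventLevel.Error,
        Message = "Role {0} failed while processing order #{1} for {2} items with total value {3}. Item {4} with discount {5} was not found in table {6}.")]
    public void OrderDiscountProcessingFailed(string roleName, int orderNumber,
        int itemCount, double totalValue, string productId, double discount, string tableName)
    {
        if (this.IsEnabled())
        {
            this.WriteEvent(1400, roleName, orderNumber, itemCount, totalValue,
                productId, discount, tableName);
        }
    }
}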

In addition, the application code does not need to specify the logging destination for the event. This information is specified in the configuration of the Semantic Logging Application Block.

Using the Semantic Logging Application Block with custom event sources based on the EventSource class provides the following advantages:

  • You can abstract the process of creating events from the specific logging providers to custom event sources based on the EventSource class, and validate these event source classes as you develop your applications. The event sources contain definitions of each event the application will log and include information such as the event identifier, the severity level, verbosity, extra information, and payload structure. You benefit from a holistic view of all the application events because they are defined in a centralized location, and the event definitions can be refactored to make them consistent with each other.
  • You can implement a consistent approach for capturing and filtering events, and storing them in many different types of destination. These destinations can include a database, a text file, Microsoft Azure table storage, the console, and custom locations using the application block extension points.
  • You can more easily query and analyze log files using automation because the log messages are formatted and structured in a consistent manner. The SQL Database and Azure Table sinks preserve the structure of log messages, and the JSON and XML formatters used with sinks that generate text log files produce event logs that have a consistent structure.
  • You can capture, correlate, and consolidate log messages from multiple sources. For example, you can log events from multiple processes that are part of the same distributed application, or from multiple applications, and collect these events in a central location. You might also want to capture events generated by asynchronous tasks, or even implement interaction between applications such as automating activities in response to events that are recorded in a log file or a Microsoft Azure Table.
  • There is much less chance of making a coding error when writing log messages because the parameters of the methods you call are strongly typed. In addition, an event with a particular ID will always have the same verbosity, extra information, and payload structure.

How does the Semantic Logging Application Block work?

The Semantic Logging Application Block captures events generated by custom event source classes that extend the EventSource class. You create these classes to define the events your application can raise for logging. You then define the event sinks that specify the target destination(s) for events, such as a database or a text file, and—where appropriate—attach a log formatter to these sinks. This gives you full control over the routing and format of the logged information.

Note

One of the most important aspects of using the Semantic Logging Application Block is understanding the EventSource class and related concepts, such as how to develop and use your own custom event source classes. The EventSource class is part of the .NET Framework, and using it is not specific to the Semantic Logging Application Block.

The process by which event messages are passed to the event sinks depends on whether you are using the block just to collect events within a single application, or to capture and log events from more than one application (including applications running in different locations on the network). The Semantic Logging Application Block can be used in two ways:

  • In-process. In this scenario, you just need to collect events from within a single application. The event sinks run in the same process as the application and subscribe to events exposed by a trace listener that is part of the application block.
  • Out-of-process. In this scenario, you want to maximize logging throughput and improve the reliability of logging should the application fail. Your custom event source running within the application writes events to the ETW infrastructure. However, your event sinks run within a separate logging application and subscribe to the events exposed by a trace event service, which is notified of events by ETW. Typically, you will use this approach in production applications, and for services that need to scale.

Figure 1 shows the two different scenarios for using the Semantic Logging Application Block, and the way that events pass from the application code to the destination log.

Figure 1 - The in-process and out-of-process scenarios for using the block

The schematic indicates the parts of the overall process that are implemented by the Semantic Logging Application Block for the in-process scenario, and by the Out-of-Process Host application that is available for use in out-of-process scenarios. The objects shown in the schematic are described in detail in the topic Overview of logging using the Semantic Logging Application Block.

Note

In the out-of-process scenario, both the application that generates log messages and the Out-of-Process Host application that collects the messages must run on the same physical computer. In a distributed application, you must install the Out-of-Process Host application on every computer. To collate log messages in this scenario, each instance of the Out-of-Process Host application can write them to a single shared destination such as a database, Azure table storage, or, if you create or obtain suitable custom extensions for the block, other stores such as Elasticsearch and Splunk.

Using the Semantic Logging Application Block in-process is simple to set up, but requires code in the application to define the logging strategy and instantiate event collection. All of the event sinks provided with the application block, with the exception of the Console sink, expose asynchronous methods and use batching for network calls in order to minimize the impact of logging on your application.
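
As a rough sketch of that startup code, the snippet below creates an in-process listener, enables the custom event source, and routes matching events to the console and to a flat file using the JSON formatter. The file name and event level are illustrative, and the sink and formatter types shown assume the corresponding packages for the block are installed.

using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Formatters;

public static class LoggingSetup
{
    private static ObservableEventListener listener;

    // Typically called once at application startup.
    public static void Start()
    {
        listener = new ObservableEventListener();

        // Subscribe to the custom event source, capturing Warning events and above.
        listener.EnableEvents(MyCompanyEventSource.Log, EventLevel.Warning, EventKeywords.All);

        // Route the captured events to the console and to a flat file formatted as JSON.
        listener.LogToConsole();
        listener.LogToFlatFile("order-processing.log", new JsonEventTextFormatter());
    }

    // Typically called once at application shutdown to flush and release the sinks.
    public static void Stop()
    {
        listener.Dispose();
    }
}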

Using the Semantic Logging Application Block out-of-process requires some configuration effort, but the configuration can be changed at runtime without requiring any changes to the application, or restarting the out-of-process event listener host. The out-of-process approach can also improve logging performance by passing the workload of handling log items to ETW and the operating system, and enable you to collate logs from more than one process. It also enhances the reliability of the logging solution because log messages are delivered to the ETW infrastructure in the operating system immediately, minimizing the risk of losing log messages if your line of business application fails.
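
By way of illustration, the Out-of-Process Host reads its settings from an XML configuration file. The fragment below is only a sketch of the kind of content such a file might contain; the element and attribute names, the schema namespace, and the sink settings are assumptions and should be verified against the schema files that ship with the block.

<?xml version="1.0"?>
<!-- Illustrative only: verify element and attribute names against the
     configuration schema installed with the Out-of-Process Host. -->
<configuration
    xmlns="http://schemas.microsoft.com/practices/2013/entlib/semanticlogging/etw">
  <traceEventService />
  <sinks>
    <!-- Write events raised by the custom event source to a flat file. -->
    <flatFileSink name="orderProcessingSink" fileName="order-processing.log">
      <sources>
        <eventSource name="MyCompany" level="Error" />
      </sources>
    </flatFileSink>
  </sinks>
</configuration>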

For more information about when to use the in-process and the out-of-process approach, see the topic Overview of logging using the Semantic Logging Application Block.

When should I use the Semantic Logging Application Block?

The Semantic Logging Application Block is useful when:

  • You need an easy way to manipulate and store structured event messages that conform to a known schema, have well-defined characteristics, and carry a payload that is easy to consume.
  • You need to log information to a database, an Azure table, a file, or any other destination for which you can create or obtain a custom event sink and log formatter.
  • You need to filter logging messages based on log level or keywords.
  • You need to use sampling to manage log messages, and be able to correlate log messages from different threads and tasks that relate to the same business process.
  • You want to use events that conform to ETW without needing to commit to using ETW directly, avoiding the learning curve that this imposes.

Note

The EventSource class in the .NET Framework 4.5 and later is likely to increase the use of ETW by developers. The Semantic Logging Application Block enables you to benefit from the way that the EventSource class helps you create structured events, without the need to learn how to work directly with ETW. At the same time, using the Semantic Logging Application Block will help you become familiar with ETW and its capabilities.

What are the limitations of the block?

There are some situations where events may be lost. In the in-process scenario, events buffered by the sinks may be lost if the application crashes. In both the in-process and out-of-process scenarios, events may be lost under very heavy load, because ETW is itself a lossy mechanism under heavy load. Dropping events in these scenarios is a deliberate design decision that allows the application to continue to operate.

For more information about using the block in high load scenarios, see Performance considerations.

The Semantic Logging Application Block formatters do not encrypt logging information, and sink destinations store logging information as clear text. You can protect the data passing over the network by using SSL with the Azure Table sink and wire encryption with the SQL Database sink. However, an attacker who can access an event listener destination can read the information, so you should restrict access to all logging destinations to prevent unauthorized access to sensitive information.

For more information, see Securing the Semantic Logging Application Block.

Next steps

This guide to the Semantic Logging Application Block is divided into sections that will help you to quickly get started using the block and learn how you can get the most benefit from it.
