EventProcessor<TPartition>.OnProcessingEventBatchAsync Method

Definition

Performs the tasks needed to process a batch of events for a given partition as they are read from the Event Hubs service.

protected abstract System.Threading.Tasks.Task OnProcessingEventBatchAsync (System.Collections.Generic.IEnumerable<Azure.Messaging.EventHubs.EventData> events, TPartition partition, System.Threading.CancellationToken cancellationToken);
abstract member OnProcessingEventBatchAsync : seq<Azure.Messaging.EventHubs.EventData> * 'Partition * System.Threading.CancellationToken -> System.Threading.Tasks.Task
Protected MustOverride Function OnProcessingEventBatchAsync (events As IEnumerable(Of EventData), partition As TPartition, cancellationToken As CancellationToken) As Task

Parameters

events
IEnumerable<EventData>

The batch of events to be processed.

partition
TPartition

The context of the partition from which the events were read.

cancellationToken
CancellationToken

A CancellationToken instance to signal the request to cancel the processing. This is most likely to occur when the processor is shutting down.

Returns

Task

Remarks

The number of events in the events batch may vary, with the batch containing between zero and the maximum batch size that was specified when the processor was created. The actual number of events in a batch depends on the number of events available in the processor's prefetch queue at the time the read takes place.
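
For illustration, the maximum batch size is fixed when the processor is constructed. The sketch below is a minimal, hypothetical subclass; it assumes the constructor overload whose first parameter, eventBatchMaximumCount, sets that limit, and the class is left abstract so the remaining required overrides can be omitted. The same using directives are assumed by the later snippets in these remarks.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Primitives;

// Sketch only: the maximum batch count passed to the base constructor bounds the size
// of the batch later handed to OnProcessingEventBatchAsync.  CustomProcessor is a
// hypothetical subclass, left abstract so its other required overrides can be omitted.
internal abstract class CustomProcessor : EventProcessor<EventProcessorPartition>
{
    protected CustomProcessor(string connectionString, string consumerGroup)
        : base(eventBatchMaximumCount: 100, consumerGroup, connectionString)
    {
    }
}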

When at least one event is available in the prefetch queue, the available events will be used to form a batch as close to the requested maximum batch size as possible, without waiting for additional events to be read from the Event Hub partition. When no events are available in the prefetch queue, the processor will wait until at least one event is available or the requested MaximumWaitTime has elapsed, after which the batch will be dispatched for processing.

If MaximumWaitTime is null, the processor will continue trying to read from the Event Hub partition until a batch with at least one event can be formed; it will not dispatch any empty batches to this method.
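
As a sketch of how an override might account for this, the method below, assumed to live in a subclass such as CustomProcessor above, processes whatever the batch contains and simply does nothing when the batch is empty. ProcessSingleEventAsync is a hypothetical application helper, not part of the Event Hubs API.

// Sketch only: placed inside a subclass such as CustomProcessor above.
// ProcessSingleEventAsync is a hypothetical application-specific helper.
protected override async Task OnProcessingEventBatchAsync(
    IEnumerable<EventData> events,
    EventProcessorPartition partition,
    CancellationToken cancellationToken)
{
    // The batch is empty when a MaximumWaitTime was configured and elapsed before any
    // event became available; in that case the loop simply does not run.
    foreach (EventData eventData in events)
    {
        // Events from the same partition are delivered in the order they were read.
        await ProcessSingleEventAsync(eventData, partition.PartitionId, cancellationToken);
    }
}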

This method will be invoked concurrently, limited to one call per partition. The processor will await each invocation to ensure that the events from the same partition are processed in the order that they were read from the partition. No time limit is imposed on an invocation of this handler; the processor will wait indefinitely for execution to complete before dispatching another batch of events for the associated partition. It is safe for implementations to perform long-running operations, retries, delays, and dead-lettering activities.

Should an exception occur within the code for this method, the event processor will allow it to propagate up the stack without attempting to handle it in any way. On most hosts, this will fault the task responsible for partition processing, causing it to be restarted from the last checkpoint. On some hosts, it may crash the process. Developers are strongly encouraged to take all exception scenarios into account and guard against them using try/catch blocks and other means as appropriate.
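
One hedged way to follow this guidance is to wrap the per-event work in a try/catch so that no exception escapes the handler, as sketched below. DeadLetterEventAsync and ProcessSingleEventAsync are hypothetical application helpers, not part of the Event Hubs API.

// Sketch only: guarding the handler body so exceptions never propagate back to the
// processor.  Both helpers called here are hypothetical application code.
protected override async Task OnProcessingEventBatchAsync(
    IEnumerable<EventData> events,
    EventProcessorPartition partition,
    CancellationToken cancellationToken)
{
    foreach (EventData eventData in events)
    {
        try
        {
            await ProcessSingleEventAsync(eventData, partition.PartitionId, cancellationToken);
        }
        catch (Exception ex)
        {
            // Long-running recovery work, retries, delays, and dead-lettering are safe
            // here; the processor waits for this invocation to finish before dispatching
            // the next batch for this partition.
            await DeadLetterEventAsync(eventData, ex, cancellationToken);
        }
    }
}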

It is not recommended that the state of the processor be managed directly from within this method; requesting to start or stop the processor may result in a deadlock scenario, especially if using the synchronous form of the call.
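
If shutdown needs to be triggered by something observed in the handler, one option is to signal the hosting code and let it stop the processor outside of the dispatch path. The sketch below assumes a processor instance of a subclass such as CustomProcessor above and a host-owned CancellationTokenSource; only StartProcessingAsync and StopProcessingAsync are Event Hubs API calls, everything else is hypothetical hosting code.

// Sketch only: hosting code that owns the shutdown signal.  Inside the handler, request
// shutdown by calling shutdownSource.Cancel() instead of invoking StopProcessing or
// StopProcessingAsync directly.
using var shutdownSource = new CancellationTokenSource();

await processor.StartProcessingAsync(shutdownSource.Token);

try
{
    // Keep the host alive until the handler (or any other component) signals shutdown.
    await Task.Delay(Timeout.Infinite, shutdownSource.Token);
}
catch (TaskCanceledException)
{
    // Expected when shutdown was requested.
}

await processor.StopProcessingAsync(CancellationToken.None);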

Applies to