IntentRecognizer Class
In addition to performing speech-to-text recognition, the IntentRecognizer extracts structured information about the intent of the speaker.
- Inheritance: IntentRecognizer
Constructor
IntentRecognizer(speech_config: SpeechConfig, audio_config: AudioConfig | None = None, intents: Iterable[Tuple[str | LanguageUnderstandingModel, str]] | None = None)
Parameters

Name | Description |
---|---|
`speech_config` (required) | The config for the speech recognizer. |
`audio_config` | The config for the audio input. Default value: None. |
`intents` | Intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id) to be recognized. Default value: None. |
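As a minimal construction sketch — the subscription key, region, and LUIS app id below are placeholders you would replace with your own values:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute your own subscription key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

# The intents iterable can mix (model, intent_id) and (simple_phrase, intent_id) pairs.
model = speechsdk.intent.LanguageUnderstandingModel(app_id="YourLuisAppId")
intents = [
    (model, "HomeAutomation.TurnOn"),
    ("quit the application", "quit"),
]

recognizer = speechsdk.intent.IntentRecognizer(
    speech_config=speech_config,
    audio_config=audio_config,
    intents=intents,
)
```

Passing `intents` at construction time is optional; the same pairs can be registered later with add_intent or add_intents.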
Methods

Name | Description |
---|---|
`add_all_intents` | Adds all intents from the specified Language Understanding Model. |
`add_intent` | Adds an intent to the recognizer from a simple phrase, a LanguageUnderstandingModel intent, or an IntentTrigger. |
`add_intents` | Adds intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id). |
`recognize_once` | Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of about 30 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
`recognize_once_async` | Performs recognition in a non-blocking (asynchronous) mode. Recognizes a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of about 30 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
`start_continuous_recognition` | Synchronously initiates a continuous recognition operation. Connect to EventSignal to receive recognition results. Call stop_continuous_recognition to stop the recognition. |
`start_continuous_recognition_async` | Asynchronously initiates a continuous recognition operation. Connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition. |
`start_keyword_recognition` | Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer listens for the keyword to start recognition. Call stop_keyword_recognition() to end keyword-initiated recognition. |
`start_keyword_recognition_async` | Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer listens for the keyword to start recognition. Call stop_keyword_recognition_async() to end keyword-initiated recognition. |
`stop_continuous_recognition` | Synchronously terminates an ongoing continuous recognition operation. |
`stop_continuous_recognition_async` | Asynchronously terminates an ongoing continuous recognition operation. |
`stop_keyword_recognition` | Synchronously ends keyword-initiated recognition. |
`stop_keyword_recognition_async` | Asynchronously ends keyword-initiated recognition. |
add_all_intents
Adds all intents from the specified Language Understanding Model.
add_all_intents(model: LanguageUnderstandingModel)
Parameters

Name | Description |
---|---|
`model` (required) | The language understanding model from which all intents are added. |
add_intent
Add an intent to the recognizer. There are different ways to do this:

- `add_intent(simple_phrase)`: Adds a simple phrase that may be spoken by the user, indicating a specific user intent.
- `add_intent(simple_phrase, intent_id)`: Adds a simple phrase that may be spoken by the user, indicating a specific user intent. Once recognized, the result's intent id will match the id supplied here.
- `add_intent(model, intent_name)`: Adds a single intent by name from the specified LanguageUnderstandingModel.
- `add_intent(model, intent_name, intent_id)`: Adds a single intent by name from the specified LanguageUnderstandingModel. Once recognized, the result's intent id will match the id supplied here.
- `add_intent(trigger, intent_id)`: Adds the specified IntentTrigger.
add_intent(*args)
Parameters

Which parameters are required depends on the overload used.

Name | Description |
---|---|
`model` | The language understanding model containing the intent. |
`intent_name` | The name of the single intent to be included from the language understanding model. |
`simple_phrase` | The phrase corresponding to the intent. |
`intent_id` | A custom id string to be returned in the IntentRecognitionResult's intent_id property. |
`trigger` | The IntentTrigger corresponding to the intent. |
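The overloads above can be sketched as follows; the subscription key, region, and LUIS app id are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# Simple phrase; the phrase itself serves as the intent id.
recognizer.add_intent("turn on the lights")

# Simple phrase with an explicit intent id returned in the result.
recognizer.add_intent("turn off the lights", "lights_off")

# A named intent from a LanguageUnderstandingModel, with a custom id.
model = speechsdk.intent.LanguageUnderstandingModel(app_id="YourLuisAppId")
recognizer.add_intent(model, "HomeAutomation.TurnOn", "turn_on")
```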
add_intents
Add intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id).
add_intents(intents_iter: Iterable[Tuple[str | LanguageUnderstandingModel, str]])
Parameters

Name | Description |
---|---|
`intents_iter` (required) | Intents from an iterable over pairs of (model, intent_id) or (simple_phrase, intent_id) to be recognized. |
recognize_once
Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of about 30 seconds of audio is processed. Returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once() -> IntentRecognitionResult
Returns

Type | Description |
---|---|
IntentRecognitionResult | The result value of the synchronous recognition. |
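A short usage sketch, assuming placeholder credentials; recognize_once blocks the calling thread until one utterance is recognized:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)
recognizer.add_intent("turn on the lights", "lights_on")

# Blocks until a single utterance is recognized (or ~30 seconds of audio).
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedIntent:
    print(result.text, result.intent_id)
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Speech recognized, but no intent matched:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
```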
recognize_once_async
Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of about 30 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once_async() -> ResultFuture
Returns

Type | Description |
---|---|
ResultFuture | A future containing the result value of the asynchronous recognition. |
start_continuous_recognition
Synchronously initiates a continuous recognition operation. Connect to EventSignal to receive recognition results. Call stop_continuous_recognition to stop the recognition.
start_continuous_recognition()
start_continuous_recognition_async
Asynchronously initiates a continuous recognition operation. Connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.
start_continuous_recognition_async() -> ResultFuture
Returns

Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been initialized. |
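The continuous-recognition flow can be sketched as below, with placeholder credentials; the fixed sleep stands in for whatever loop drives your application:

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)
recognizer.add_intent("stop listening", "stop")

# Connect callbacks to the EventSignal attributes before starting.
def on_recognized(evt):
    print(evt.result.text, evt.result.intent_id)

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(lambda evt: print("session stopped"))

# .get() waits on the ResultFuture until recognition has been initialized.
recognizer.start_continuous_recognition_async().get()
time.sleep(10)  # let recognition run; replace with your application's own loop
recognizer.stop_continuous_recognition_async().get()
```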
start_keyword_recognition
Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer listens for the keyword to start recognition. Call stop_keyword_recognition() to end keyword-initiated recognition.
start_keyword_recognition(model: KeywordRecognitionModel)
Parameters

Name | Description |
---|---|
`model` (required) | The keyword recognition model that specifies the keyword to be recognized. |
start_keyword_recognition_async
Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer listens for the keyword to start recognition. Call stop_keyword_recognition_async() to end keyword-initiated recognition.
start_keyword_recognition_async(model: KeywordRecognitionModel)
Parameters

Name | Description |
---|---|
`model` (required) | The keyword recognition model that specifies the keyword to be recognized. |

Returns

Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been initialized. |
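A keyword-initiated sketch under placeholder credentials; the model file path is illustrative and would point at a keyword model you have created (for example, with Speech Studio):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)
recognizer.add_intent("turn on the lights", "lights_on")

# "keyword_model.table" is an illustrative path to a keyword model file.
model = speechsdk.KeywordRecognitionModel("keyword_model.table")

# After the future is fulfilled, the recognizer is listening for the keyword
# and starts recognition each time the keyword is heard.
recognizer.start_keyword_recognition_async(model).get()
# ... application runs ...
recognizer.stop_keyword_recognition_async().get()
```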
stop_continuous_recognition
Synchronously terminates an ongoing continuous recognition operation.
stop_continuous_recognition()
stop_continuous_recognition_async
Asynchronously terminates an ongoing continuous recognition operation.
stop_continuous_recognition_async()
Returns

Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been stopped. |
stop_keyword_recognition
Synchronously ends keyword-initiated recognition.
stop_keyword_recognition()
stop_keyword_recognition_async
Asynchronously ends keyword-initiated recognition.
stop_keyword_recognition_async()
Returns

Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been stopped. |
Attributes
authorization_token
The authorization token that will be used for connecting to the service.
Note
The caller needs to ensure that the authorization token is valid. Before the
authorization token expires, the caller needs to refresh it by calling this setter with a
new valid token. Otherwise, the recognizer will encounter errors during recognition.
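A token-refresh sketch; `fetch_fresh_token` is a hypothetical helper standing in for however your application obtains tokens from its token service:

```python
import azure.cognitiveservices.speech as speechsdk

def fetch_fresh_token() -> str:
    # Hypothetical helper: request a new authorization token from your
    # token service before the current one expires.
    return "NewValidToken"

# A recognizer can be created from an authorization token instead of a key.
speech_config = speechsdk.SpeechConfig(auth_token="InitialToken", region="YourRegion")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# Refresh the token on the recognizer before it expires.
recognizer.authorization_token = fetch_fresh_token()
```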
canceled
Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).
Callbacks connected to this signal are called with an IntentRecognitionCanceledEventArgs instance as the single argument.
endpoint_id
The endpoint ID of a customized speech model that is used for recognition, or a custom voice model for speech synthesis.
properties
A collection of properties and their values defined for this Recognizer.
recognized
Signal for events containing final recognition results (indicating a successful recognition attempt).
Callbacks connected to this signal are called with an IntentRecognitionEventArgs instance as the single argument.
recognizing
Signal for events containing intermediate recognition results.
Callbacks connected to this signal are called with an IntentRecognitionEventArgs instance as the single argument.
session_started
Signal for events indicating the start of a recognition session (operation).
Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.
session_stopped
Signal for events indicating the end of a recognition session (operation).
Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.
speech_end_detected
Signal for events indicating the end of speech.
Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.
speech_start_detected
Signal for events indicating the start of speech.
Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.
Azure SDK for Python