
Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

How to specify the speech recognizer language

Learn how to select an installed language to use for speech recognition.

Here, we enumerate the languages installed on a system, identify which is the default language, and select a different language for recognition.

What you need to know

Prerequisites

This topic builds on Quickstart: Speech recognition. To complete this tutorial, you should have a basic understanding of speech recognition and recognition constraints.

Instructions

Step 1: Identify the default language

A speech recognizer uses the system speech language as its default recognition language. This language is set by the user on the device Settings > System > Speech > Speech Language screen.

We identify the default language by checking the SystemSpeechLanguage static property.

var language = SpeechRecognizer.SystemSpeechLanguage; 
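For example, a minimal sketch (assuming a UWP app with access to the Windows.Media.SpeechRecognition and Windows.Globalization namespaces) that reads the default language and prints its BCP-47 tag:

using System.Diagnostics;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

// The system speech language is exposed as a Windows.Globalization.Language.
Language language = SpeechRecognizer.SystemSpeechLanguage;

// LanguageTag is the BCP-47 tag (for example, "en-US");
// DisplayName is the localized, human-readable name.
Debug.WriteLine($"{language.DisplayName} ({language.LanguageTag})");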

Step 2: Confirm an installed language

Installed languages can vary between devices. You should verify the existence of a language if you depend on it for a particular constraint.

Note  A reboot is required after a new language pack is installed. An exception with error code SPERR_NOT_FOUND (0x8004503a) is raised if the specified language is not supported or has not finished installing.

Determine the supported languages on a device by checking one of two static properties of the SpeechRecognizer class:

SupportedTopicLanguages — the languages available for predefined topic constraints, such as dictation and web search.

SupportedGrammarLanguages — the languages available for list constraints and SRGS grammar files.
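The following sketch enumerates the supported grammar languages and checks for a specific one before relying on it; the "de-DE" tag is purely illustrative:

using System.Linq;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

// List every language the recognizer supports for grammar constraints.
foreach (Language lang in SpeechRecognizer.SupportedGrammarLanguages)
{
    System.Diagnostics.Debug.WriteLine($"{lang.DisplayName} ({lang.LanguageTag})");
}

// Verify a specific language before depending on it for a constraint.
bool hasGerman = SpeechRecognizer.SupportedGrammarLanguages
    .Any(l => l.LanguageTag == "de-DE");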

Step 3: Specify a language

To specify a language, pass a Language object in the SpeechRecognizer constructor.

Here, we specify "en-US" as the recognition language.

var language = new Windows.Globalization.Language("en-US"); 
var recognizer = new SpeechRecognizer(language); 
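Putting the steps together, here is a sketch that falls back to the system speech language when a requested language is not installed, and guards against the SPERR_NOT_FOUND exception described in Step 2 (the "fr-FR" tag is illustrative):

using System;
using System.Linq;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

Language requested = new Windows.Globalization.Language("fr-FR");

// Use the requested language only if it is a supported topic language;
// otherwise fall back to the system speech language.
Language language = SpeechRecognizer.SupportedTopicLanguages
    .Any(l => l.LanguageTag == requested.LanguageTag)
    ? requested
    : SpeechRecognizer.SystemSpeechLanguage;

SpeechRecognizer recognizer;
try
{
    recognizer = new SpeechRecognizer(language);
}
catch (Exception ex) when ((uint)ex.HResult == 0x8004503A) // SPERR_NOT_FOUND
{
    // The language pack is not supported or has not finished installing
    // (a reboot may still be pending); fall back to the system language.
    recognizer = new SpeechRecognizer(SpeechRecognizer.SystemSpeechLanguage);
}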

Remarks

A topic constraint can be configured by adding a SpeechRecognitionTopicConstraint to the Constraints collection of the SpeechRecognizer and then calling CompileConstraintsAsync. A SpeechRecognitionResultStatus of TopicLanguageNotSupported is returned if the recognizer is not initialized with a supported topic language.
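For example, a sketch of a topic constraint, run inside an async method, with the recognizer created as in Step 3:

using Windows.Media.SpeechRecognition;

var recognizer = new SpeechRecognizer(new Windows.Globalization.Language("en-US"));

// Add a predefined dictation topic constraint, then compile.
recognizer.Constraints.Add(
    new SpeechRecognitionTopicConstraint(SpeechRecognitionScenario.Dictation, "dictation"));

SpeechRecognitionCompilationResult result = await recognizer.CompileConstraintsAsync();
if (result.Status == SpeechRecognitionResultStatus.TopicLanguageNotSupported)
{
    // The recognizer's language is not a supported topic language.
}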

A list constraint is configured by adding a SpeechRecognitionListConstraint to the Constraints collection of the SpeechRecognizer and then calling CompileConstraintsAsync. You cannot specify the language of a custom list directly. Instead the list will be processed using the language of the recognizer.
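A sketch of a list constraint, reusing the recognizer from the previous example; the phrases and the "yesOrNo" tag are illustrative:

// The phrases carry no language of their own; they are matched
// using the language the recognizer was initialized with.
recognizer.Constraints.Add(
    new SpeechRecognitionListConstraint(new[] { "yes", "no" }, "yesOrNo"));

SpeechRecognitionCompilationResult listResult = await recognizer.CompileConstraintsAsync();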

An SRGS grammar is an open-standard XML format represented by the SpeechRecognitionGrammarFileConstraint class. Unlike custom lists, you can specify the language of the grammar in the SRGS markup. CompileConstraintsAsync fails with a SpeechRecognitionResultStatus of TopicLanguageNotSupported if the recognizer is not initialized to the same language as the SRGS markup.
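A sketch of an SRGS grammar file constraint, again inside an async method; "colors.grxml" is a hypothetical grammar file packaged with the app, and its xml:lang attribute must match the recognizer's language:

using Windows.ApplicationModel;
using Windows.Storage;
using Windows.Media.SpeechRecognition;

StorageFile grammarFile =
    await Package.Current.InstalledLocation.GetFileAsync("colors.grxml");

recognizer.Constraints.Add(new SpeechRecognitionGrammarFileConstraint(grammarFile));

SpeechRecognitionCompilationResult grammarResult = await recognizer.CompileConstraintsAsync();
if (grammarResult.Status == SpeechRecognitionResultStatus.TopicLanguageNotSupported)
{
    // The recognizer's language does not match the grammar's xml:lang.
}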

Related topics

Responding to speech interactions

Speech design guidelines