SpeechServiceConnection_Key |
The Cognitive Services Speech Service subscription key. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::FromSubscription. |
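For illustration, a minimal C++ sketch of the recommended route, SpeechConfig::FromSubscription, which populates this property internally (the key and region strings are placeholders):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    // Placeholder subscription key and region; substitute your own values.
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // The key now travels in SpeechServiceConnection_Key; no direct property access needed.
    auto recognizer = SpeechRecognizer::FromConfig(config);
    auto result = recognizer->RecognizeOnceAsync().get();
    return 0;
}
```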
SpeechServiceConnection_Endpoint |
The Cognitive Services Speech Service endpoint (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::FromEndpoint. NOTE: This endpoint is not the same as the endpoint used to obtain an access token. |
SpeechServiceConnection_Region |
The Cognitive Services Speech Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::FromSubscription, SpeechConfig::FromEndpoint, SpeechConfig::FromHost, SpeechConfig::FromAuthorizationToken. |
SpeechServiceAuthorization_Token |
The Cognitive Services Speech Service authorization token (aka access token). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::FromAuthorizationToken, SpeechRecognizer::SetAuthorizationToken, IntentRecognizer::SetAuthorizationToken, TranslationRecognizer::SetAuthorizationToken. |
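As a sketch of the token-based route: create the configuration from an authorization token and refresh it on the recognizer before it expires (token strings and region are placeholders; obtaining and renewing tokens is up to the caller):

```cpp
#include <speechapi_cxx.h>
#include <string>

using namespace Microsoft::CognitiveServices::Speech;

void UseAuthorizationToken(const std::string& initialToken, const std::string& renewedToken)
{
    // Placeholder region; tokens are typically fetched from the token service.
    auto config = SpeechConfig::FromAuthorizationToken(initialToken, "YourServiceRegion");
    auto recognizer = SpeechRecognizer::FromConfig(config);

    // Authorization tokens expire; refresh the recognizer's copy before that happens.
    recognizer->SetAuthorizationToken(renewedToken);
}
```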
SpeechServiceAuthorization_Type |
The Cognitive Services Speech Service authorization type. Currently unused. |
SpeechServiceConnection_EndpointId |
The Cognitive Services Custom Speech or Custom Voice Service endpoint id. Under normal circumstances, you shouldn't have to use this property directly. Instead use SpeechConfig::SetEndpointId. NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details. |
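A short sketch of the recommended SetEndpointId call (key, region, and endpoint id are placeholders copied from the Custom Speech portal):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void UseCustomSpeechModel()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Routes recognition to the custom model deployed behind this endpoint id.
    config->SetEndpointId("YourCustomSpeechEndpointId");

    auto recognizer = SpeechRecognizer::FromConfig(config);
}
```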
SpeechServiceConnection_Host |
The Cognitive Services Speech Service host (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::FromHost. |
SpeechServiceConnection_ProxyHostName |
The host name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::SetProxy. NOTE: This property id was added in version 1.1.0. |
SpeechServiceConnection_ProxyPort |
The port of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::SetProxy. NOTE: This property id was added in version 1.1.0. |
SpeechServiceConnection_ProxyUserName |
The user name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::SetProxy. NOTE: This property id was added in version 1.1.0. |
SpeechServiceConnection_ProxyPassword |
The password of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::SetProxy. NOTE: This property id was added in version 1.1.0. |
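A sketch covering the four proxy properties above through the single recommended call, SpeechConfig::SetProxy (all values are placeholders):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void ConnectThroughProxy()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // One call populates host name, port, user name, and password; an overload
    // without credentials exists for unauthenticated proxies.
    config->SetProxy("proxy.example.com", 8080, "proxyUser", "proxyPassword");
}
```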
SpeechServiceConnection_Url |
The URL string built from the speech configuration. This property is intended to be read-only. The SDK uses it internally. NOTE: Added in version 1.5.0. |
SpeechServiceConnection_ProxyHostBypass |
Specifies the list of hosts for which proxies should not be used. This setting overrides all other configurations. Hostnames are separated by commas and are matched in a case-insensitive manner. Wildcards are not supported. |
SpeechServiceConnection_TranslationToLanguages |
The list of comma separated languages used as target translation languages. Under normal circumstances, you shouldn't have to use this property directly. Instead use SpeechTranslationConfig::AddTargetLanguage and SpeechTranslationConfig::GetTargetLanguages. |
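For example, a sketch using the recommended SpeechTranslationConfig calls, which maintain this comma-separated list internally (key and region are placeholders):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void ConfigureTranslationTargets()
{
    auto config = Translation::SpeechTranslationConfig::FromSubscription(
        "YourSubscriptionKey", "YourServiceRegion");
    config->SetSpeechRecognitionLanguage("en-US");

    // Each call appends a language to SpeechServiceConnection_TranslationToLanguages.
    config->AddTargetLanguage("de");
    config->AddTargetLanguage("fr");
}
```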
SpeechServiceConnection_TranslationVoice |
The name of the Cognitive Services Text to Speech Service voice. Under normal circumstances, you shouldn't have to use this property directly. Instead use SpeechTranslationConfig::SetVoiceName. NOTE: Valid voice names are listed in the text to speech voice documentation. |
SpeechServiceConnection_TranslationFeatures |
Translation features. For internal use. |
SpeechServiceConnection_IntentRegion |
The Language Understanding Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead use LanguageUnderstandingModel. |
SpeechServiceConnection_RecoMode |
The Cognitive Services Speech Service recognition mode. Can be "INTERACTIVE", "CONVERSATION", or "DICTATION". This property is intended to be read-only. The SDK uses it internally. |
SpeechServiceConnection_RecoLanguage |
The spoken language to be recognized (in BCP-47 format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig::SetSpeechRecognitionLanguage. |
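A one-line sketch of the recommended setter (placeholder credentials):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void SetRecognitionLanguage()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Stores a BCP-47 tag in SpeechServiceConnection_RecoLanguage.
    config->SetSpeechRecognitionLanguage("de-DE");
}
```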
Speech_SessionId |
The session id. This id is a universally unique identifier (aka UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it is bound. Under normal circumstances, you shouldn't have to use this property directly. Instead use SessionEventArgs::SessionId. |
SpeechServiceConnection_UserDefinedQueryParameters |
The query parameters provided by users. They are passed to the service as URL query parameters. Added in version 1.5.0. |
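One way to attach such parameters is SpeechConfig::SetServiceProperty with the UriQueryParameter channel; a sketch follows, where the parameter name "mparam" is purely illustrative:

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void AddQueryParameter()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // "mparam" is a hypothetical parameter; it is appended to the connection
    // URL as ...&mparam=value.
    config->SetServiceProperty("mparam", "value", ServicePropertyChannel::UriQueryParameter);
}
```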
SpeechServiceConnection_RecoBackend |
The string to specify the backend to be used for speech recognition; allowed options are online and offline. Under normal circumstances, you shouldn't use this property directly. Currently the offline option is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0. |
SpeechServiceConnection_RecoModelName |
The name of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0. |
SpeechServiceConnection_RecoModelKey |
This property is deprecated. |
SpeechServiceConnection_RecoModelIniFile |
The path to the ini file of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0. |
SpeechServiceConnection_SynthLanguage |
The spoken language to be synthesized (e.g. en-US). Added in version 1.4.0. |
SpeechServiceConnection_SynthVoice |
The name of the TTS voice to be used for speech synthesis. Added in version 1.4.0. |
SpeechServiceConnection_SynthOutputFormat |
The string to specify the TTS output audio format. Added in version 1.4.0. |
SpeechServiceConnection_SynthEnableCompressedAudioTransmission |
Indicates whether to use a compressed audio format for speech synthesis audio transmission. This property takes effect only when SpeechServiceConnection_SynthOutputFormat is set to a PCM format. If this property is not set and GStreamer is available, the SDK uses a compressed format for synthesized audio transmission and decodes it. You can set this property to "false" to use the raw PCM format for transmission on the wire. Added in version 1.16.0. |
SpeechServiceConnection_SynthBackend |
The string to specify TTS backend; valid options are online and offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EmbeddedSpeechConfig::FromPath or EmbeddedSpeechConfig::FromPaths to set the synthesis backend to offline. Added in version 1.19.0. |
SpeechServiceConnection_SynthOfflineDataPath |
The data file path(s) for offline synthesis engine; only valid when synthesis backend is offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EmbeddedSpeechConfig::FromPath or EmbeddedSpeechConfig::FromPaths. Added in version 1.19.0. |
SpeechServiceConnection_SynthOfflineVoice |
The name of the offline TTS voice to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use EmbeddedSpeechConfig::SetSpeechSynthesisVoice and EmbeddedSpeechConfig::GetSpeechSynthesisVoiceName. Added in version 1.19.0. |
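A sketch of selecting an offline voice via EmbeddedSpeechConfig, assuming embedded speech support is installed; the model path and voice name are placeholders, and the key argument is deprecated (see SpeechServiceConnection_SynthModelKey below):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void UseOfflineVoice()
{
    // Placeholder path to locally installed embedded synthesis voices.
    auto config = EmbeddedSpeechConfig::FromPath("/path/to/embedded/voices");

    // Placeholder voice name; the key argument is deprecated and left empty.
    config->SetSpeechSynthesisVoice("YourEmbeddedVoiceName", "");

    auto synthesizer = SpeechSynthesizer::FromConfig(config);
}
```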
SpeechServiceConnection_SynthModelKey |
This property is deprecated. |
SpeechServiceConnection_VoicesListEndpoint |
The Cognitive Services Speech Service voices list API endpoint (URL). Under normal circumstances, you don't need to specify this property; the SDK constructs it based on the region/host/endpoint of the SpeechConfig. Added in version 1.16.0. |
SpeechServiceConnection_InitialSilenceTimeoutMs |
The initial silence timeout value (in milliseconds) used by the service. Added in version 1.5.0. |
SpeechServiceConnection_EndSilenceTimeoutMs |
The end silence timeout value (in milliseconds) used by the service. Added in version 1.5.0. |
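These two timeouts are typically set through the generic SetProperty overload; a sketch (the millisecond values are illustrative only):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void TuneSilenceTimeouts()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Values are strings of milliseconds; these particular numbers are illustrative.
    config->SetProperty(PropertyId::SpeechServiceConnection_InitialSilenceTimeoutMs, "10000");
    config->SetProperty(PropertyId::SpeechServiceConnection_EndSilenceTimeoutMs, "1000");
}
```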
SpeechServiceConnection_EnableAudioLogging |
A boolean value specifying whether audio logging is enabled in the service or not. Audio and content logs are stored either in Microsoft-owned storage, or in your own storage account linked to your Cognitive Services subscription (Bring Your Own Storage (BYOS) enabled Speech resource). Added in version 1.5.0. |
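A sketch of turning this on, assuming the SpeechConfig::EnableAudioLogging helper available in recent SDK versions (placeholder credentials); note the service-side storage implications described above:

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void TurnOnAudioLogging()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Equivalent to setting SpeechServiceConnection_EnableAudioLogging to "true".
    config->EnableAudioLogging();
}
```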
SpeechServiceConnection_LanguageIdMode |
The speech service connection language identifier mode. Can be "AtStart" (the default) or "Continuous". See the language identification documentation. Added in version 1.25.0. |
SpeechServiceConnection_TranslationCategoryId |
The speech service connection translation categoryId. |
SpeechServiceConnection_AutoDetectSourceLanguages |
The auto-detect source languages. Added in version 1.8.0. |
SpeechServiceConnection_AutoDetectSourceLanguageResult |
The auto-detect source language result. Added in version 1.8.0. |
SpeechServiceResponse_RequestDetailedResultTrueFalse |
The requested Cognitive Services Speech Service response output format (simple or detailed). Under normal circumstances, you shouldn't have to use this property directly. Instead use SpeechConfig::SetOutputFormat. |
SpeechServiceResponse_RequestProfanityFilterTrueFalse |
The requested Cognitive Services Speech Service response output profanity level. Currently unused. |
SpeechServiceResponse_ProfanityOption |
The requested Cognitive Services Speech Service response output profanity setting. Allowed values are "masked", "removed", and "raw". Added in version 1.5.0. |
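A sketch using the SetProfanity helper, which writes this property (placeholder credentials):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void MaskProfanity()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Equivalent to setting SpeechServiceResponse_ProfanityOption to "masked".
    config->SetProfanity(ProfanityOption::Masked);
}
```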
SpeechServiceResponse_PostProcessingOption |
A string value specifying which post-processing option the service should use. The only allowed value is "TrueText". Added in version 1.5.0. |
SpeechServiceResponse_RequestWordLevelTimestamps |
A boolean value specifying whether to include word-level timestamps in the response result. Added in version 1.5.0. |
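A sketch requesting word-level timestamps through the generic property setter, together with the detailed output format that carries the word-level detail (placeholder credentials):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void RequestWordTimestamps()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Detailed output format carries the word-level detail in the JSON result.
    config->SetOutputFormat(OutputFormat::Detailed);
    config->SetProperty(PropertyId::SpeechServiceResponse_RequestWordLevelTimestamps, "true");
}
```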
SpeechServiceResponse_StablePartialResultThreshold |
The number of times a word must appear in partial results before it is returned. Added in version 1.5.0. |
SpeechServiceResponse_OutputFormatOption |
A string value specifying the output format option in the response result. Internal use only. Added in version 1.5.0. |
SpeechServiceResponse_RequestSnr |
A boolean value specifying whether to include SNR (signal to noise ratio) in the response result. Added in version 1.18.0. |
SpeechServiceResponse_TranslationRequestStablePartialResult |
A boolean value that requests stabilized translation partial results by omitting words at the end. Added in version 1.5.0. |
SpeechServiceResponse_RequestWordBoundary |
A boolean value specifying whether to request WordBoundary events. Added in version 1.21.0. |
SpeechServiceResponse_RequestPunctuationBoundary |
A boolean value specifying whether to request punctuation boundary in WordBoundary Events. Default is true. Added in version 1.21.0. |
SpeechServiceResponse_RequestSentenceBoundary |
A boolean value specifying whether to request sentence boundary in WordBoundary Events. Default is false. Added in version 1.21.0. |
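A sketch tying the three WordBoundary-related properties above together: subscribe to the synthesizer's WordBoundary event and opt in to sentence boundaries (voice name and credentials are placeholders):

```cpp
#include <speechapi_cxx.h>
#include <iostream>

using namespace Microsoft::CognitiveServices::Speech;

void SubscribeToWordBoundaries()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
    config->SetSpeechSynthesisVoiceName("en-US-JennyNeural"); // placeholder voice

    // Sentence boundaries are off by default; punctuation boundaries default to on.
    config->SetProperty(PropertyId::SpeechServiceResponse_RequestSentenceBoundary, "true");

    auto synthesizer = SpeechSynthesizer::FromConfig(config);
    synthesizer->WordBoundary += [](const SpeechSynthesisWordBoundaryEventArgs& e)
    {
        std::cout << "Boundary at audio offset " << e.AudioOffset
                  << ", text: " << e.Text << std::endl;
    };

    synthesizer->SpeakTextAsync("Hello, world.").get();
}
```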
SpeechServiceResponse_SynthesisEventsSyncToAudio |
A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g. word boundary, viseme) with the audio playback. This takes effect only when the audio is played through the SDK. Default is true. If set to false, the SDK fires the events as they arrive from the service, which may be out of sync with the audio playback. Added in version 1.31.0. |
SpeechServiceResponse_JsonResult |
The Cognitive Services Speech Service response output (in JSON format). This property is available on recognition result objects only. |
SpeechServiceResponse_JsonErrorDetails |
The Cognitive Services Speech Service error details (in JSON format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use CancellationDetails::ErrorDetails. |
SpeechServiceResponse_RecognitionLatencyMs |
The recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service. Added in version 1.3.0. |
SpeechServiceResponse_RecognitionBackend |
The recognition backend. Read-only, available on speech recognition results. This indicates whether cloud (online) or embedded (offline) recognition was used to produce the result. |
SpeechServiceResponse_SynthesisFirstByteLatencyMs |
The speech synthesis first byte latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and the moment the first byte of audio is available. Added in version 1.17.0. |
SpeechServiceResponse_SynthesisFinishLatencyMs |
The speech synthesis all bytes latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and the moment the whole audio is synthesized. Added in version 1.17.0. |
SpeechServiceResponse_SynthesisUnderrunTimeMs |
The underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from when the playback buffer (see PropertyId::AudioConfig_PlaybackBufferLengthInMs) is first filled until synthesis completes. Added in version 1.17.0. |
SpeechServiceResponse_SynthesisConnectionLatencyMs |
The speech synthesis connection latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and the moment the HTTP/WebSocket connection is established. Added in version 1.26.0. |
SpeechServiceResponse_SynthesisNetworkLatencyMs |
The speech synthesis network latency in milliseconds. Read-only, available on final speech synthesis results. This measures the network round trip time. Added in version 1.26.0. |
SpeechServiceResponse_SynthesisServiceLatencyMs |
The speech synthesis service latency in milliseconds. Read-only, available on final speech synthesis results. This measures the service processing time to synthesize the first byte of audio. Added in version 1.26.0. |
SpeechServiceResponse_SynthesisBackend |
Indicates which backend the synthesis was completed by. Read-only, available on speech synthesis results, except for the result in the SynthesisStarted event. Added in version 1.17.0. |
SpeechServiceResponse_DiarizeIntermediateResults |
Determines whether intermediate results contain speaker identification. |
CancellationDetails_Reason |
The cancellation reason. Currently unused. |
CancellationDetails_ReasonText |
The cancellation text. Currently unused. |
CancellationDetails_ReasonDetailedText |
The cancellation detailed text. Currently unused. |
LanguageUnderstandingServiceResponse_JsonResult |
The Language Understanding Service response output (in JSON format). Available via IntentRecognitionResult.Properties. |
AudioConfig_DeviceNameForCapture |
The device name for audio capture. Under normal circumstances, you shouldn't have to use this property directly. Instead, use AudioConfig::FromMicrophoneInput. NOTE: This property id was added in version 1.3.0. |
AudioConfig_NumberOfChannelsForCapture |
The number of channels for audio capture. Internal use only. NOTE: This property id was added in version 1.3.0. |
AudioConfig_SampleRateForCapture |
The sample rate (in Hz) for audio capture. Internal use only. NOTE: This property id was added in version 1.3.0. |
AudioConfig_BitsPerSampleForCapture |
The number of bits of each sample for audio capture. Internal use only. NOTE: This property id was added in version 1.3.0. |
AudioConfig_AudioSource |
The audio source. Allowed values are "Microphones", "File", and "Stream". Added in version 1.3.0. |
AudioConfig_DeviceNameForRender |
The device name for audio render. Under normal circumstances, you shouldn't have to use this property directly. Instead, use AudioConfig::FromSpeakerOutput. Added in version 1.14.0. |
AudioConfig_PlaybackBufferLengthInMs |
Playback buffer length in milliseconds, default is 50 milliseconds. |
AudioConfig_AudioProcessingOptions |
Audio processing options in JSON format. |
Speech_LogFilename |
The file name to write logs. Added in version 1.4.0. |
Speech_SegmentationSilenceTimeoutMs |
A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result. Configuring this timeout may be helpful in situations where spoken input is significantly faster or slower than usual and default segmentation behavior consistently yields results that are too long or too short. Segmentation timeout values that are inappropriately high or low can negatively affect speech-to-text accuracy; this property should be carefully configured and the resulting behavior should be thoroughly validated as intended. |
Speech_SegmentationMaximumTimeMs |
The maximum length of a spoken phrase when using the "Time" segmentation strategy. As the length of a spoken phrase approaches this value, Speech_SegmentationSilenceTimeoutMs is progressively reduced until either the phrase silence timeout is reached or the phrase reaches the maximum length. |
Speech_SegmentationStrategy |
The strategy used to determine when a spoken phrase has ended and a final Recognized result should be generated. Allowed values are "Default", "Time", and "Semantic". |
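A sketch configuring the three segmentation properties above via SetProperty (the values are illustrative, and the accuracy caveats noted under Speech_SegmentationSilenceTimeoutMs apply):

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

void TuneSegmentation()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Illustrative values: finalize a phrase after 800 ms of silence, and cap
    // phrase length at 30 seconds under the "Time" strategy.
    config->SetProperty(PropertyId::Speech_SegmentationStrategy, "Time");
    config->SetProperty(PropertyId::Speech_SegmentationSilenceTimeoutMs, "800");
    config->SetProperty(PropertyId::Speech_SegmentationMaximumTimeMs, "30000");
}
```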
Conversation_ApplicationId |
Identifier used to connect to the backend service. Added in version 1.5.0. |
Conversation_DialogType |
Type of dialog backend to connect to. Added in version 1.7.0. |
Conversation_Initial_Silence_Timeout |
Silence timeout for listening. Added in version 1.5.0. |
Conversation_From_Id |
From id to be used on speech recognition activities. Added in version 1.5.0. |
Conversation_Conversation_Id |
ConversationId for the session. Added in version 1.8.0. |
Conversation_Custom_Voice_Deployment_Ids |
Comma separated list of custom voice deployment ids. Added in version 1.8.0. |
Conversation_Speech_Activity_Template |
Speech activity template. Properties in the template are stamped on the activity generated by the service for speech. Added in version 1.10.0. |
Conversation_ParticipantId |
Your participant identifier in the current conversation. Added in version 1.13.0. |
Conversation_Request_Bot_Status_Messages |
A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events. Defaults to true. Added in version 1.15.0. |
Conversation_Connection_Id |
Additional identifying information, such as a Direct Line token, used to authenticate with the backend service. Added in version 1.16.0. |
DataBuffer_TimeStamp |
The time stamp associated with the data buffer written by the client when using Pull/Push audio input streams. The time stamp is a 64-bit value with a resolution of 90 kHz. It is the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp for details. Added in version 1.5.0. |
DataBuffer_UserId |
The user id associated with the data buffer written by the client when using Pull/Push audio input streams. Added in version 1.5.0. |
PronunciationAssessment_ReferenceText |
The reference text of the audio for pronunciation evaluation. For this and the following pronunciation assessment parameters, see the table Pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::Create or PronunciationAssessmentConfig::SetReferenceText. Added in version 1.14.0. |
PronunciationAssessment_GradingSystem |
The point system for pronunciation score calibration (FivePoint or HundredMark). Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::Create. Added in version 1.14.0. |
PronunciationAssessment_Granularity |
The pronunciation evaluation granularity (Phoneme, Word, or FullText). Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::Create. Added in version 1.14.0. |
PronunciationAssessment_EnableMiscue |
Defines whether to enable miscue calculation. With this enabled, the pronounced words are compared to the reference text and marked with omission/insertion based on the comparison. The default setting is False. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::Create. Added in version 1.14.0. |
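A sketch of the recommended PronunciationAssessmentConfig::Create call, which bundles the reference text, grading system, granularity, and miscue properties above (the reference text is a placeholder):

```cpp
#include <speechapi_cxx.h>
#include <memory>

using namespace Microsoft::CognitiveServices::Speech;

void AssessPronunciation(std::shared_ptr<SpeechRecognizer> recognizer)
{
    // Bundles PronunciationAssessment_ReferenceText, _GradingSystem,
    // _Granularity, and _EnableMiscue in one call.
    auto assessment = PronunciationAssessmentConfig::Create(
        "Good morning.",                                   // placeholder reference text
        PronunciationAssessmentGradingSystem::HundredMark,
        PronunciationAssessmentGranularity::Phoneme,
        true);                                             // enable miscue calculation

    assessment->ApplyTo(recognizer);
}
```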
PronunciationAssessment_PhonemeAlphabet |
The pronunciation evaluation phoneme alphabet. The valid values are "SAPI" (default) and "IPA". Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::SetPhonemeAlphabet. Added in version 1.20.0. |
PronunciationAssessment_NBestPhonemeCount |
The pronunciation evaluation nbest phoneme count. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::SetNBestPhonemeCount. Added in version 1.20.0. |
PronunciationAssessment_EnableProsodyAssessment |
Whether to enable prosody assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::EnableProsodyAssessment. Added in version 1.33.0. |
PronunciationAssessment_Json |
The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::Create. Added in version 1.14.0. |
PronunciationAssessment_Params |
Pronunciation assessment parameters. This property is intended to be read-only. The SDK uses it internally. Added in version 1.14.0. |
PronunciationAssessment_ContentTopic |
The content topic of the pronunciation assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig::EnableContentAssessmentWithTopic. Added in version 1.33.0. |
SpeakerRecognition_Api_Version |
Speaker Recognition backend API version. This property is added to allow testing and use of previous versions of Speaker Recognition APIs, where applicable. Added in version 1.18.0. |
SpeechTranslation_ModelName |
The name of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. |
SpeechTranslation_ModelKey |
This property is deprecated. |
KeywordRecognition_ModelName |
The name of a model to be used for keyword recognition. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. |
KeywordRecognition_ModelKey |
This property is deprecated. |
EmbeddedSpeech_EnablePerformanceMetrics |
Enable the collection of embedded speech performance metrics, which can be used to evaluate the capability of a device to use embedded speech. The collected data is included in results from specific scenarios like speech recognition. The default setting is "false". Note that metrics may not be available from all embedded speech scenarios. |
SpeechSynthesisRequest_Pitch |
The pitch of the synthesized speech. |
SpeechSynthesisRequest_Rate |
The rate of the synthesized speech. |
SpeechSynthesisRequest_Volume |
The volume of the synthesized speech. |