AzureOpenAIModeratorOptions interface

Options for the Azure OpenAI based moderator.

Extends

OpenAIModeratorOptions

Properties

blocklistNames

Text blocklist names. Only the following characters are supported: 0-9, A-Z, a-z, -, ., _, ~. You can attach multiple list names here.

breakByBlocklists

Deprecated. Use haltOnBlocklistHit instead.

categories

Azure OpenAI Content Safety Categories. Each category is provided with a severity level threshold from 0 to 6. If the severity level of a category is greater than or equal to the threshold, the category is flagged.

haltOnBlocklistHit

When set to true, further analysis of harmful content is skipped for requests that hit a blocklist. When set to false, all harmful content analysis is performed, whether or not a blocklist is hit. Default value is false.

Inherited Properties

apiKey

OpenAI API key.

apiVersion

Optional. Azure Content Safety API version.

endpoint

Optional. OpenAI endpoint.

model

Optional. OpenAI model to use.

moderate

Which parts of the conversation to moderate.

organization

Optional. OpenAI organization.
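
Taken together, a minimal options object might look like the following sketch. All values are placeholders, and the shape of ContentSafetyHarmCategory (a category name paired with a severity threshold) is assumed from the descriptions in this reference.

// Sketch only: all values are placeholders, not recommendations.
const options: AzureOpenAIModeratorOptions = {
    apiKey: process.env.AZURE_CONTENT_SAFETY_KEY!,
    endpoint: 'https://<your-resource>.cognitiveservices.azure.com', // placeholder endpoint
    apiVersion: '2023-10-01', // assumed version string
    moderate: 'both', // moderate both input and output
    blocklistNames: ['profanity-list'], // names limited to 0-9 A-Z a-z - . _ ~
    haltOnBlocklistHit: true, // skip further analysis when a blocklist is hit
    categories: [
        { category: 'Hate', severity: 2 } // assumed shape: flag Hate at severity >= 2
    ]
};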

Property Details

blocklistNames

Text blocklist names. Only the following characters are supported: 0-9, A-Z, a-z, -, ., _, ~. You can attach multiple list names here.

blocklistNames?: string[]

Property Value

string[]
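
As an illustration of the character rule above, a blocklist name can be checked with a regular expression limited to the allowed set. This helper is hypothetical and not part of the library.

// Hypothetical helper: true if the name uses only the allowed
// characters 0-9, A-Z, a-z, -, ., _ and ~.
function isValidBlocklistName(name: string): boolean {
    return /^[0-9A-Za-z\-._~]+$/.test(name);
}

isValidBlocklistName('profanity-list_v1'); // true
isValidBlocklistName('bad list!'); // false: space and '!' are not allowed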

breakByBlocklists

Warning

This API is now deprecated. Use haltOnBlocklistHit instead.

When set to true, further analysis of harmful content is skipped for requests that hit a blocklist. When set to false, all harmful content analysis is performed, whether or not a blocklist is hit. Default value is false.

breakByBlocklists?: boolean

Property Value

boolean

categories

Azure OpenAI Content Safety Categories. Each category is provided with a severity level threshold from 0 to 6. If the severity level of a category is greater than or equal to the threshold, the category is flagged.

categories?: ContentSafetyHarmCategory[]

Property Value

ContentSafetyHarmCategory[]
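
The flagging rule described above can be sketched as follows. The ContentSafetyHarmCategory shape is assumed from the description (a category name paired with a 0-6 severity threshold), and isFlagged is a hypothetical illustration, not a library function.

// Assumed shape: a category name paired with a severity threshold (0-6).
interface ContentSafetyHarmCategory {
    category: string;
    severity: number;
}

// Hypothetical illustration: a category is flagged when the analyzed
// severity is greater than or equal to the configured threshold.
function isFlagged(configured: ContentSafetyHarmCategory, analyzedSeverity: number): boolean {
    return analyzedSeverity >= configured.severity;
}

isFlagged({ category: 'Hate', severity: 4 }, 4); // true: 4 >= 4
isFlagged({ category: 'Hate', severity: 4 }, 2); // false: 2 < 4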

haltOnBlocklistHit

When set to true, further analysis of harmful content is skipped for requests that hit a blocklist. When set to false, all harmful content analysis is performed, whether or not a blocklist is hit. Default value is false.

haltOnBlocklistHit?: boolean

Property Value

boolean
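
For example, the two settings compare as in the sketch below; the blocklist name is a placeholder.

// haltOnBlocklistHit: true skips further harmful-content analysis
// when a blocklist is hit; false (the default) always analyzes.
const fastFail: Partial<AzureOpenAIModeratorOptions> = {
    blocklistNames: ['profanity-list'],
    haltOnBlocklistHit: true
};

const fullAnalysis: Partial<AzureOpenAIModeratorOptions> = {
    blocklistNames: ['profanity-list'],
    haltOnBlocklistHit: false
};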

Inherited Property Details

apiKey

OpenAI API key.

apiKey: string

Property Value

string

Inherited From OpenAIModeratorOptions.apiKey

apiVersion

Optional. Azure Content Safety API version.

apiVersion?: string

Property Value

string

Inherited From OpenAIModeratorOptions.apiVersion

endpoint

Optional. OpenAI endpoint.

endpoint?: string

Property Value

string

Inherited From OpenAIModeratorOptions.endpoint

model

Optional. OpenAI model to use.

model?: string

Property Value

string

Inherited From OpenAIModeratorOptions.model

moderate

Which parts of the conversation to moderate.

moderate: "input" | "output" | "both"

Property Value

"input" | "output" | "both"

Inherited From OpenAIModeratorOptions.moderate

organization

Optional. OpenAI organization.

organization?: string

Property Value

string

Inherited From OpenAIModeratorOptions.organization