AzureContentSafetyModerator class
An Azure OpenAI moderator that uses the Azure Content Safety API to review prompts and plans for safety.
- Extends: OpenAIModerator&lt;TState&gt;
Remarks
The moderator can be configured to review the input from the user, the output from the model, or both.
Constructors
AzureContentSafetyModerator(AzureOpenAIModeratorOptions) | Creates a new instance of the OpenAI-based moderator. |
Properties
options | Configuration options for the moderator. |
Methods
reviewInput | Reviews an incoming prompt for safety violations. |
reviewOutput | Reviews the SAY commands generated by the planner for safety violations. |
Constructor Details
AzureContentSafetyModerator<TState>(AzureOpenAIModeratorOptions)
Creates a new instance of the OpenAI-based moderator.
new AzureContentSafetyModerator(options: AzureOpenAIModeratorOptions)
Parameters
- options: AzureOpenAIModeratorOptions
  Configuration options for the moderator.
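As a rough sketch, construction might look like the following. The option field names below (`apiKey`, `endpoint`, `moderate`) are assumptions based on typical Azure client configuration and the Remarks above, not field names confirmed by this page:

```typescript
// Hypothetical shape of AzureOpenAIModeratorOptions -- the field names here
// are illustrative assumptions, not the library's actual definition.
interface ModeratorOptionsSketch {
  apiKey: string;                        // Azure resource key (assumed)
  endpoint: string;                      // Azure resource endpoint (assumed)
  moderate: "input" | "output" | "both"; // what to review (see Remarks)
}

const options: ModeratorOptionsSketch = {
  apiKey: "<your-content-safety-key>",
  endpoint: "https://my-resource.cognitiveservices.azure.com/",
  moderate: "both", // review both user input and model output
};

// With the real library this would then be passed to the constructor:
// const moderator = new AzureContentSafetyModerator(options);
```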
Property Details
options
Configuration options for the moderator.
Method Details
reviewInput(TurnContext, TState)
Reviews an incoming prompt for safety violations.
function reviewInput(context: TurnContext, state: TState): Promise<undefined | Plan>
Parameters
- context: TurnContext
  Context for the current turn of conversation.
- state: TState
  Application state for the current turn of conversation.
Returns
Promise<undefined | Plan>
undefined to approve the prompt, or a new plan to redirect to if the prompt is not approved.
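The undefined-versus-Plan contract can be sketched with simplified stand-in types. The real TurnContext and TState parameters are omitted, and the flagging logic is a hypothetical stand-in for the actual Content Safety call:

```typescript
// Simplified stand-ins for the library's Plan/PredictedCommand types.
type PredictedCommand = { type: "DO" | "SAY"; action?: string; response?: string };
type Plan = { type: "plan"; commands: PredictedCommand[] };

// Stand-in for reviewInput: resolving to undefined approves the prompt,
// while resolving to a Plan redirects the turn to that plan instead.
async function reviewInputSketch(promptText: string): Promise<undefined | Plan> {
  const flagged = promptText.includes("unsafe"); // hypothetical safety check
  if (flagged) {
    // Redirect to an action that handles the flagged input.
    return { type: "plan", commands: [{ type: "DO", action: "flaggedInput" }] };
  }
  return undefined; // approved: the original prompt proceeds
}
```

The caller treats `undefined` as "continue as normal" and any returned plan as a replacement for the turn.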
reviewOutput(TurnContext, TState, Plan)
Reviews the SAY commands generated by the planner for safety violations.
function reviewOutput(context: TurnContext, state: TState, plan: Plan): Promise<Plan>
Parameters
- context: TurnContext
  Context for the current turn of conversation.
- state: TState
  Application state for the current turn of conversation.
- plan: Plan
  Plan generated by the planner.
Returns
Promise<Plan>
The plan to execute: either the current plan passed in for review, or a new plan.
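Unlike reviewInput, this method always resolves to a Plan. That contract can be sketched with simplified stand-in types, where a hypothetical safety check on each SAY command decides whether the plan under review is returned unchanged or replaced:

```typescript
// Simplified stand-ins for the library's Plan/PredictedCommand types.
type PredictedCommand = { type: "DO" | "SAY"; action?: string; response?: string };
type Plan = { type: "plan"; commands: PredictedCommand[] };

// Stand-in for reviewOutput: always resolves to a Plan -- either the plan
// under review or a replacement when a SAY command trips the safety check.
async function reviewOutputSketch(plan: Plan): Promise<Plan> {
  for (const cmd of plan.commands) {
    if (cmd.type === "SAY" && cmd.response?.includes("unsafe")) {
      // Hypothetical replacement plan for flagged output.
      return { type: "plan", commands: [{ type: "DO", action: "flaggedOutput" }] };
    }
  }
  return plan; // approved: execute the current plan as-is
}
```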