OpenAIPromptExecutionSettings Class

Definition

Execution settings for an OpenAI completion request.

C#
[System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)]
public class OpenAIPromptExecutionSettings : Microsoft.SemanticKernel.PromptExecutionSettings

F#
[<System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)>]
type OpenAIPromptExecutionSettings = class
    inherit PromptExecutionSettings

VB
Public Class OpenAIPromptExecutionSettings
Inherits PromptExecutionSettings
Inheritance
Object → PromptExecutionSettings → OpenAIPromptExecutionSettings

Derived
AzureOpenAIPromptExecutionSettings

Attributes
JsonNumberHandlingAttribute

Constructors

OpenAIPromptExecutionSettings()
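
A typical pattern is to create the settings with the parameterless constructor and an object initializer, then pass them alongside a prompt. A minimal sketch (the property values chosen here are illustrative, not defaults from this reference):

```csharp
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Configure common sampling options via the parameterless
// constructor and object-initializer syntax.
var settings = new OpenAIPromptExecutionSettings
{
    Temperature = 0.2,               // low randomness
    TopP = 0.9,                      // nucleus sampling cutoff
    MaxTokens = 256,                 // cap the completion length
    StopSequences = new[] { "\n\n" } // stop at the first blank line
};
```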

Properties

ChatSystemPrompt

The system prompt to use when generating text using a chat model. Defaults to "Assistant is a large language model."

ExtensionData

Extra properties that may be included in the serialized execution settings.

(Inherited from PromptExecutionSettings)
FrequencyPenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

FunctionChoiceBehavior

Gets or sets the behavior defining how functions are chosen by the LLM and how they are invoked by AI connectors.

(Inherited from PromptExecutionSettings)
IsFrozen

Gets a value that indicates whether the PromptExecutionSettings are currently modifiable.

(Inherited from PromptExecutionSettings)
Logprobs

Whether to return log probabilities of the output tokens. If true, the log probabilities of each output token are returned in the content of the message.

MaxTokens

The maximum number of tokens to generate in the completion.

ModelId

Model identifier. This identifies the AI model these settings are configured for, e.g. gpt-4 or gpt-3.5-turbo.

(Inherited from PromptExecutionSettings)
PresencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

ResponseFormat

Gets or sets the response format to use for the completion.

Seed

If specified, the system will make a best effort to sample deterministically such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.

ServiceId

Service identifier. This identifies the service these settings are configured for, e.g. azure_openai_eastus, openai, ollama, huggingface, etc.

(Inherited from PromptExecutionSettings)
StopSequences

Sequences where the completion will stop generating further tokens.

Temperature

Temperature controls the randomness of the completion. The higher the temperature, the more random the completion. Default is 1.0.

TokenSelectionBiases

Modify the likelihood of specified tokens appearing in the completion.
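
For example, a bias can suppress or promote individual tokens. A sketch (the token ID below is a placeholder; real IDs depend on the model's tokenizer):

```csharp
using System.Collections.Generic;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Bias values range from -100 (effectively ban the token)
// to 100 (strongly prefer it). The key 50256 is purely a
// placeholder token ID for illustration.
var settings = new OpenAIPromptExecutionSettings
{
    TokenSelectionBiases = new Dictionary<int, int>
    {
        [50256] = -100 // effectively forbid this token
    }
};
```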

ToolCallBehavior

Gets or sets the behavior for how tool calls are handled.

TopLogprobs

An integer specifying the number of most likely tokens to return at each token position, each with an associated log probability.

TopP

TopP controls the diversity of the completion. The higher the TopP, the more diverse the completion. Default is 1.0.

User

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.

Methods

Clone()

Creates a new PromptExecutionSettings object that is a copy of the current instance.

Clone<T>()

Clone the settings object.

Freeze()

Makes the current PromptExecutionSettings unmodifiable and sets its IsFrozen property to true.
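
Once frozen, the instance cannot be mutated; a fresh, modifiable copy can be obtained via Clone(). A sketch of the lifecycle:

```csharp
using Microsoft.SemanticKernel.Connectors.OpenAI;

var settings = new OpenAIPromptExecutionSettings { Temperature = 0.7 };
settings.Freeze();                   // IsFrozen is now true

// Any further mutation throws InvalidOperationException:
// settings.Temperature = 0.1;       // would throw

// Clone() returns an unfrozen copy that can be modified again.
var copy = (OpenAIPromptExecutionSettings)settings.Clone();
copy.Temperature = 0.1;              // fine
```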

FromExecutionSettings(PromptExecutionSettings, Nullable<Int32>)

Create a new settings object with the values from another settings object.
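
This is useful for converting generic settings (e.g. deserialized from a prompt template) into the OpenAI-specific type; the second argument supplies a default token limit. A sketch, assuming a generic settings object built elsewhere:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// A base settings object, e.g. read from configuration.
PromptExecutionSettings generic = new() { ModelId = "gpt-4" };

// Convert to the OpenAI-specific type, defaulting MaxTokens to 256
// when the source settings do not specify one.
OpenAIPromptExecutionSettings openAi =
    OpenAIPromptExecutionSettings.FromExecutionSettings(generic, 256);
```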

ThrowIfFrozen()

Throws an InvalidOperationException if the PromptExecutionSettings are frozen.

(Inherited from PromptExecutionSettings)
