FunctionCallMessage class

An assistant message containing a function to call.

Extends

PromptSectionBase

Remarks

The function call information is returned by the model, so we use an "assistant" message to represent it in conversation history.
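The shape described above can be sketched as follows. This is a minimal illustration, not the library's actual type definitions: the FunctionCall and Message interfaces are assumed from the signatures shown elsewhere on this page.

```typescript
// Assumed shapes, inferred from this page's signatures; the real
// library types may carry additional fields.
interface FunctionCall {
    name?: string;
    arguments?: string; // JSON-encoded argument string
}

interface Message<T> {
    role: string;
    content?: T;
    function_call?: FunctionCall;
}

// The model's function call is stored in history as an "assistant" message:
const msg: Message<string> = {
    role: 'assistant',
    function_call: { name: 'getWeather', arguments: '{"city":"Seattle"}' }
};

console.log(msg.role, msg.function_call?.name);
```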

Constructors

FunctionCallMessage(FunctionCall, number, string)

Creates a new 'FunctionCallMessage' instance.

Properties

function_call

Name and arguments of the function to call.

Inherited Properties

required

If true, the section is mandatory; otherwise it can be safely dropped.

separator
textPrefix
tokens

The requested token budget for this section.

  • Values between 0.0 and 1.0 represent a percentage of the total budget, and the section will be laid out proportionally to all other sections.
  • Values greater than 1.0 represent the max number of tokens the section should be allowed to consume.

Inherited Methods

getMessageText(Message<string>)

Returns the content of a message as a string.

renderAsText(TurnContext, Memory, PromptFunctions, Tokenizer, number)

Renders the prompt section as a string of text.

Constructor Details

FunctionCallMessage(FunctionCall, number, string)

Creates a new 'FunctionCallMessage' instance.

new FunctionCallMessage(function_call: FunctionCall, tokens?: number, assistantPrefix?: string)

Parameters

function_call
FunctionCall

Name and arguments of the function to call.

tokens

number

Optional. Sizing strategy for this section. Defaults to auto.

assistantPrefix

string

Optional. Prefix to use for assistant messages when rendering as text. Defaults to 'assistant: '.
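The constructor behavior above can be sketched with a stand-in class. This is an illustration of the documented defaults only; the numeric value used for the "auto" token setting is assumed here to be -1, and the real class carries additional inherited behavior.

```typescript
interface FunctionCall {
    name?: string;
    arguments?: string;
}

// Stand-in for the real FunctionCallMessage, showing only the
// constructor defaults documented above ('auto' assumed to be -1).
class FunctionCallMessageSketch {
    public readonly function_call: FunctionCall;
    public readonly tokens: number;
    public readonly textPrefix: string;

    public constructor(
        function_call: FunctionCall,
        tokens: number = -1,
        assistantPrefix: string = 'assistant: '
    ) {
        this.function_call = function_call;
        this.tokens = tokens;
        this.textPrefix = assistantPrefix;
    }
}

// Only the required parameter is passed; the optional ones take their defaults.
const section = new FunctionCallMessageSketch({
    name: 'lookupOrder',
    arguments: '{"id":42}'
});

console.log(section.tokens, section.textPrefix);
```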

Property Details

function_call

Name and arguments of the function to call.

function_call: FunctionCall

Property Value

FunctionCall

Inherited Property Details

required

If true, the section is mandatory; otherwise it can be safely dropped.

required: boolean

Property Value

boolean

Inherited From PromptSectionBase.required

separator

separator: string

Property Value

string

Inherited From PromptSectionBase.separator

textPrefix

textPrefix: string

Property Value

string

Inherited From PromptSectionBase.textPrefix

tokens

The requested token budget for this section.

  • Values between 0.0 and 1.0 represent a percentage of the total budget, and the section will be laid out proportionally to all other sections.
  • Values greater than 1.0 represent the max number of tokens the section should be allowed to consume.

tokens: number

Property Value

number

Inherited From PromptSectionBase.tokens
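The two sizing rules above can be sketched as a small helper. The function name is hypothetical; it only illustrates how a proportional value and an absolute cap would translate into a concrete token budget.

```typescript
// Hypothetical helper illustrating the documented sizing rules:
// values in [0.0, 1.0] are a fraction of the total budget;
// larger values are an absolute token cap.
function sectionBudget(tokens: number, totalBudget: number): number {
    if (tokens >= 0.0 && tokens <= 1.0) {
        return Math.floor(totalBudget * tokens); // proportional share
    }
    return Math.min(tokens, totalBudget); // absolute cap
}

console.log(sectionBudget(0.5, 2000)); // → 1000 (half the total budget)
console.log(sectionBudget(500, 2000)); // → 500 (hard cap of 500 tokens)
```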

Inherited Method Details

getMessageText(Message<string>)

Returns the content of a message as a string.

static function getMessageText(message: Message<string>): string

Parameters

message

Message<string>

Message to get the text of.

Returns

string

The message content as a string.

Inherited From PromptSectionBase.getMessageText
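One plausible implementation of the behavior described above is sketched below: fall back to the message content, but serialize the function call when one is present (so that a FunctionCallMessage renders as text). This is an assumption about the method's internals, not the library's actual source.

```typescript
interface FunctionCall {
    name?: string;
    arguments?: string;
}

interface Message<T> {
    role: string;
    content?: T;
    function_call?: FunctionCall;
}

// Sketch of getMessageText: serialize a function call if present,
// otherwise return the message content (or an empty string).
function getMessageText(message: Message<string>): string {
    if (message.function_call) {
        return JSON.stringify(message.function_call);
    }
    return message.content ?? '';
}

const text = getMessageText({
    role: 'assistant',
    function_call: { name: 'getWeather', arguments: '{}' }
});
console.log(text);
```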

renderAsText(TurnContext, Memory, PromptFunctions, Tokenizer, number)

Renders the prompt section as a string of text.

function renderAsText(context: TurnContext, memory: Memory, functions: PromptFunctions, tokenizer: Tokenizer, maxTokens: number): Promise<RenderedPromptSection<string>>

Parameters

context

TurnContext

Context for the current turn of conversation.

memory
Memory

Interface for accessing state variables.

functions
PromptFunctions

Functions for rendering prompts.

tokenizer
Tokenizer

Tokenizer to use for encoding/decoding text.

maxTokens

number

Maximum number of tokens allowed for the rendered prompt.

Returns

Promise<RenderedPromptSection<string>>

The rendered prompt section.

Inherited From PromptSectionBase.renderAsText
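To show how a caller might consume the result, the sketch below assumes a RenderedPromptSection shape with output, length, and tooLong fields, and fakes tokenization by counting whitespace-separated words. Both the helper name and the stand-in tokenizer are illustrative only.

```typescript
// Assumed result shape for a rendered section.
interface RenderedPromptSection<T> {
    output: T;        // the rendered text
    length: number;   // tokens consumed
    tooLong: boolean; // true when length exceeds maxTokens
}

// Illustrative stand-in for renderAsText: "tokenizes" by splitting
// on whitespace instead of using a real tokenizer.
function renderAsTextSketch(text: string, maxTokens: number): RenderedPromptSection<string> {
    const length = text.split(/\s+/).filter(Boolean).length;
    return { output: text, length, tooLong: length > maxTokens };
}

const rendered = renderAsTextSketch('assistant: {"name":"getWeather"}', 50);
console.log(rendered.length, rendered.tooLong);
```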