
CustomAnalyzer Constructors

Definition

Overloads

CustomAnalyzer()

Initializes a new instance of the CustomAnalyzer class.

CustomAnalyzer(String, TokenizerName, IList<TokenFilterName>, IList<CharFilterName>)

Initializes a new instance of the CustomAnalyzer class.

CustomAnalyzer()

Source: CustomAnalyzer.cs

Initializes a new instance of the CustomAnalyzer class.

public CustomAnalyzer ();
Public Sub New ()
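
For illustration only, a minimal C# sketch of how this parameterless constructor is commonly paired with an object initializer; the Name, Tokenizer, and TokenFilters properties and the static members TokenizerName.Standard and TokenFilterName.Lowercase are assumed from the Microsoft.Azure.Search.Models types, and the analyzer name is hypothetical.

using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

// Sketch (assumed property and member names): construct an empty analyzer,
// then fill in its definition through property setters.
var analyzer = new CustomAnalyzer
{
    Name = "my_custom_analyzer",            // hypothetical analyzer name
    Tokenizer = TokenizerName.Standard,     // splits continuous text into tokens
    TokenFilters = new List<TokenFilterName>
    {
        TokenFilterName.Lowercase           // normalizes each token to lowercase
    }
};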

Applies to

CustomAnalyzer(String, TokenizerName, IList<TokenFilterName>, IList<CharFilterName>)

Source: CustomAnalyzer.cs

Initializes a new instance of the CustomAnalyzer class.

public CustomAnalyzer (string name, Microsoft.Azure.Search.Models.TokenizerName tokenizer, System.Collections.Generic.IList<Microsoft.Azure.Search.Models.TokenFilterName> tokenFilters = default, System.Collections.Generic.IList<Microsoft.Azure.Search.Models.CharFilterName> charFilters = default);
new Microsoft.Azure.Search.Models.CustomAnalyzer : string * Microsoft.Azure.Search.Models.TokenizerName * System.Collections.Generic.IList<Microsoft.Azure.Search.Models.TokenFilterName> * System.Collections.Generic.IList<Microsoft.Azure.Search.Models.CharFilterName> -> Microsoft.Azure.Search.Models.CustomAnalyzer
Public Sub New (name As String, tokenizer As TokenizerName, Optional tokenFilters As IList(Of TokenFilterName) = Nothing, Optional charFilters As IList(Of CharFilterName) = Nothing)

Parameters

name
String

The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

tokenizer
TokenizerName

The name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words. Possible values include: 'classic', 'edgeNGram', 'keyword_v2', 'letter', 'lowercase', 'microsoft_language_tokenizer', 'microsoft_language_stemming_tokenizer', 'nGram', 'path_hierarchy_v2', 'pattern', 'standard_v2', 'uax_url_email', 'whitespace'

tokenFilters
IList<TokenFilterName>

A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.

charFilters
IList<CharFilterName>

A list of character filters used to prepare input text before it is processed by the tokenizer. For instance, they can replace certain characters or symbols. The filters are run in the order in which they are listed.
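
As a hedged illustration of this overload, the sketch below passes a tokenizer plus token and character filters; the static members TokenizerName.Whitespace, TokenFilterName.Lowercase, TokenFilterName.AsciiFolding, and CharFilterName.HtmlStrip are assumed to exist on the corresponding model types, and the analyzer name is hypothetical.

using System.Collections.Generic;
using Microsoft.Azure.Search.Models;

// Sketch (assumed static member names): the tokenizer splits on whitespace,
// the token filters then run in the listed order (lowercase, then ASCII-fold),
// and the char filter strips HTML markup before tokenization.
var analyzer = new CustomAnalyzer(
    name: "my_custom_analyzer",
    tokenizer: TokenizerName.Whitespace,
    tokenFilters: new List<TokenFilterName>
    {
        TokenFilterName.Lowercase,
        TokenFilterName.AsciiFolding
    },
    charFilters: new List<CharFilterName>
    {
        CharFilterName.HtmlStrip
    });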

Applies to