Azure.Search.Documents.Indexes.Models Namespace
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Classes
AIServicesAccountIdentity |
The multi-region account of an Azure AI service resource that's attached to a skillset. |
AIServicesAccountKey |
The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's subdomain. |
AIServicesVisionParameters |
Specifies the AI Services Vision parameters for vectorizing a query image or text. |
AIServicesVisionVectorizer |
Specifies a vectorizer that uses Azure AI Services Vision for vectorizing a query image or text. |
AnalyzedTokenInfo |
Information about a token returned by an analyzer. |
AnalyzeTextOptions |
Specifies some text and analysis components used to break that text into tokens. |
AsciiFoldingTokenFilter |
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene. |
AzureMachineLearningParameters |
Specifies the properties for connecting to an AML vectorizer. |
AzureMachineLearningSkill |
The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model. Once an AML model is trained and deployed, an AML skill integrates it into AI enrichment. |
AzureMachineLearningVectorizer |
Specifies an Azure Machine Learning endpoint deployed via the Azure AI Studio Model Catalog for generating the vector embedding of a query string. |
AzureOpenAIEmbeddingSkill |
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource. |
AzureOpenAITokenizerParameters |
Specifies parameters for tokenizing text with an Azure OpenAI tokenizer. |
AzureOpenAIVectorizer |
Specifies the Azure OpenAI resource used to vectorize a query string. |
AzureOpenAIVectorizerParameters |
Specifies the parameters for connecting to the Azure OpenAI resource. |
BinaryQuantizationCompression |
Contains configuration options specific to the binary quantization compression method used during indexing and querying. |
BM25Similarity |
Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter). |
CharFilter |
Base type for character filters. Note that CharFilter is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include MappingCharFilter and PatternReplaceCharFilter. |
CjkBigramTokenFilter |
Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene. |
ClassicSimilarity |
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries. |
ClassicTokenizer |
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. |
CognitiveServicesAccount |
Base type for describing any Azure AI service resource attached to a skillset. Note that CognitiveServicesAccount is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include AIServicesAccountIdentity, AIServicesAccountKey, CognitiveServicesAccountKey and DefaultCognitiveServicesAccount. |
CognitiveServicesAccountKey |
The multi-region account key of an Azure AI service resource that's attached to a skillset. |
CommonGramTokenFilter |
Constructs bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene. |
ComplexField |
A complex field or collection of complex fields that contain child fields. Child fields may be SimpleField or ComplexField. |
ConditionalSkill |
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
CorsOptions |
Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
CustomAnalyzer |
Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer. |
CustomEntity |
An object that contains information about the matches that were found, and related metadata. |
CustomEntityAlias |
A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
CustomEntityLookupSkill |
A skill that looks for text from a custom, user-defined list of words and phrases. |
CustomNormalizer |
Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of one or more filters that modify the stored token. |
DataChangeDetectionPolicy |
Base type for data change detection policies. Note that DataChangeDetectionPolicy is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include HighWaterMarkChangeDetectionPolicy and SqlIntegratedChangeTrackingPolicy. |
DataDeletionDetectionPolicy |
Base type for data deletion detection policies. Note that DataDeletionDetectionPolicy is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include NativeBlobSoftDeleteDeletionDetectionPolicy and SoftDeleteColumnDeletionDetectionPolicy. |
DefaultCognitiveServicesAccount |
An empty object that represents the default Azure AI service resource for a skillset. |
DictionaryDecompounderTokenFilter |
Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene. |
DistanceScoringFunction |
Defines a function that boosts scores based on distance from a geographic location. |
DistanceScoringParameters |
Provides parameter values to a distance scoring function. |
DocumentExtractionSkill |
A skill that extracts content from a file within the enrichment pipeline. |
DocumentIntelligenceLayoutSkill |
A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the enrichment pipeline. |
EdgeNGramTokenFilter |
Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. |
EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
ElisionTokenFilter |
Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene. |
EntityLinkingSkill |
Using the Text Analytics API, extracts linked entities from text. |
EntityRecognitionSkill |
This skill is deprecated. Use the V3.EntityRecognitionSkill instead. |
ExhaustiveKnnAlgorithmConfiguration |
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index. |
ExhaustiveKnnParameters |
Contains the parameters specific to exhaustive KNN algorithm. |
FieldMapping |
Defines a mapping between a field in a data source and a target field in an index. |
FieldMappingFunction |
Represents a function that transforms a value from a data source before indexing. |
FreshnessScoringFunction |
Defines a function that boosts scores based on the value of a date-time field. |
FreshnessScoringParameters |
Provides parameter values to a freshness scoring function. |
HighWaterMarkChangeDetectionPolicy |
Defines a data change detection policy that captures changes based on the value of a high water mark column. |
HnswAlgorithmConfiguration |
Contains configuration options specific to the HNSW approximate nearest neighbors algorithm used during indexing and querying. The HNSW algorithm offers a tunable trade-off between search speed and accuracy. |
HnswParameters |
Contains the parameters specific to the HNSW algorithm. |
ImageAnalysisSkill |
A skill that analyzes image files. It extracts a rich set of visual features based on the image content. |
IndexerChangeTrackingState |
Represents the change tracking state during an indexer's execution. |
IndexerExecutionResult |
Represents the result of an individual indexer execution. |
IndexerState |
Represents all of the state that defines and dictates the indexer's current execution. |
IndexingParameters |
Represents parameters for indexer execution. |
IndexingParametersConfiguration |
A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
IndexingSchedule |
Represents a schedule for indexer execution. |
InputFieldMappingEntry |
Input field mapping for a skill. |
KeepTokenFilter |
A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene. |
KeyPhraseExtractionSkill |
A skill that uses text analytics for key phrase extraction. |
KeywordMarkerTokenFilter |
Marks terms as keywords. This token filter is implemented using Apache Lucene. |
KeywordTokenizer |
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. |
KnowledgeStore |
Definition of additional projections of enriched data to Azure Blob storage, Azure Tables, or Azure Files. |
KnowledgeStoreFileProjectionSelector |
Projection definition for what data to store in Azure Files. |
KnowledgeStoreObjectProjectionSelector |
Projection definition for what data to store in Azure Blob. |
KnowledgeStoreProjection |
Container object for various projection selectors. |
KnowledgeStoreProjectionSelector |
Abstract class to share properties between concrete selectors. |
KnowledgeStoreStorageProjectionSelector |
Abstract class to share properties between concrete selectors. |
KnowledgeStoreTableProjectionSelector |
Description for what data to store in Azure Tables. |
LanguageDetectionSkill |
A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis. |
LengthTokenFilter |
Removes words that are too long or too short. This token filter is implemented using Apache Lucene. |
LexicalAnalyzer |
Base type for analyzers. Note that LexicalAnalyzer is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include CustomAnalyzer, PatternAnalyzer, LuceneStandardAnalyzer and StopAnalyzer. |
LexicalAnalyzerName.Values |
The values of all declared LexicalAnalyzerName properties as string constants. These can be used in SearchableFieldAttribute and anywhere else constants are required. |
LexicalNormalizer |
Base type for normalizers. Note that LexicalNormalizer is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include CustomNormalizer. |
LexicalNormalizerName.Values |
The values of all declared LexicalNormalizerName properties as string constants. These can be used in SimpleFieldAttribute, SearchableFieldAttribute and anywhere else constants are required. |
LexicalTokenizer |
Base type for tokenizers. Note that LexicalTokenizer is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include ClassicTokenizer, EdgeNGramTokenizer, KeywordTokenizer, MicrosoftLanguageStemmingTokenizer, MicrosoftLanguageTokenizer, NGramTokenizer, PathHierarchyTokenizer, PatternTokenizer, LuceneStandardTokenizer and UaxUrlEmailTokenizer. |
LimitTokenFilter |
Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene. |
LuceneStandardAnalyzer |
Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. |
LuceneStandardTokenizer |
Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. |
MagnitudeScoringFunction |
Defines a function that boosts scores based on the magnitude of a numeric field. |
MagnitudeScoringParameters |
Provides parameter values to a magnitude scoring function. |
MappingCharFilter |
A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene. |
MergeSkill |
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
MicrosoftLanguageStemmingTokenizer |
Divides text using language-specific rules and reduces words to their base forms. |
MicrosoftLanguageTokenizer |
Divides text using language-specific rules. |
NativeBlobSoftDeleteDeletionDetectionPolicy |
Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion detection. |
NGramTokenFilter |
Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene. |
NGramTokenizer |
Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene. |
OcrSkill |
A skill that extracts text from image files. |
OutputFieldMappingEntry |
Output field mapping for a skill. |
PathHierarchyTokenizer |
Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene. |
PatternAnalyzer |
Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. |
PatternCaptureTokenFilter |
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene. |
PatternReplaceCharFilter |
A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene. |
PatternReplaceTokenFilter |
A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene. |
PatternTokenizer |
Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. |
PhoneticTokenFilter |
Creates tokens for phonetic matches. This token filter is implemented using Apache Lucene. |
PiiDetectionSkill |
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
RescoringOptions |
Contains the options for rescoring. |
ScalarQuantizationCompression |
Contains configuration options specific to the scalar quantization compression method used during indexing and querying. |
ScalarQuantizationParameters |
Contains the parameters specific to Scalar Quantization. |
ScoringFunction |
Base type for functions that can modify document scores during ranking. Note that ScoringFunction is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include DistanceScoringFunction, FreshnessScoringFunction, MagnitudeScoringFunction and TagScoringFunction. |
ScoringProfile |
Defines parameters for a search index that influence scoring in search queries. |
SearchableField |
A String or "Collection(String)" field that can be searched. |
SearchAlias |
Represents an index alias, which describes a mapping from the alias name to an index. The alias name can be used in place of the index name for supported operations. |
SearchField |
Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
SearchFieldTemplate |
Base field type for helper classes to more easily create a SearchIndex. |
SearchIndex |
Represents a search index definition, which describes the fields and search behavior of an index. |
SearchIndexer |
Represents an indexer. |
SearchIndexerCache |
Represents the cache used by an indexer to enable incremental enrichment. |
SearchIndexerDataContainer |
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed. |
SearchIndexerDataIdentity |
Abstract base type for data identities. Note that SearchIndexerDataIdentity is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include SearchIndexerDataNoneIdentity and SearchIndexerDataUserAssignedIdentity. |
SearchIndexerDataNoneIdentity |
Clears the identity property of a datasource. |
SearchIndexerDataSourceConnection |
Represents a datasource definition, which can be used to configure an indexer. |
SearchIndexerDataUserAssignedIdentity |
Specifies the identity for a datasource to use. |
SearchIndexerError |
Represents an item- or document-level indexing error. |
SearchIndexerIndexProjection |
Definition of additional projections to secondary search indexes. |
SearchIndexerIndexProjectionSelector |
Description for what data to store in the designated search index. |
SearchIndexerIndexProjectionsParameters |
A dictionary of index projection-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
SearchIndexerKnowledgeStoreParameters |
A dictionary of knowledge store-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. |
SearchIndexerLimits |
Represents limits that apply to an indexer execution, such as the maximum run time and maximum document extraction size. |
SearchIndexerSkill |
Base type for skills. Note that SearchIndexerSkill is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include AzureMachineLearningSkill, WebApiSkill, AzureOpenAIEmbeddingSkill, CustomEntityLookupSkill, EntityRecognitionSkill, KeyPhraseExtractionSkill, LanguageDetectionSkill, MergeSkill, PiiDetectionSkill, SentimentSkill, SplitSkill, TextTranslationSkill, EntityLinkingSkill, ConditionalSkill, DocumentExtractionSkill, DocumentIntelligenceLayoutSkill, ShaperSkill, ImageAnalysisSkill, OcrSkill and VisionVectorizeSkill. |
SearchIndexerSkillset |
A list of skills. |
SearchIndexerStatus |
Represents the current status and execution history of an indexer. |
SearchIndexerWarning |
Represents an item-level warning. |
SearchIndexStatistics |
Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date. |
SearchResourceCounter |
Represents a resource's usage and quota. |
SearchResourceEncryptionKey |
A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest, such as indexes and synonym maps. |
SearchServiceCounters |
Represents service-level resource counters and quotas. |
SearchServiceLimits |
Represents various service level limits. |
SearchServiceStatistics |
Response from a get service statistics request. If successful, it includes service level counters and limits. |
SearchSuggester |
Defines how the Suggest API should apply to a group of fields in the index. |
SemanticConfiguration |
Defines a specific configuration to be used in the context of semantic capabilities. |
SemanticField |
A field that is used as part of the semantic configuration. |
SemanticPrioritizedFields |
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers. |
SemanticSearch |
Defines parameters for a search index that influence semantic capabilities. |
SentimentSkill |
This skill is deprecated. Use the V3.SentimentSkill instead. |
ShaperSkill |
A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields). |
ShingleTokenFilter |
Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene. |
SimilarityAlgorithm |
Base type for similarity algorithms. Similarity algorithms are used to calculate scores that tie queries to documents. The higher the score, the more relevant the document is to that specific query. Those scores are used to rank the search results. Note that SimilarityAlgorithm is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include BM25Similarity and ClassicSimilarity. |
SimpleField |
A simple field using a primitive type or a collection of a primitive type. |
SnowballTokenFilter |
A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene. |
SoftDeleteColumnDeletionDetectionPolicy |
Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column. |
SplitSkill |
A skill to split a string into chunks of text. |
SqlIntegratedChangeTrackingPolicy |
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
StemmerOverrideTokenFilter |
Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene. |
StemmerTokenFilter |
Language specific stemming filter. This token filter is implemented using Apache Lucene. |
StopAnalyzer |
Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene. |
StopwordsTokenFilter |
Removes stop words from a token stream. This token filter is implemented using Apache Lucene. |
SynonymMap |
Represents a synonym map definition. |
SynonymTokenFilter |
Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene. |
TagScoringFunction |
Defines a function that boosts scores of documents with string values matching a given list of tags. |
TagScoringParameters |
Provides parameter values to a tag scoring function. |
TextTranslationSkill |
A skill to translate text from one language to another. |
TextWeights |
Defines weights on index fields for which matches should boost scoring in search queries. |
TokenFilter |
Base type for token filters. Note that TokenFilter is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include AsciiFoldingTokenFilter, CjkBigramTokenFilter, CommonGramTokenFilter, DictionaryDecompounderTokenFilter, EdgeNGramTokenFilter, ElisionTokenFilter, KeepTokenFilter, KeywordMarkerTokenFilter, LengthTokenFilter, LimitTokenFilter, NGramTokenFilter, PatternCaptureTokenFilter, PatternReplaceTokenFilter, PhoneticTokenFilter, ShingleTokenFilter, SnowballTokenFilter, StemmerOverrideTokenFilter, StemmerTokenFilter, StopwordsTokenFilter, SynonymTokenFilter, TruncateTokenFilter, UniqueTokenFilter and WordDelimiterTokenFilter. |
TruncateTokenFilter |
Truncates the terms to a specific length. This token filter is implemented using Apache Lucene. |
UaxUrlEmailTokenizer |
Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene. |
UniqueTokenFilter |
Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene. |
VectorEncodingFormat.Values |
The values of all declared VectorEncodingFormat properties as string constants. These can be used in VectorSearchFieldAttribute and anywhere else constants are required. |
VectorSearch |
Contains configuration options related to vector search. |
VectorSearchAlgorithmConfiguration |
Contains configuration options specific to the algorithm used during indexing or querying. Note that VectorSearchAlgorithmConfiguration is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include ExhaustiveKnnAlgorithmConfiguration and HnswAlgorithmConfiguration. |
VectorSearchCompression |
Contains configuration options specific to the compression method used during indexing or querying. Note that VectorSearchCompression is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include BinaryQuantizationCompression and ScalarQuantizationCompression. |
VectorSearchField |
A searchable vector field of type "Collection(Single)". |
VectorSearchProfile |
Defines a combination of configurations to use with vector search. |
VectorSearchVectorizer |
Specifies the vectorization method to be used during query time. Note that VectorSearchVectorizer is the base class; depending on the scenario, a derived class may need to be assigned here, or this property may need to be cast to one of the derived classes. The available derived classes include AIServicesVisionVectorizer, AzureMachineLearningVectorizer, AzureOpenAIVectorizer and WebApiVectorizer. |
VisionVectorizeSkill |
Allows you to generate a vector embedding for a given image or text input using the Azure AI Services Vision Vectorize API. |
WebApiSkill |
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
WebApiVectorizer |
Specifies a user-defined vectorizer for generating the vector embedding of a query string. Integration of an external vectorizer is achieved using the custom Web API interface of a skillset. |
WebApiVectorizerParameters |
Specifies the properties for connecting to a user-defined vectorizer. |
WordDelimiterTokenFilter |
Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene. |
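To show how several of these classes compose, here is a minimal index-definition sketch. It assumes the Azure.Search.Documents package; the index name, field names, and analyzer name (`my_custom_analyzer`) are illustrative placeholders, not part of this reference.

```csharp
using Azure.Search.Documents.Indexes.Models;

// SearchIndex ties together field helpers (SimpleField, SearchableField,
// ComplexField), a CustomAnalyzer, a ScoringProfile, and a SearchSuggester.
var index = new SearchIndex("hotels-sample")
{
    Fields =
    {
        new SimpleField("hotelId", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("description") { AnalyzerName = "my_custom_analyzer" },
        new SimpleField("rating", SearchFieldDataType.Double)
        {
            IsSortable = true, IsFilterable = true
        },
        // ComplexField holds child fields, which may themselves be
        // SimpleField or ComplexField instances.
        new ComplexField("address")
        {
            Fields =
            {
                new SearchableField("city") { IsFilterable = true, IsFacetable = true }
            }
        }
    },
    Analyzers =
    {
        // CustomAnalyzer: a single predefined tokenizer plus one or more filters.
        new CustomAnalyzer("my_custom_analyzer", LexicalTokenizerName.Standard)
        {
            TokenFilters = { TokenFilterName.Lowercase, TokenFilterName.AsciiFolding }
        }
    },
    ScoringProfiles =
    {
        // MagnitudeScoringFunction boosts documents whose 'rating' value
        // falls in the 4-5 range, up to a 2x boost.
        new ScoringProfile("boost-by-rating")
        {
            Functions =
            {
                new MagnitudeScoringFunction("rating", 2.0,
                    new MagnitudeScoringParameters(4, 5))
            }
        }
    },
    Suggesters = { new SearchSuggester("sg", "description") }
};
```

The object above is only a definition; creating the index still requires a `SearchIndexClient` and a live search service.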
Structs
AIStudioModelCatalogName |
The name of the embedding model from the Azure AI Studio Catalog that will be called. |
AzureOpenAIModelName |
The Azure OpenAI model name that will be called. |
BlobIndexerDataToExtract |
Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none". This applies to embedded image content in a PDF or other application file, or image files such as .jpg and .png, in Azure blobs. |
BlobIndexerImageAction |
Determines how to process embedded images and image files in Azure blob storage. Setting the "imageAction" configuration to any value other than "none" requires that a skillset also be attached to that indexer. |
BlobIndexerParsingMode |
Represents the parsing mode for indexing from an Azure blob data source. |
BlobIndexerPdfTextRotationAlgorithm |
Determines the algorithm for text extraction from PDF files in Azure blob storage. |
CharFilterName |
Defines the names of all character filters supported by the search engine. |
CustomEntityLookupSkillLanguage |
The language codes supported for input text by CustomEntityLookupSkill. |
DocumentIntelligenceLayoutSkillMarkdownHeaderDepth |
The depth of headers in the markdown output. Default is h6. |
DocumentIntelligenceLayoutSkillOutputMode |
Controls the cardinality of the output produced by the skill. Default is 'oneToMany'. |
EntityCategory |
A string indicating what entity categories to return. |
EntityRecognitionSkill.SkillVersion |
Represents service version information of an EntityRecognitionSkill. |
EntityRecognitionSkillLanguage |
Deprecated. The language codes supported for input text by EntityRecognitionSkill. |
ImageAnalysisSkillLanguage |
The language codes supported for input by ImageAnalysisSkill. |
ImageDetail |
A string indicating which domain-specific details to return. |
IndexerExecutionEnvironment |
Specifies the environment in which the indexer should execute. |
IndexerExecutionStatusDetail |
Details the status of an individual indexer execution. |
IndexingMode |
Represents the mode the indexer is executing in. |
IndexProjectionMode |
Defines behavior of the index projections in relation to the rest of the indexer. |
KeyPhraseExtractionSkillLanguage |
The language codes supported for input text by KeyPhraseExtractionSkill. |
LexicalAnalyzerName |
Defines the names of all text analyzers supported by the search engine. |
LexicalNormalizerName |
Defines the names of all text normalizers supported by the search engine. |
LexicalTokenizerName |
Defines the names of all tokenizers supported by the search engine. |
MarkdownHeaderDepth |
Specifies the max header depth that will be considered while grouping markdown content. Default is h6. |
MarkdownParsingSubmode |
Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or multiple search documents. Default is 'oneToMany'. |
OcrLineEnding |
Defines the sequence of characters to use between the lines of text recognized by the OCR skill. The default value is "space". |
OcrSkillLanguage |
The language codes supported for input by OcrSkill. |
PiiDetectionSkillMaskingMode |
A string indicating what maskingMode to use to mask the personal information detected in the input text. |
RegexFlag |
Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer. |
SearchFieldDataType |
Defines the data type of a field in a search index. |
SearchIndexerDataSourceType |
Defines the type of a datasource. |
SentimentSkill.SkillVersion |
Represents service version information of a SentimentSkill. |
SentimentSkillLanguage |
Deprecated. The language codes supported for input text by SentimentSkill. |
SplitSkillEncoderModelName |
A value indicating which tokenizer to use. |
SplitSkillLanguage |
The language codes supported for input text by SplitSkill. |
SplitSkillUnit |
A value indicating which unit to use. |
TextSplitMode |
A value indicating which split mode to perform. |
TextTranslationSkillLanguage |
The language codes supported for input text by TextTranslationSkill. |
TokenFilterName |
Defines the names of all token filters supported by the search engine. |
VectorEncodingFormat |
The encoding format for interpreting vector field contents. |
VectorSearchAlgorithmMetric |
The similarity metric to use for vector comparisons. It is recommended to choose the same similarity metric as the embedding model was trained on. |
VectorSearchCompressionRescoreStorageMethod |
The storage method for the original full-precision vectors used for rescoring and internal index operations. |
VectorSearchCompressionTarget |
The quantized data type of compressed vector values. |
VisualFeature |
The strings indicating what visual feature types to return. |
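Several of the struct values above plug directly into a vector search configuration. The following sketch is illustrative only: the configuration names, vector dimensions, and the endpoint and deployment values are placeholders.

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

// VectorSearch combines an algorithm configuration, a vectorizer,
// and a profile that binds them together.
var vectorSearch = new VectorSearch
{
    Algorithms =
    {
        new HnswAlgorithmConfiguration("hnsw-config")
        {
            Parameters = new HnswParameters
            {
                // VectorSearchAlgorithmMetric should match the metric the
                // embedding model was trained on.
                Metric = VectorSearchAlgorithmMetric.Cosine,
                M = 4,
                EfConstruction = 400
            }
        }
    },
    Vectorizers =
    {
        new AzureOpenAIVectorizer("aoai-vectorizer")
        {
            Parameters = new AzureOpenAIVectorizerParameters
            {
                ResourceUri = new Uri("https://example.openai.azure.com"),
                DeploymentName = "text-embedding-3-small",
                ModelName = AzureOpenAIModelName.TextEmbedding3Small
            }
        }
    },
    Profiles =
    {
        new VectorSearchProfile("vector-profile", "hnsw-config")
        {
            VectorizerName = "aoai-vectorizer"
        }
    }
};

// A VectorSearchField is a searchable "Collection(Single)" field bound
// to a profile; 1536 is the dimension of the assumed embedding model.
var contentVector = new VectorSearchField("contentVector", 1536, "vector-profile");
```

The `vectorSearch` object is assigned to the `VectorSearch` property of a SearchIndex alongside its fields.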
Enums
CjkBigramTokenFilterScripts |
Scripts that can be ignored by CjkBigramTokenFilter. |
EdgeNGramTokenFilterSide |
Specifies which side of the input an n-gram should be generated from. |
IndexerExecutionStatus |
Represents the status of an individual indexer execution. |
IndexerStatus |
Represents the overall indexer status. |
MicrosoftStemmingTokenizerLanguage |
Lists the languages supported by the Microsoft language stemming tokenizer. |
MicrosoftTokenizerLanguage |
Lists the languages supported by the Microsoft language tokenizer. |
PhoneticEncoder |
Identifies the type of phonetic encoder to use with a PhoneticTokenFilter. |
ScoringFunctionAggregation |
Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile. |
ScoringFunctionInterpolation |
Defines the function used to interpolate score boosting across a range of documents. |
SnowballTokenFilterLanguage |
The language to use for a Snowball token filter. |
StemmerTokenFilterLanguage |
The language to use for a stemmer token filter. |
StopwordsList |
Identifies a predefined list of language-specific stopwords. |
TokenCharacterKind |
Represents classes of characters on which a token filter can operate. |
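As a sketch of how these enums are consumed, the token filters below take enum values directly; the filter names are arbitrary placeholders, and the property values are examples rather than recommended settings.

```csharp
using Azure.Search.Documents.Indexes.Models;

// EdgeNGramTokenFilterSide selects which end of the token n-grams start from.
var edgeNGram = new EdgeNGramTokenFilter("edge_front")
{
    MinGram = 2,
    MaxGram = 10,
    Side = EdgeNGramTokenFilterSide.Front
};

// PhoneticEncoder picks the phonetic algorithm used for matching.
var phonetic = new PhoneticTokenFilter("phonetic_metaphone")
{
    Encoder = PhoneticEncoder.DoubleMetaphone,
    ReplaceOriginalTokens = false
};

// SnowballTokenFilterLanguage is required by the filter's constructor.
var snowball = new SnowballTokenFilter("snowball_en",
    SnowballTokenFilterLanguage.English);

// StopwordsList selects a predefined language-specific stopword list.
var stopwords = new StopwordsTokenFilter("stop_en")
{
    StopwordsList = StopwordsList.English
};
```

Each filter is then added to the `TokenFilters` collection of a SearchIndex and referenced by name from a CustomAnalyzer.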