Classes

| Class | Description |
| --- | --- |
| AzureKeyCredential | A static-key-based credential that supports updating the underlying key value. |
| GeographyPoint | Represents a geographic point in global coordinates. |
| SearchClient | Class used to perform operations against a search index, including querying documents in the index as well as adding, updating, and removing them. |
| SearchIndexingBufferedSender | Class used to perform buffered operations against a search index, including adding, updating, and removing them. |

Interfaces

| Interface | Description |
| --- | --- |
| AnalyzeRequest | Specifies some text and analysis components used to break that text into tokens. |
| AnalyzeResult | The result of testing an analyzer on text. |
| AnalyzedTokenInfo | Information about a token returned by an analyzer. |
| AutocompleteRequest | Parameters for fuzzy matching, and other autocomplete query behaviors. |
| AzureActiveDirectoryApplicationCredentials | Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault. |
| BaseCognitiveServicesAccount | Base type for describing any cognitive service resource attached to a skillset. |
| BaseDataChangeDetectionPolicy | Base type for data change detection policies. |
| BaseDataDeletionDetectionPolicy | Base type for data deletion detection policies. |
| BaseScoringFunction | Base type for functions that can modify document scores during ranking. |
| ComplexField | Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
| CorsOptions | Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
| CreateorUpdateDataSourceConnectionOptions | Options for create/update datasource operation. |
| CreateOrUpdateIndexOptions | Options for create/update index operation. |
| CreateorUpdateIndexerOptions | Options for create/update indexer operation. |
| CreateOrUpdateSkillsetOptions | Options for create/update skillset operation. |
| CreateOrUpdateSynonymMapOptions | Options for create/update synonymmap operation. |
| CustomAnalyzer | Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer. |
| CustomEntity | An object that contains information about the matches that were found, and related metadata. |
| CustomEntityAlias | A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
| DistanceScoringParameters | Provides parameter values to a distance scoring function. |
| EdgeNGramTokenFilter | Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene. |
| FacetResult | A single bucket of a facet query result. Reports the number of documents with a field value falling within a particular range or having a particular value or interval. |
| FieldMapping | Defines a mapping between a field in a data source and a target field in an index. |
| FieldMappingFunction | Represents a function that transforms a value from a data source before indexing. |
| FreshnessScoringParameters | Provides parameter values to a freshness scoring function. |
| GetDocumentOptions | Options for retrieving a single document. |
| IndexDocumentsOptions | Options for the modify index batch operation. |
| IndexDocumentsResult | Response containing the status of operations for all documents in the indexing request. |
| IndexerExecutionResult | Represents the result of an individual indexer execution. |
| IndexingParameters | Represents parameters for indexer execution. |
| IndexingParametersConfiguration | A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. |
| IndexingResult | Status of an indexing operation for a single document. |
| IndexingSchedule | Represents a schedule for indexer execution. |
| KeywordTokenizer | Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene. |
| ListSearchResultsPageSettings | Arguments for retrieving the next page of search results. |
| LuceneStandardTokenizer | Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene. |
| MagnitudeScoringParameters | Provides parameter values to a magnitude scoring function. |
| PatternAnalyzer | Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene. |
| PatternTokenizer | Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene. |
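Several of the index-definition types in the listing (fields, CustomAnalyzer, CorsOptions) fit together inside a single index description. The sketch below shows that wiring with plain object literals, so it runs without the SDK installed; the property shapes follow the descriptions above, but the index name, field names, analyzer name, and tokenizer/filter choices are invented for illustration.

```typescript
// Illustrative sketch only; names and values here are hypothetical.

// Rough shape of a field in an index definition: name, data type,
// and search behavior.
type FieldSketch = {
  name: string;
  type: string;
  key?: boolean;
  searchable?: boolean;
  analyzerName?: string;
};

const fields: FieldSketch[] = [
  { name: "hotelId", type: "Edm.String", key: true },
  // Analyzed by the custom analyzer defined below, referenced by name.
  { name: "description", type: "Edm.String", searchable: true, analyzerName: "my_custom_analyzer" },
];

const index = {
  name: "hotels-sample", // hypothetical index name
  fields,
  // CustomAnalyzer: a single predefined tokenizer plus token filters
  // that modify the tokens it emits.
  analyzers: [
    {
      odatatype: "#Microsoft.Azure.Search.CustomAnalyzer",
      name: "my_custom_analyzer",
      tokenizerName: "standard_v2",
      tokenFilters: ["lowercase", "asciifolding"],
    },
  ],
  // CorsOptions: which browser origins may issue requests against the index.
  corsOptions: {
    allowedOrigins: ["https://example.com"],
    maxAgeInSeconds: 300,
  },
};

console.log(index.fields[1].analyzerName === index.analyzers[0].name); // true
```

Note that the analyzer is attached to a field by name rather than by object reference, which is why the field's `analyzerName` string must match the analyzer's `name`.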