@elastic/elasticsearch - npm Package Compare versions

Comparing version 8.17.0 to 8.17.1


lib/api/api/bulk.d.ts

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Bulk index or delete documents. Performs multiple indexing or delete operations in a single API call. This reduces overhead and can greatly increase indexing speed.
* Bulk index or delete documents. Perform multiple `index`, `create`, `delete`, and `update` actions in a single request. This reduces overhead and can greatly increase indexing speed. If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias: * To use the `create` action, you must have the `create_doc`, `create`, `index`, or `write` index privilege. Data streams support only the `create` action. * To use the `index` action, you must have the `create`, `index`, or `write` index privilege. * To use the `delete` action, you must have the `delete` or `write` index privilege. * To use the `update` action, you must have the `index` or `write` index privilege. * To automatically create a data stream or index with a bulk API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege. * To make the result of a bulk operation visible to search using the `refresh` parameter, you must have the `maintenance` or `manage` index privilege. Automatic data stream creation requires a matching index template with data stream enabled. The actions are specified in the request body using a newline delimited JSON (NDJSON) structure: ``` action_and_meta_data\n optional_source\n action_and_meta_data\n optional_source\n .... action_and_meta_data\n optional_source\n ``` The `index` and `create` actions expect a source on the next line and have the same semantics as the `op_type` parameter in the standard index API. A `create` action fails if a document with the same ID already exists in the target. An `index` action adds or replaces a document as necessary. NOTE: Data streams support only the `create` action. To update or delete a document in a data stream, you must target the backing index containing the document. An `update` action expects that the partial doc, upsert, and script and its options are specified on the next line. A `delete` action does not expect a source on the next line and has the same semantics as the standard delete API. NOTE: The final line of data must end with a newline character (`\n`). Each newline character may be preceded by a carriage return (`\r`). When sending NDJSON data to the `_bulk` endpoint, use a `Content-Type` header of `application/json` or `application/x-ndjson`. Because this format uses literal newline characters (`\n`) as delimiters, make sure that the JSON actions and sources are not pretty printed. If you provide a target in the request path, it is used for any actions that don't explicitly specify an `_index` argument. A note on the format: the idea here is to make processing as fast as possible. As some of the actions are redirected to other shards on other nodes, only `action_meta_data` is parsed on the receiving node side. Client libraries using this protocol should try and strive to do something similar on the client side, and reduce buffering as much as possible. There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of a HTTP request to 100mb by default so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch.
For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch. **Client support for bulk requests** Some of the officially supported clients provide helpers to assist with bulk requests and reindexing: * Go: Check out `esutil.BulkIndexer` * Perl: Check out `Search::Elasticsearch::Client::5_0::Bulk` and `Search::Elasticsearch::Client::5_0::Scroll` * Python: Check out `elasticsearch.helpers.*` * JavaScript: Check out `client.helpers.*` * .NET: Check out `BulkAllObservable` * PHP: Check out bulk indexing. **Submitting bulk requests with cURL** If you're providing text file input to `curl`, you must use the `--data-binary` flag instead of plain `-d`. The latter doesn't preserve newlines. For example: ``` $ cat requests { "index" : { "_index" : "test", "_id" : "1" } } { "field1" : "value1" } $ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo {"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]} ``` **Optimistic concurrency control** Each `index` and `delete` action within a bulk API call may include the `if_seq_no` and `if_primary_term` parameters in their respective action and meta data lines. The `if_seq_no` and `if_primary_term` parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details. **Versioning** Each bulk item can include the version value using the `version` field. It automatically follows the behavior of the index or delete operation based on the `_version` mapping. It also supports the `version_type`. **Routing** Each bulk item can include the routing value using the `routing` field. It automatically follows the behavior of the index or delete operation based on the `_routing` mapping. NOTE: Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template. **Wait for active shards** When making bulk calls, you can set the `wait_for_active_shards` parameter to require a minimum number of shard copies to be active before starting to process the bulk request. **Refresh** Control when the changes made by this request are visible to search. NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a `_bulk?refresh=wait_for` request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the `_bulk` request at all.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-bulk.html | Elasticsearch API documentation}
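
Since this package is the JavaScript client that doc string ships with, a short illustration may help. The sketch below is not taken from the package; it assumes a locally running node and uses the 8.x `operations` array, which mirrors the NDJSON action/source pairs described above.

```ts
import { Client } from '@elastic/elasticsearch'

// Assumption: a local unsecured node; adjust connection details for your cluster.
const client = new Client({ node: 'http://localhost:9200' })

// Alternating action and source entries stand in for the NDJSON lines.
const result = await client.bulk({
  index: 'test',        // default target for actions without an explicit _index
  refresh: 'wait_for',  // needs the maintenance or manage privilege, per the doc above
  operations: [
    { index: { _id: '1' } },
    { field1: 'value1' },
    { delete: { _id: '2' } }, // delete actions carry no source line
  ],
})
if (result.errors) console.error('some bulk items failed', result.items)
```

For larger workloads, the helper the doc points at (`client.helpers.*`) batches and retries for you; see `client.helpers.bulk`.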

lib/api/api/cat.d.ts

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get aliases. Retrieves the cluster’s index aliases, including filter and routing information. The API does not return data stream aliases. CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
* Get aliases. Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases. IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-alias.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Provides a snapshot of the number of shards allocated to each data node and their disk space. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
* Get shard allocation information. Get a snapshot of the number of shards allocated to each data node and their disk space. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-allocation.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Get component templates. Returns information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
* Get component templates. Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-component-templates.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Get a document count. Provides quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
* Get a document count. Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-count.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Returns the amount of heap memory currently used by the field data cache on every data node in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
* Get field data cache information. Get the amount of heap memory currently used by the field data cache on every data node in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-fielddata.html | Elasticsearch API documentation}

@@ -51,3 +51,3 @@ */

/**
* Returns the health status of a cluster, similar to the cluster health API. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API. This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: `HH:MM:SS`, which is human-readable but includes no date information; `Unix epoch time`, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days. You can use the cat health API to verify cluster health across multiple nodes. You also can use the API to track the recovery of a large cluster over a longer period of time.
* Get the cluster health status. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API. This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: `HH:MM:SS`, which is human-readable but includes no date information; `Unix epoch time`, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days. You can use the cat health API to verify cluster health across multiple nodes. You also can use the API to track the recovery of a large cluster over a longer period of time.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-health.html | Elasticsearch API documentation}

@@ -59,3 +59,3 @@ */

/**
* Get CAT help. Returns help for the CAT APIs.
* Get CAT help. Get help for the CAT APIs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat.html | Elasticsearch API documentation}

@@ -67,3 +67,3 @@ */

/**
* Get index information. Returns high-level information about indices in a cluster, including backing indices for data streams. Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
* Get index information. Get high-level information about indices in a cluster, including backing indices for data streams. Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs. CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-indices.html | Elasticsearch API documentation}

@@ -75,3 +75,3 @@ */

/**
* Returns information about the master node, including the ID, bound IP address, and name. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* Get master node information. Get information about the master node, including the ID, bound IP address, and name. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-master.html | Elasticsearch API documentation}

@@ -83,3 +83,3 @@ */

/**
* Get data frame analytics jobs. Returns configuration and usage information about data frame analytics jobs. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
* Get data frame analytics jobs. Get configuration and usage information about data frame analytics jobs. IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-dfanalytics.html | Elasticsearch API documentation}

@@ -91,3 +91,3 @@ */

/**
* Get datafeeds. Returns configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
* Get datafeeds. Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-datafeeds.html | Elasticsearch API documentation}

@@ -99,3 +99,3 @@ */

/**
* Get anomaly detection jobs. Returns configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
* Get anomaly detection jobs. Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have `monitor_ml`, `monitor`, `manage_ml`, or `manage` cluster privileges to use this API. IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-anomaly-detectors.html | Elasticsearch API documentation}

@@ -107,3 +107,3 @@ */

/**
* Get trained models. Returns configuration and usage information about inference trained models. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
* Get trained models. Get configuration and usage information about inference trained models. IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-trained-model.html | Elasticsearch API documentation}

@@ -115,3 +115,3 @@ */

/**
* Returns information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* Get node attribute information. Get information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-nodeattrs.html | Elasticsearch API documentation}

@@ -123,3 +123,3 @@ */

/**
* Returns information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* Get node information. Get information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-nodes.html | Elasticsearch API documentation}

@@ -131,3 +131,3 @@ */

/**
* Returns cluster-level changes that have not yet been executed. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
* Get pending task information. Get information about cluster-level changes that have not yet taken effect. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-pending-tasks.html | Elasticsearch API documentation}

@@ -139,3 +139,3 @@ */

/**
* Returns a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* Get plugin information. Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-plugins.html | Elasticsearch API documentation}

@@ -147,3 +147,3 @@ */

/**
* Returns information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
* Get shard recovery information. Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-recovery.html | Elasticsearch API documentation}

@@ -155,3 +155,3 @@ */

/**
* Returns the snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
* Get snapshot repository information. Get a list of snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-repositories.html | Elasticsearch API documentation}

@@ -163,3 +163,3 @@ */

/**
* Returns low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.
* Get segment information. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-segments.html | Elasticsearch API documentation}

@@ -171,3 +171,3 @@ */

/**
* Returns information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
* Get shard information. Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-shards.html | Elasticsearch API documentation}

@@ -179,3 +179,3 @@ */

/**
* Returns information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
* Get snapshot information. Get information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-snapshots.html | Elasticsearch API documentation}

@@ -187,3 +187,3 @@ */

/**
* Returns information about tasks currently executing in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
* Get task information. Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/tasks.html | Elasticsearch API documentation}

@@ -195,3 +195,3 @@ */

/**
* Returns information about index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
* Get index template information. Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-templates.html | Elasticsearch API documentation}

@@ -203,3 +203,3 @@ */

/**
* Returns thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* Get thread pool statistics. Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-thread-pool.html | Elasticsearch API documentation}

@@ -211,3 +211,3 @@ */

/**
* Get transforms. Returns configuration and usage information about transforms. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.
* Get transform information. Get configuration and usage information about transforms. CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cat-transforms.html | Elasticsearch API documentation}

lib/api/api/ccr.d.ts

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes auto-follow patterns.
* Delete auto-follow patterns. Delete a collection of cross-cluster replication auto-follow patterns.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-delete-auto-follow-pattern.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Creates a new follower index configured to follow the referenced leader index.
* Create a follower. Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-put-follow.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Retrieves information about all follower indices, including parameters and status for each follower index
* Get follower information. Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-get-follow-info.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Retrieves follower stats. return shard-level stats about the following tasks associated with each shard for the specified indices.
* Get follower stats. Get cross-cluster replication follower stats. The API returns shard-level stats about the "following tasks" associated with each shard for the specified indices.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-get-follow-stats.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Removes the follower retention leases from the leader.
* Forget a follower. Remove the cross-cluster replication follower retention leases from the leader. A following index takes out retention leases on its leader index. These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication. When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed. However, removal of the leases can fail, for example when the remote cluster containing the leader index is unavailable. While the leases will eventually expire on their own, their extended existence can cause the leader index to hold more history than necessary and prevent index lifecycle management from performing some operations on the leader index. This API exists to enable manually removing the leases when the unfollow API is unable to do so. NOTE: This API does not stop replication by a following index. If you use this API with a follower index that is still actively following, the following index will add back retention leases on the leader. The only purpose of this API is to handle the case of failure to remove the following retention leases after the unfollow API is invoked.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-post-forget-follower.html | Elasticsearch API documentation}
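
In this client the operation above maps to `ccr.forgetFollower`, called against the leader index. A hedged sketch; every name and the UUID below are placeholders you would read from your own follower index:

```ts
import { Client } from '@elastic/elasticsearch'

// Assumption: a node on the cluster that holds the leader index.
const leader = new Client({ node: 'http://leader-node:9200' })

// Manually remove retention leases that the unfollow API failed to clean up.
await leader.ccr.forgetFollower({
  index: 'leader-index',                         // leader index holding the leases
  follower_cluster: 'follower-cluster',
  follower_index: 'follower-index',
  follower_index_uuid: 'vYpnaWPRQB6mNspmoCeYyA', // placeholder UUID
  leader_remote_cluster: 'leader-cluster-alias',
})
```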

@@ -51,3 +51,3 @@ */

/**
* Gets configured auto-follow patterns. Returns the specified auto-follow pattern collection.
* Get auto-follow patterns. Get cross-cluster replication auto-follow patterns.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-get-auto-follow-pattern.html | Elasticsearch API documentation}

@@ -59,3 +59,3 @@ */

/**
* Pauses an auto-follow pattern
* Pause an auto-follow pattern. Pause a cross-cluster replication auto-follow pattern. When the API returns, the auto-follow pattern is inactive. New indices that are created on the remote cluster and match the auto-follow patterns are ignored. You can resume auto-following with the resume auto-follow pattern API. When it resumes, the auto-follow pattern is active again and automatically configures follower indices for newly created indices on the remote cluster that match its patterns. Remote indices that were created while the pattern was paused will also be followed, unless they have been deleted or closed in the interim.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-pause-auto-follow-pattern.html | Elasticsearch API documentation}

@@ -67,3 +67,3 @@ */

/**
* Pauses a follower index. The follower index will not fetch any additional operations from the leader index.
* Pause a follower. Pause a cross-cluster replication follower index. The follower index will not fetch any additional operations from the leader index. You can resume following with the resume follower API. You can pause and resume a follower index to change the configuration of the following task.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-post-pause-follow.html | Elasticsearch API documentation}

@@ -75,3 +75,3 @@ */

/**
* Creates a new named collection of auto-follow patterns against a specified remote cluster. Newly created indices on the remote cluster matching any of the specified patterns will be automatically configured as follower indices.
* Create or update auto-follow patterns. Create a collection of cross-cluster replication auto-follow patterns for a remote cluster. Newly created indices on the remote cluster that match any of the patterns are automatically configured as follower indices. Indices on the remote cluster that were created before the auto-follow pattern was created will not be auto-followed even if they match the pattern. This API can also be used to update auto-follow patterns. NOTE: Follower indices that were configured automatically before updating an auto-follow pattern will remain unchanged even if they do not match against the new patterns.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-put-auto-follow-pattern.html | Elasticsearch API documentation}
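
A sketch of the corresponding client call, with placeholder names; it assumes `leader-alias` is already configured as a remote cluster on the follower:

```ts
import { Client } from '@elastic/elasticsearch'

const follower = new Client({ node: 'http://follower-node:9200' }) // placeholder node

// New leader indices matching the patterns are auto-followed;
// indices that existed before the pattern was created are not, per the note above.
await follower.ccr.putAutoFollowPattern({
  name: 'logs-pattern', // pattern collection name (placeholder)
  remote_cluster: 'leader-alias',
  leader_index_patterns: ['logs-*'],
  follow_index_pattern: '{{leader_index}}-follower',
})
```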

@@ -83,3 +83,3 @@ */

/**
* Resumes an auto-follow pattern that has been paused
* Resume an auto-follow pattern. Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-resume-auto-follow-pattern.html | Elasticsearch API documentation}

@@ -91,3 +91,3 @@ */

/**
* Resumes a follower index that has been paused
* Resume a follower. Resume a cross-cluster replication follower index that was paused. The follower index could have been paused with the pause follower API. Alternatively it could be paused due to replication that cannot be retried due to failures during following tasks. When this API returns, the follower index will resume fetching operations from the leader index.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-post-resume-follow.html | Elasticsearch API documentation}

@@ -99,3 +99,3 @@ */

/**
* Gets all stats related to cross-cluster replication.
* Get cross-cluster replication stats. This API returns stats about auto-following and the same shard-level stats as the get follower stats API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-get-stats.html | Elasticsearch API documentation}

@@ -107,3 +107,3 @@ */

/**
* Stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication.
* Unfollow an index. Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API. NOTE: Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ccr-post-unfollow.html | Elasticsearch API documentation}
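
The precondition in the doc (pause, then close, then unfollow) becomes a three-call sequence in this client. A sketch with a placeholder index name:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://follower-node:9200' }) // placeholder node

// A follower must be paused and closed before it can be converted.
await client.ccr.pauseFollow({ index: 'follower-index' })
await client.indices.close({ index: 'follower-index' })
await client.ccr.unfollow({ index: 'follower-index' })
// The index is now a regular index; per the doc, this is irreversible.
```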

lib/api/api/cluster.d.ts

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Provides explanations for shard allocations in the cluster.
* Explain the shard allocations. Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-allocation-explain.html | Elasticsearch API documentation}
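
A minimal sketch of requesting an explanation for one shard copy, assuming a local node and a placeholder index name:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// Why is this primary shard where it is (or why is it unassigned)?
const explanation = await client.cluster.allocationExplain({
  index: 'my-index', // placeholder
  shard: 0,
  primary: true,
})
console.log(explanation.can_allocate, explanation.allocate_explanation)
```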

@@ -19,3 +19,3 @@ */

/**
* Delete component templates. Deletes component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
* Delete component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/indices-component-template.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Clears cluster voting config exclusions.
* Clear cluster voting config exclusions. Remove master-eligible nodes from the voting configuration exclusion list.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/voting-config-exclusions.html | Elasticsearch API documentation}

@@ -42,3 +42,3 @@ */

/**
* Get component templates. Retrieves information about component templates.
* Get component templates. Get information about component templates.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/indices-component-template.html | Elasticsearch API documentation}

@@ -50,3 +50,3 @@ */

/**
* Returns cluster-wide settings. By default, it returns only settings that have been explicitly defined.
* Get cluster-wide settings. By default, it returns only settings that have been explicitly defined.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-get-settings.html | Elasticsearch API documentation}

@@ -58,3 +58,3 @@ */

/**
* The cluster health API returns a simple status on the health of the cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices. The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster, yellow means that the primary shard is allocated but replicas are not, and green means that all shards are allocated. The index level status is controlled by the worst shard status. The cluster status is controlled by the worst index status.
* Get the cluster health status. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices. The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status. One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-health.html | Elasticsearch API documentation}
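
The "wait until the cluster reaches a certain high watermark health level" behavior mentioned above is exposed through `wait_for_status`. A sketch, assuming a local node:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// Block (up to the timeout) until the cluster is at least yellow.
const health = await client.cluster.health({
  wait_for_status: 'yellow',
  timeout: '30s',
})
console.log(health.status, health.timed_out)
```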

@@ -73,3 +73,3 @@ */

/**
* Returns cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet been executed. NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the Task Management API which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both task api and pending cluster tasks API.
* Get the pending cluster tasks. Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect. NOTE: This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both task api and pending cluster tasks API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-pending.html | Elasticsearch API documentation}

@@ -81,3 +81,3 @@ */

/**
* Updates the cluster voting config exclusions by node ids or node names.
* Update voting configuration exclusions. Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes. Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with `DELETE /_cluster/voting_config_exclusions`. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use `DELETE /_cluster/voting_config_exclusions?wait_for_removal=false` to clear the voting configuration exclusions without waiting for the nodes to leave the cluster. A response to `POST /_cluster/voting_config_exclusions` with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling `DELETE /_cluster/voting_config_exclusions`. If the call to `POST /_cluster/voting_config_exclusions` fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration. In that case, you may safely retry the call. NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/voting-config-exclusions.html | Elasticsearch API documentation}
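
The POST/DELETE pair described above maps to two client methods. A sketch with a placeholder node name:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// Exclude a departing master-eligible node from the voting configuration...
await client.cluster.postVotingConfigExclusions({ node_names: 'node-1' })
// ...shut the node down, then clear the exclusion list, per the doc above.
await client.cluster.deleteVotingConfigExclusions()
```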

@@ -89,3 +89,3 @@ */

/**
* Create or update a component template. Creates or updates a component template. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. An index template can be composed of multiple component templates. To use a component template, specify it in an index template’s `composed_of` list. Component templates are only applied to new data streams and indices as part of a matching index template. Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template. Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices. You can use C-style `/* *\/` block comments in component templates. You can include comments anywhere in the request body except before the opening curly bracket.
* Create or update a component template. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. An index template can be composed of multiple component templates. To use a component template, specify it in an index template’s `composed_of` list. Component templates are only applied to new data streams and indices as part of a matching index template. Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template. Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices. You can use C-style `/* *\/` block comments in component templates. You can include comments anywhere in the request body except before the opening curly bracket. **Applying component templates** You cannot directly apply a component template to a data stream or index. To be applied, a component template must be included in an index template's `composed_of` list.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/indices-component-template.html | Elasticsearch API documentation}
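
The "Applying component templates" note above is worth a sketch: a component template does nothing on its own until an index template lists it in `composed_of`. Names below are placeholders:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

await client.cluster.putComponentTemplate({
  name: 'shared-settings', // placeholder
  template: { settings: { number_of_replicas: 1 } },
})
await client.indices.putIndexTemplate({
  name: 'logs-template',            // placeholder
  index_patterns: ['logs-*'],
  composed_of: ['shared-settings'], // this is what applies the component template
})
```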

@@ -97,3 +97,3 @@ */

/**
* Updates the cluster settings.
* Update the cluster settings. Configure and update dynamic settings on a running cluster. You can also configure dynamic settings locally on an unstarted or shut down node in `elasticsearch.yml`. Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. You can also reset transient or persistent settings by assigning them a null value. If you configure the same setting using multiple methods, Elasticsearch applies the settings in following order of precedence: 1) Transient setting; 2) Persistent setting; 3) `elasticsearch.yml` setting; 4) Default setting value. For example, you can apply a transient setting to override a persistent setting or `elasticsearch.yml` setting. However, a change to an `elasticsearch.yml` setting will not override a defined transient or persistent setting. TIP: In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster. If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings. Only use `elasticsearch.yml` for static cluster settings and node settings. The API doesn’t require a restart and ensures a setting’s value is the same on all nodes. WARNING: Transient cluster settings are no longer recommended. Use persistent cluster settings instead. If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-update-settings.html | Elasticsearch API documentation}
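
A sketch of a persistent update followed by a null reset, as described above; the recovery throttle setting is just a common example of a dynamic cluster setting:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// Persistent settings survive full cluster restarts.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': '50mb' },
})
// Assigning null resets the setting to its default.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': null },
})
```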

@@ -105,3 +105,3 @@ */

/**
* The cluster remote info API allows you to retrieve all of the configured remote cluster information. It returns connection and endpoint information keyed by the configured remote cluster alias.
* Get remote cluster information. Get all of the configured remote cluster information. This API returns connection and endpoint information keyed by the configured remote cluster alias.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-remote-info.html | Elasticsearch API documentation}

@@ -113,3 +113,3 @@ */

/**
* Allows to manually change the allocation of individual shards in the cluster.
* Reroute the cluster. Manually change the allocation of individual shards in the cluster. For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node. It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as `cluster.routing.rebalance.enable`) in order to remain in a balanced state. For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out. The cluster can be set to disable allocations using the `cluster.routing.allocation.enable` setting. If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing. The cluster will attempt to allocate a shard a maximum of `index.allocation.max_retries` times in a row (defaults to `5`), before giving up and leaving the shard unallocated. This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes. Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the `?retry_failed` URI query parameter, which will attempt a single retry round for these shards.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-reroute.html | Elasticsearch API documentation}
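
A sketch of an explicit move plus the `?retry_failed` round described above; index and node names are placeholders:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

// Move one shard copy explicitly; rebalancing may still move others afterwards.
await client.cluster.reroute({
  commands: [
    { move: { index: 'my-index', shard: 0, from_node: 'node-1', to_node: 'node-2' } },
  ],
})
// After fixing the underlying problem, retry shards that exhausted max_retries.
await client.cluster.reroute({ retry_failed: true })
```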

@@ -121,3 +121,3 @@ */

/**
* Returns a comprehensive information about the state of the cluster.
* Get the cluster state. Get comprehensive information about the state of the cluster. The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; the location and status of every shard copy in the cluster. The elected master node ensures that every node in the cluster has a copy of the same cluster state. This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. You may need to consult the Elasticsearch source code to determine the precise meaning of the response. By default the API will route requests to the elected master node since this node is the authoritative source of cluster states. You can also retrieve the cluster state held on the node handling the API request by adding the `?local=true` query parameter. Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. If you use this API repeatedly, your cluster may become unstable. WARNING: The response is a representation of an internal data structure. Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. Do not query this API using external monitoring tools. Instead, obtain the information you require using other more stable cluster APIs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-state.html | Elasticsearch API documentation}

@@ -129,3 +129,3 @@ */

/**
* Returns cluster statistics. It returns basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
* Get cluster statistics. Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-stats.html | Elasticsearch API documentation}

lib/api/api/connector.d.ts

@@ -91,15 +91,15 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Checks in a connector sync job (refreshes 'last_seen').
* Check in a connector sync job. Check in a connector sync job and set the `last_seen` field to the current time before updating it in the internal index. To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/check-in-connector-sync-job-api.html | Elasticsearch API documentation}
*/
syncJobCheckIn(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
syncJobCheckIn(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
syncJobCheckIn(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
syncJobCheckIn(this: That, params: T.ConnectorSyncJobCheckInRequest | TB.ConnectorSyncJobCheckInRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ConnectorSyncJobCheckInResponse>;
syncJobCheckIn(this: That, params: T.ConnectorSyncJobCheckInRequest | TB.ConnectorSyncJobCheckInRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.ConnectorSyncJobCheckInResponse, unknown>>;
syncJobCheckIn(this: That, params: T.ConnectorSyncJobCheckInRequest | TB.ConnectorSyncJobCheckInRequest, options?: TransportRequestOptions): Promise<T.ConnectorSyncJobCheckInResponse>;
/**
* Claims a connector sync job.
* Claim a connector sync job. This action updates the job status to `in_progress` and sets the `last_seen` and `started_at` timestamps to the current time. Additionally, it can set the `sync_cursor` property for the sync job. This API is not intended for direct connector management by users. It supports the implementation of services that utilize the connector protocol to communicate with Elasticsearch. To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/claim-connector-sync-job-api.html | Elasticsearch API documentation}
*/
syncJobClaim(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
syncJobClaim(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
syncJobClaim(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
syncJobClaim(this: That, params: T.ConnectorSyncJobClaimRequest | TB.ConnectorSyncJobClaimRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ConnectorSyncJobClaimResponse>;
syncJobClaim(this: That, params: T.ConnectorSyncJobClaimRequest | TB.ConnectorSyncJobClaimRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.ConnectorSyncJobClaimResponse, unknown>>;
syncJobClaim(this: That, params: T.ConnectorSyncJobClaimRequest | TB.ConnectorSyncJobClaimRequest, options?: TransportRequestOptions): Promise<T.ConnectorSyncJobClaimResponse>;
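For illustration, a sketch of the check-in and claim calls under the signatures above; the sync job id and hostname are placeholders, and these APIs are normally driven by the connector service rather than called directly:

```ts
// Refresh `last_seen` for a running sync job.
await client.connector.syncJobCheckIn({ connector_sync_job_id: 'my-sync-job-id' })

// Claim a pending sync job for a (hypothetical) worker host; this moves the
// job to `in_progress` and stamps `last_seen` and `started_at`.
await client.connector.syncJobClaim({
  connector_sync_job_id: 'my-sync-job-id',
  worker_hostname: 'connector-worker-1'
})
```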
/**

@@ -113,8 +113,8 @@ * Delete a connector sync job. Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

/**
* Sets an error for a connector sync job.
* Set a connector sync job error. Set the `error` field for a connector sync job and set its `status` to `error`. To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/set-connector-sync-job-error-api.html | Elasticsearch API documentation}
*/
syncJobError(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
syncJobError(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
syncJobError(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
syncJobError(this: That, params: T.ConnectorSyncJobErrorRequest | TB.ConnectorSyncJobErrorRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ConnectorSyncJobErrorResponse>;
syncJobError(this: That, params: T.ConnectorSyncJobErrorRequest | TB.ConnectorSyncJobErrorRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.ConnectorSyncJobErrorResponse, unknown>>;
syncJobError(this: That, params: T.ConnectorSyncJobErrorRequest | TB.ConnectorSyncJobErrorRequest, options?: TransportRequestOptions): Promise<T.ConnectorSyncJobErrorResponse>;
/**

@@ -142,8 +142,8 @@ * Get a connector sync job.

/**
* Updates the stats fields in the connector sync job document.
* Set the connector sync job stats. Stats include: `deleted_document_count`, `indexed_document_count`, `indexed_document_volume`, and `total_document_count`. You can also update `last_seen`. This API is mainly used by the connector service for updating sync job information. To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/set-connector-sync-job-stats-api.html | Elasticsearch API documentation}
*/
syncJobUpdateStats(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
syncJobUpdateStats(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
syncJobUpdateStats(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
syncJobUpdateStats(this: That, params: T.ConnectorSyncJobUpdateStatsRequest | TB.ConnectorSyncJobUpdateStatsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ConnectorSyncJobUpdateStatsResponse>;
syncJobUpdateStats(this: That, params: T.ConnectorSyncJobUpdateStatsRequest | TB.ConnectorSyncJobUpdateStatsRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.ConnectorSyncJobUpdateStatsResponse, unknown>>;
syncJobUpdateStats(this: That, params: T.ConnectorSyncJobUpdateStatsRequest | TB.ConnectorSyncJobUpdateStatsRequest, options?: TransportRequestOptions): Promise<T.ConnectorSyncJobUpdateStatsResponse>;
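A sketch of reporting a failure and progress counters for the same (placeholder) sync job, matching the accepted body fields shown in the compiled output further down in this diff; all values are illustrative:

```ts
// Record a failure: sets the `error` field and flips `status` to `error`.
await client.connector.syncJobError({
  connector_sync_job_id: 'my-sync-job-id',
  error: 'source system unreachable'
})

// Report progress counters for the job.
await client.connector.syncJobUpdateStats({
  connector_sync_job_id: 'my-sync-job-id',
  deleted_document_count: 0,
  indexed_document_count: 1250,
  indexed_document_volume: 52428800 // bytes of indexed document payloads
})
```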
/**

@@ -178,8 +178,8 @@ * Activate the connector draft filter. Activates the valid draft filtering for a connector.

/**
* Updates the connector features in the connector document.
* Update the connector features. Update the connector features in the connector document. This API can be used to control the following aspects of a connector: * document-level security * incremental syncs * advanced sync rules * basic sync rules. Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior. To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/update-connector-features-api.html | Elasticsearch API documentation}
*/
updateFeatures(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
updateFeatures(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
updateFeatures(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
updateFeatures(this: That, params: T.ConnectorUpdateFeaturesRequest | TB.ConnectorUpdateFeaturesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ConnectorUpdateFeaturesResponse>;
updateFeatures(this: That, params: T.ConnectorUpdateFeaturesRequest | TB.ConnectorUpdateFeaturesRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.ConnectorUpdateFeaturesResponse, unknown>>;
updateFeatures(this: That, params: T.ConnectorUpdateFeaturesRequest | TB.ConnectorUpdateFeaturesRequest, options?: TransportRequestOptions): Promise<T.ConnectorUpdateFeaturesResponse>;
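A hedged sketch of toggling connector features via the `features` body field this method accepts; the connector id and the exact feature keys shown are illustrative:

```ts
await client.connector.updateFeatures({
  connector_id: 'my-connector-id',
  features: {
    // Feature keys are illustrative; each takes an `{ enabled }` flag.
    document_level_security: { enabled: true },
    incremental_sync: { enabled: false }
  }
})
```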
/**

@@ -186,0 +186,0 @@ * Update the connector filtering. Update the draft filtering configuration of a connector and marks the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.

@@ -355,3 +355,2 @@ "use strict";

const body = undefined;
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {

@@ -362,2 +361,3 @@ if (acceptedPath.includes(key)) {

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -378,10 +378,24 @@ }

const acceptedPath = ['connector_sync_job_id'];
const acceptedBody = ['sync_cursor', 'worker_hostname'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -425,10 +439,24 @@ }

const acceptedPath = ['connector_sync_job_id'];
const acceptedBody = ['error'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -527,10 +555,24 @@ }

const acceptedPath = ['connector_sync_job_id'];
const acceptedBody = ['deleted_document_count', 'indexed_document_count', 'indexed_document_volume', 'last_seen', 'metadata', 'total_document_count'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -685,10 +727,24 @@ }

const acceptedPath = ['connector_id'];
const acceptedBody = ['features'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -695,0 +751,0 @@ }

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Count search results. Get the number of documents matching a query.
* Count search results. Get the number of documents matching a query. The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body. The query is optional. When no query is provided, the API uses `match_all` to count all the documents. The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices. The operation is broadcast across all shards. For each shard ID group, a replica is chosen and the search is run against it. This means that replicas increase the scalability of the count.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-count.html | Elasticsearch API documentation}

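A minimal sketch (placeholder index and query), reusing the `client` from the first sketch:

```ts
// Omitting `query` counts all documents via `match_all`.
const { count } = await client.count({
  index: 'my-index',
  query: { match: { 'user.id': 'kimchy' } }
})
console.log(count)
```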
@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Index a document. Adds a JSON document to the specified data stream or index and makes it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.
* Create a new document in the index. You can index a new JSON document with the `/<target>/_doc/` or `/<target>/_create/<_id>` APIs. Using `_create` guarantees that the document is indexed only if it does not already exist. It returns a 409 response when a document with the same ID already exists in the index. To update an existing document, you must use the `/<target>/_doc/` API. If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias: * To add a document using the `PUT /<target>/_create/<_id>` or `POST /<target>/_create/<_id>` request formats, you must have the `create_doc`, `create`, `index`, or `write` index privilege. * To automatically create a data stream or index with this API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege. Automatic data stream creation requires a matching index template with data stream enabled. **Automatically create data streams and indices** If the request's target doesn't exist and matches an index template with a `data_stream` definition, the index operation automatically creates the data stream. If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates. NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation. If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed. Automatic index creation is controlled by the `action.auto_create_index` setting. If it is `true`, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to `false` to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow or prefix each pattern with `+` or `-` to indicate whether it should be allowed or blocked. When a list is specified, the default behaviour is to disallow. NOTE: The `action.auto_create_index` setting affects the automatic creation of indices only. It does not affect the creation of data streams. **Routing** By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the `routing` parameter. When setting up explicit mapping, you can also use the `_routing` field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the `_routing` mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted. NOTE: Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template. **Distributed** The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.
**Active shards** To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say `wait_for_active_shards` is `1`). This default can be overridden in the index settings dynamically by setting `index.write.wait_for_active_shards`. To alter this behavior per operation, use the `wait_for_active_shards` request parameter. Valid values are `all` or any positive integer up to the total number of configured copies per shard in the index (which is `number_of_replicas`+1). Specifying a negative value or a number greater than the number of shard copies will throw an error. For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If `wait_for_active_shards` is set on the request to `3` (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set `wait_for_active_shards` to `all` (or to `4`, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard. It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The `_shards` section of the API response reveals the number of shard copies on which replication succeeded and failed.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-index_.html | Elasticsearch API documentation}

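A minimal sketch of the `_create` behavior described above (index name, id, and document are placeholders):

```ts
// Fails with a 409 if a document with `_id` 1 already exists in `my-index`.
await client.create({
  index: 'my-index',
  id: '1',
  document: { '@timestamp': new Date().toISOString(), message: 'hello' }
})
```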
@@ -11,0 +11,0 @@ */

@@ -12,3 +12,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Delete a dangling index. If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than `cluster.indices.tombstones.size` indices while an Elasticsearch node is offline.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-gateway-dangling-indices.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/dangling-index-delete.html | Elasticsearch API documentation}
*/

@@ -20,3 +20,3 @@ deleteDanglingIndex(this: That, params: T.DanglingIndicesDeleteDanglingIndexRequest | TB.DanglingIndicesDeleteDanglingIndexRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.DanglingIndicesDeleteDanglingIndexResponse>;

* Import a dangling index. If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than `cluster.indices.tombstones.size` indices while an Elasticsearch node is offline.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-gateway-dangling-indices.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/dangling-index-import.html | Elasticsearch API documentation}
*/

@@ -28,3 +28,3 @@ importDanglingIndex(this: That, params: T.DanglingIndicesImportDanglingIndexRequest | TB.DanglingIndicesImportDanglingIndexRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.DanglingIndicesImportDanglingIndexResponse>;

* Get the dangling indices. If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than `cluster.indices.tombstones.size` indices while an Elasticsearch node is offline. Use this API to list dangling indices, which you can then import or delete.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-gateway-dangling-indices.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/dangling-indices-list.html | Elasticsearch API documentation}
*/

@@ -31,0 +31,0 @@ listDanglingIndices(this: That, params?: T.DanglingIndicesListDanglingIndicesRequest | TB.DanglingIndicesListDanglingIndicesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.DanglingIndicesListDanglingIndicesResponse>;

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Throttle a delete by query operation. Change the number of requests per second for a particular delete by query operation. Rethrottling that speeds up the query takes effect immediately but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-delete-by-query.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-delete-by-query.html#docs-delete-by-query-rethrottle | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function DeleteByQueryRethrottleApi(this: That, params: T.DeleteByQueryRethrottleRequest | TB.DeleteByQueryRethrottleRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.DeleteByQueryRethrottleResponse>;

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Delete documents. Deletes documents that match the specified query.
* Delete documents. Deletes documents that match the specified query. If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias: * `read` * `delete` or `write` You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails. NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number. While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick; they are not rolled back. You can opt to count version conflicts instead of halting and returning by setting `conflicts` to `proceed`. Note that if you opt to count version conflicts the operation could attempt to delete more documents from the source than `max_docs` until it has successfully deleted `max_docs` documents, or it has gone through every document in the source query. **Throttling delete requests** To control the rate at which delete by query issues batches of delete operations, you can set `requests_per_second` to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling. Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`: ``` target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds ``` Since the batch is issued as a single `_bulk` request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth". **Slicing** Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts. Setting `slices` to `auto` lets Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards. Adding slices to the delete by query operation creates sub-requests which means it has some quirks: * You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices. 
* Fetching the status of the task for the request with slices only contains the status of completed slices. * These sub-requests are individually addressable for things like cancellation and rethrottling. * Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally. * Canceling the request with `slices` will cancel each sub-request. * Due to the nature of `slices` each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. * Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being deleted. * Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time. If you're slicing manually or otherwise tuning automatic slicing, keep in mind that: * Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. Setting `slices` higher than the number of shards generally does not improve efficiency and adds overhead. * Delete performance scales linearly across available resources with the number of slices. Whether query or delete performance dominates the runtime depends on the documents being deleted and cluster resources. **Cancel a delete by query operation** Any delete by query can be canceled using the task cancel API. For example: ``` POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel ``` The task ID can be found by using the get tasks API. Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-delete-by-query.html | Elasticsearch API documentation}

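A sketch of the conflict, slicing, and throttling options discussed above (index and query are placeholders):

```ts
// Count version conflicts instead of aborting, let Elasticsearch choose the
// slice count, and throttle to 500 requests per second (-1 would disable it).
const result = await client.deleteByQuery({
  index: 'my-index',
  conflicts: 'proceed',
  slices: 'auto',
  requests_per_second: 500,
  query: { match: { 'user.id': 'elkbee' } }
})
console.log(result.deleted, result.version_conflicts)
```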
@@ -11,0 +11,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Delete a script or search template. Deletes a stored script or search template.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-scripting.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-stored-script-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function DeleteScriptApi(this: That, params: T.DeleteScriptRequest | TB.DeleteScriptRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.DeleteScriptResponse>;

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Delete a document. Removes a JSON document from the specified index.
* Delete a document. Remove a JSON document from the specified index. NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document. **Optimistic concurrency control** Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the `if_seq_no` and `if_primary_term` parameters. If a mismatch is detected, the operation will result in a `VersionConflictException` and a status code of `409`. **Versioning** Each document indexed is versioned. When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime. Every write operation run on a document, deletes included, causes its version to be incremented. The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations. The length of time for which a deleted document's version remains available is determined by the `index.gc_deletes` index setting. **Routing** If routing is used during indexing, the routing value also needs to be specified to delete a document. If the `_routing` mapping is set to `required` and no routing value is specified, the delete API throws a `RoutingMissingException` and rejects the request. For example: ``` DELETE /my-index-000001/_doc/1?routing=shard-1 ``` This request deletes the document with ID 1, but it is routed based on the user. The document is not deleted if the correct routing is not specified. **Distributed** The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-delete.html | Elasticsearch API documentation}

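A minimal sketch combining the routing and optimistic-concurrency options described above; the sequence number and primary term values are illustrative:

```ts
// Routing must match the value used at index time.
await client.delete({
  index: 'my-index-000001',
  id: '1',
  routing: 'shard-1',
  if_seq_no: 362,
  if_primary_term: 2
})
```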
@@ -11,0 +11,0 @@ */

@@ -11,15 +11,22 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Executes an ESQL request asynchronously
* Run an async ES|QL query. Asynchronously run an ES|QL (Elasticsearch query language) query, monitor its progress, and retrieve results when they become available. The API accepts the same parameters and request body as the synchronous query API, along with additional async related properties.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/esql-async-query-api.html | Elasticsearch API documentation}
*/
asyncQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
asyncQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
asyncQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
asyncQuery(this: That, params: T.EsqlAsyncQueryRequest | TB.EsqlAsyncQueryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.EsqlAsyncQueryResponse>;
asyncQuery(this: That, params: T.EsqlAsyncQueryRequest | TB.EsqlAsyncQueryRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.EsqlAsyncQueryResponse, unknown>>;
asyncQuery(this: That, params: T.EsqlAsyncQueryRequest | TB.EsqlAsyncQueryRequest, options?: TransportRequestOptions): Promise<T.EsqlAsyncQueryResponse>;
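A sketch of submitting an async ES|QL query; the query text is a placeholder, and `wait_for_completion_timeout` is assumed to pass through as a query-string parameter, as with the other non-body keys in the compiled request builder below:

```ts
// Returns results directly if the query finishes within the timeout,
// otherwise an `id` to poll with `asyncQueryGet`.
const submitted = await client.esql.asyncQuery({
  query: 'FROM my-index | STATS count = COUNT(*) BY user.id | LIMIT 10',
  wait_for_completion_timeout: '2s'
})
```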
/**
* Retrieves the results of a previously submitted async query request given its ID.
* Delete an async ES|QL query. If the query is still running, it is cancelled. Otherwise, the stored results are deleted. If the Elasticsearch security features are enabled, only the following users can use this API to delete a query: * The authenticated user that submitted the original query request * Users with the `cancel_task` cluster privilege
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/esql-async-query-delete-api.html | Elasticsearch API documentation}
*/
asyncQueryDelete(this: That, params: T.EsqlAsyncQueryDeleteRequest | TB.EsqlAsyncQueryDeleteRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.EsqlAsyncQueryDeleteResponse>;
asyncQueryDelete(this: That, params: T.EsqlAsyncQueryDeleteRequest | TB.EsqlAsyncQueryDeleteRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.EsqlAsyncQueryDeleteResponse, unknown>>;
asyncQueryDelete(this: That, params: T.EsqlAsyncQueryDeleteRequest | TB.EsqlAsyncQueryDeleteRequest, options?: TransportRequestOptions): Promise<T.EsqlAsyncQueryDeleteResponse>;
/**
* Get async ES|QL query results. Get the current status and available results or stored results for an ES|QL asynchronous query. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can retrieve the results using this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/esql-async-query-get-api.html | Elasticsearch API documentation}
*/
asyncQueryGet(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
asyncQueryGet(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
asyncQueryGet(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
asyncQueryGet(this: That, params: T.EsqlAsyncQueryGetRequest | TB.EsqlAsyncQueryGetRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.EsqlAsyncQueryGetResponse>;
asyncQueryGet(this: That, params: T.EsqlAsyncQueryGetRequest | TB.EsqlAsyncQueryGetRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.EsqlAsyncQueryGetResponse, unknown>>;
asyncQueryGet(this: That, params: T.EsqlAsyncQueryGetRequest | TB.EsqlAsyncQueryGetRequest, options?: TransportRequestOptions): Promise<T.EsqlAsyncQueryGetResponse>;
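Continuing the sketch, polling for results and then discarding them (the id is a placeholder for the value returned by `asyncQuery`):

```ts
const asyncId = 'my-async-query-id' // placeholder: the `id` returned by asyncQuery

// Poll for status/results, then delete the stored results when finished.
const results = await client.esql.asyncQueryGet({ id: asyncId })
await client.esql.asyncQueryDelete({ id: asyncId })
```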
/**

@@ -26,0 +33,0 @@ * Run an ES|QL query. Get search results for an ES|QL (Elasticsearch query language) query.

@@ -33,10 +33,24 @@ "use strict";

const acceptedPath = [];
const acceptedBody = ['columnar', 'filter', 'locale', 'params', 'profile', 'query', 'tables', 'include_ccs_metadata'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -52,2 +66,25 @@ }

}
async asyncQueryDelete(params, options) {
const acceptedPath = ['id'];
const querystring = {};
const body = undefined;
for (const key in params) {
if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];
}
}
const method = 'DELETE';
const path = `/_query/async/${encodeURIComponent(params.id.toString())}`;
const meta = {
name: 'esql.async_query_delete',
pathParts: {
id: params.id
}
};
return await this.transport.request({ path, method, querystring, body, meta }, options);
}
async asyncQueryGet(params, options) {

@@ -57,3 +94,2 @@ const acceptedPath = ['id'];

const body = undefined;
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {

@@ -64,2 +100,3 @@ if (acceptedPath.includes(key)) {

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -80,3 +117,3 @@ }

const acceptedPath = [];
const acceptedBody = ['columnar', 'filter', 'locale', 'params', 'profile', 'query', 'tables'];
const acceptedBody = ['columnar', 'filter', 'locale', 'params', 'profile', 'query', 'tables', 'include_ccs_metadata'];
const querystring = {};

@@ -83,0 +120,0 @@ // @ts-expect-error

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Check for a document source. Checks if a document's `_source` is stored.
* Check for a document source. Check whether a document source exists in an index. For example: ``` HEAD my-index-000001/_source/1 ``` A document's source is not available if it is disabled in the mapping.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-get.html | Elasticsearch API documentation}

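The client equivalent of the `HEAD` request above, as a one-line sketch (index and id are placeholders):

```ts
// Resolves to true only if the document exists and its `_source` is stored.
const hasSource = await client.existsSource({ index: 'my-index-000001', id: '1' })
```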
@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Check a document. Checks if a specified document exists.
* Check a document. Verify that a document exists. For example, check to see if a document with the `_id` 0 exists: ``` HEAD my-index-000001/_doc/0 ``` If the document exists, the API returns a status code of `200 - OK`. If the document doesn’t exist, the API returns `404 - Not Found`. **Versioning support** You can use the `version` parameter to check the document only if its current version is equal to the specified one. Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-get.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Explain a document match result. Returns information about why a specific document matches, or doesn’t match, a query.
* Explain a document match result. Get information about why a specific document matches, or doesn't match, a query. It computes a score explanation for a query and a specific document.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-explain.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Gets a list of features which can be included in snapshots using the feature_states field when creating a snapshot
* Get the features. Get a list of features that can be included in snapshots using the `feature_states` field when creating a snapshot. You can use this API to determine which feature states to include when taking a snapshot. By default, all feature states are included in a snapshot if that snapshot includes the global state, or none if it does not. A feature state includes one or more system indices necessary for a given feature to function. In order to ensure data integrity, all system indices that comprise a feature state are snapshotted and restored together. The features listed by this API are a combination of built-in features and features defined by plugins. In order for a feature state to be listed in this API and recognized as a valid feature state by the create snapshot API, the plugin that defines that feature must be installed on the master node.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-features-api.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Resets the internal state of features, usually by deleting system indices
* Reset the features. Clear all of the state information stored in system indices by Elasticsearch features, including the security and machine learning indices. WARNING: Intended for development and testing use only. Do not reset features on a production cluster. Return a cluster to the same state as a new installation by resetting the feature state for all Elasticsearch features. This deletes all state information stored in system indices. The response code is HTTP 200 if the state is successfully reset for all features. It is HTTP 500 if the reset operation failed for any feature. Note that select features might provide a way to reset particular system indices. Using this API resets all features, both those that are built-in and implemented as plugins. To list the features that will be affected, use the get features API. IMPORTANT: The features installed on the node you submit this request to are the features that will be reset. Run on the master node if you have any doubts about which plugins are installed on individual nodes.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}

@@ -22,0 +22,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Get script contexts. Get a list of supported script contexts and their methods.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/painless/8.17/painless-contexts.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-script-contexts-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function GetScriptContextApi(this: That, params?: T.GetScriptContextRequest | TB.GetScriptContextRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.GetScriptContextResponse>;

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Get script languages. Get a list of available script types, languages, and contexts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-scripting.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-script-languages-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function GetScriptLanguagesApi(this: That, params?: T.GetScriptLanguagesRequest | TB.GetScriptLanguagesRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.GetScriptLanguagesResponse>;

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get a document's source. Returns the source of a document.
* Get a document's source. Get the source of a document. For example: ``` GET my-index-000001/_source/1 ``` You can use the source filtering parameters to control which parts of the `_source` are returned: ``` GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities ```
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-get.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get a document by its ID. Retrieves the document with the specified ID from an index.
* Get a document by its ID. Get a document and its source or stored fields from an index. By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search). In the case where stored fields are requested with the `stored_fields` parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields. To turn off realtime behavior, set the `realtime` parameter to false. **Source filtering** By default, the API returns the contents of the `_source` field unless you have used the `stored_fields` parameter or the `_source` field is turned off. You can turn off `_source` retrieval by using the `_source` parameter: ``` GET my-index-000001/_doc/0?_source=false ``` If you only need one or two fields from the `_source`, use the `_source_includes` or `_source_excludes` parameters to include or filter out particular fields. This can be helpful with large documents where partial retrieval can save on network overhead. Both parameters take a comma-separated list of fields or wildcard expressions. For example: ``` GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities ``` If you only want to specify includes, you can use a shorter notation: ``` GET my-index-000001/_doc/0?_source=*.id ``` **Routing** If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example: ``` GET my-index-000001/_doc/2?routing=user1 ``` This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified. **Distributed** The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be. **Versioning support** You can use the `version` parameter to retrieve the document only if its current version is equal to the specified one. Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-get.html | Elasticsearch API documentation}

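The source-filtering URL examples above translate to client parameters like this (index and id are placeholders):

```ts
// Fetch document 0, keeping only `*.id` fields and excluding `entities`.
const doc = await client.get({
  index: 'my-index-000001',
  id: '0',
  _source_includes: '*.id',
  _source_excludes: 'entities'
})
```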
@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Returns the health of the cluster.
* Get the cluster health. Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality. Each indicator has a health status of: green, unknown, yellow or red. The indicator will provide an explanation and metadata describing the reason for its current health status. The cluster’s status is controlled by the worst indicator status. In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system. Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem. NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/health-api.html | Elasticsearch API documentation}

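A minimal sketch of the cheap-polling pattern the note above recommends:

```ts
// For automated polling, disable the expensive root-cause analysis.
const health = await client.healthReport({ verbose: false })
console.log(health.status) // 'green' | 'yellow' | 'red' | 'unknown'
```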
@@ -11,0 +11,0 @@ */

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes the specified lifecycle policy definition. You cannot delete policies that are currently in use. If the policy is being used to manage any indices, the request fails and returns an error.
* Delete a lifecycle policy. You cannot delete policies that are currently in use. If the policy is being used to manage any indices, the request fails and returns an error.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-delete-lifecycle.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Retrieves information about the index’s current lifecycle state, such as the currently executing phase, action, and step. Shows when the index entered each one, the definition of the running phase, and information about any failures.
* Explain the lifecycle state. Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream's backing indices. The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-explain-lifecycle.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Retrieves a lifecycle policy.
* Get lifecycle policies.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-get-lifecycle.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Retrieves the current index lifecycle management (ILM) status.
* Get the ILM status. Get the current index lifecycle management status.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-get-status.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Switches the indices, ILM policies, and legacy, composable and component templates from using custom node attributes and attribute-based allocation filters to using data tiers, and optionally deletes one legacy index template.+ Using node roles enables ILM to automatically move the indices between data tiers.
* Migrate to data tiers routing. Switch the indices, ILM policies, and legacy, composable, and component templates from using custom node attributes and attribute-based allocation filters to using data tiers. Optionally, delete one legacy index template. Using node roles enables ILM to automatically move the indices between data tiers. Migrating away from custom node attribute routing can be performed manually. This API provides an automated way of performing three out of the four manual steps listed in the migration guide: 1. Stop setting the custom hot attribute on new indices. 2. Remove custom allocation settings from existing ILM policies. 3. Replace custom allocation settings from existing indices with the corresponding tier preference. ILM must be stopped before performing the migration. Use the stop ILM and get ILM status APIs to wait until the reported operation mode is `STOPPED`.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-migrate-to-data-tiers.html | Elasticsearch API documentation}

@@ -51,3 +51,3 @@ */

/**
* Manually moves an index into the specified step and executes that step.
* Move to a lifecycle step. Manually move an index into a specific step in the lifecycle policy and run that step. WARNING: This operation can result in the loss of data. Manually moving an index into a specific step runs that step even if it has already been performed. This is a potentially destructive action and this should be considered an expert-level API. You must specify both the current step and the step to be executed in the body of the request. The request will fail if the current step does not match the step currently running for the index. This is to prevent the index from being moved from an unexpected step into the next step. When specifying the target (`next_step`) to which the index will be moved, either the name or both the action and name fields are optional. If only the phase is specified, the index will move to the first step of the first action in the target phase. If the phase and action are specified, the index will move to the first step of the specified action in the specified phase. Only actions specified in the ILM policy are considered valid. An index cannot move to a step that is not part of its policy.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-move-to-step.html | Elasticsearch API documentation}

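A hedged sketch of the request shape described above; the step values are illustrative and must match what ILM actually reports for the index:

```ts
// Move `my-index` from its current step to the start of the warm phase;
// next_step may specify just the target phase.
await client.ilm.moveToStep({
  index: 'my-index',
  current_step: { phase: 'hot', action: 'complete', name: 'complete' },
  next_step: { phase: 'warm' }
})
```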
@@ -59,3 +59,3 @@ */

/**
* Creates a lifecycle policy. If the specified policy exists, the policy is replaced and the policy version is incremented.
* Create or update a lifecycle policy. If the specified policy exists, it is replaced and the policy version is incremented. NOTE: Only the latest version of the policy is stored; you cannot revert to previous versions.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-put-lifecycle.html | Elasticsearch API documentation}

@@ -67,3 +67,3 @@ */

/**
* Removes the assigned lifecycle policy and stops managing the specified index
* Remove policies from an index. Remove the assigned lifecycle policies from an index or a data stream's backing indices. It also stops managing the indices.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-remove-policy.html | Elasticsearch API documentation}

@@ -75,3 +75,3 @@ */

/**
* Retries executing the policy for an index that is in the ERROR step.
* Retry a policy. Retry running the lifecycle policy for an index that is in the ERROR step. The API sets the policy back to the step where the error occurred and runs the step. Use the explain lifecycle state API to determine whether an index is in the ERROR step.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-retry-policy.html | Elasticsearch API documentation}

@@ -83,3 +83,3 @@ */

/**
* Start the index lifecycle management (ILM) plugin.
* Start the ILM plugin. Start the index lifecycle management plugin if it is currently stopped. ILM is started automatically when the cluster is formed. Restarting ILM is necessary only when it has been stopped using the stop ILM API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-start.html | Elasticsearch API documentation}

@@ -91,3 +91,3 @@ */

/**
* Halts all lifecycle management operations and stops the index lifecycle management (ILM) plugin
* Stop the ILM plugin. Halt all lifecycle management operations and stop the index lifecycle management plugin. This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices. The API returns as soon as the stop request has been acknowledged, but the plugin might continue to run until in-progress operations complete and the plugin can be safely stopped. Use the get ILM status API to check whether ILM is running.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/ilm-stop.html | Elasticsearch API documentation}

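A sketch of the stop-wait-restart cycle described above for the start and stop APIs:

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.ilm.stop()
// The stop request is only acknowledged; poll until ILM reports STOPPED.
while ((await client.ilm.getStatus()).operation_mode !== 'STOPPED') {
  await new Promise((resolve) => setTimeout(resolve, 1000))
}
// ...perform maintenance, then restart ILM:
await client.ilm.start()
```
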
@@ -94,0 +94,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Index a document. Adds a JSON document to the specified data stream or index and makes it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.
* Create or update a document in an index. Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version. NOTE: You cannot use this API to send update requests for existing documents in a data stream. If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias: * To add or overwrite a document using the `PUT /<target>/_doc/<_id>` request format, you must have the `create`, `index`, or `write` index privilege. * To add a document using the `POST /<target>/_doc/` request format, you must have the `create_doc`, `create`, `index`, or `write` index privilege. * To automatically create a data stream or index with this API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege. Automatic data stream creation requires a matching index template with data stream enabled. NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set `wait_for_active_shards` to change this default behavior. **Automatically create data streams and indices** If the request's target doesn't exist and matches an index template with a `data_stream` definition, the index operation automatically creates the data stream. If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates. NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation. If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed. Automatic index creation is controlled by the `action.auto_create_index` setting. If it is `true`, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to `false` to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow or prefix each pattern with `+` or `-` to indicate whether it should be allowed or blocked. When a list is specified, the default behaviour is to disallow. NOTE: The `action.auto_create_index` setting affects the automatic creation of indices only. It does not affect the creation of data streams. **Optimistic concurrency control** Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the `if_seq_no` and `if_primary_term` parameters. If a mismatch is detected, the operation will result in a `VersionConflictException` and a status code of `409`. **Routing** By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the `routing` parameter. When setting up explicit mapping, you can also use the `_routing` field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. 
If the `_routing` mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted. NOTE: Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template. **Distributed** The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas. **Active shards** To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say `wait_for_active_shards` is `1`). This default can be overridden in the index settings dynamically by setting `index.write.wait_for_active_shards`. To alter this behavior per operation, use the `wait_for_active_shards` request parameter. Valid values are `all` or any positive integer up to the total number of configured copies per shard in the index (which is `number_of_replicas`+1). Specifying a negative value or a number greater than the number of shard copies will throw an error. For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If `wait_for_active_shards` is set on the request to `3` (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set `wait_for_active_shards` to `all` (or to `4`, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard. It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The `_shards` section of the API response reveals the number of shard copies on which replication succeeded and failed. **No operation (noop) updates** When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the `_update` API with `detect_noop` set to `true`. The `detect_noop` option isn't available on this API because it doesn’t fetch the old source and isn't able to compare it against the new source. 
There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates. **Versioning** Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, `version_type` should be set to `external`. The value provided must be a numeric, long value greater than or equal to 0, and less than around `9.2e+18`. NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks. When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example: ``` PUT my-index-000001/_doc/1?version=2&version_type=external { "user": { "id": "elkbee" } } ``` In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1. If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code). A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used. Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-index_.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

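A minimal indexing sketch matching the external-versioning example above (the index name and document are illustrative):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const resp = await client.index({
  index: 'my-index-000001',
  id: '1',
  version: 2,
  version_type: 'external',
  document: { user: { id: 'elkbee' } }
})
console.log(resp.result) // 'created' or 'updated'
```
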
@@ -32,3 +32,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Create an inference endpoint
* Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-inference-api.html | Elasticsearch API documentation}

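A hedged sketch of creating an endpoint; the service and settings shown are one possibility (an ELSER sparse-embedding endpoint), not the only valid shape:

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.inference.put({
  task_type: 'sparse_embedding',
  inference_id: 'my-elser-endpoint', // placeholder id
  inference_config: {
    service: 'elser',
    service_settings: { num_allocations: 1, num_threads: 1 }
  }
})
// Then poll the trained model stats for "fully_allocated" before use, as noted above.
```
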
@@ -40,9 +40,16 @@ */

/**
* Perform streaming inference
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/post-stream-inference-api.html | Elasticsearch API documentation}
* Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. This API requires the `monitor_inference` cluster privilege (the built-in `inference_admin` and `inference_user` roles grant this privilege). You must use a client that supports streaming.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/stream-inference-api.html | Elasticsearch API documentation}
*/
streamInference(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
streamInference(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
streamInference(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
streamInference(this: That, params: T.InferenceStreamInferenceRequest | TB.InferenceStreamInferenceRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferenceStreamInferenceResponse>;
streamInference(this: That, params: T.InferenceStreamInferenceRequest | TB.InferenceStreamInferenceRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.InferenceStreamInferenceResponse, unknown>>;
streamInference(this: That, params: T.InferenceStreamInferenceRequest | TB.InferenceStreamInferenceRequest, options?: TransportRequestOptions): Promise<T.InferenceStreamInferenceResponse>;
/**
* Update an inference endpoint. Modify `task_settings`, secrets (within `service_settings`), or `num_allocations` for an inference endpoint, depending on the specific endpoint service and `task_type`. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/update-inference-api.html | Elasticsearch API documentation}
*/
update(this: That, params: T.InferenceUpdateRequest | TB.InferenceUpdateRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferenceUpdateResponse>;
update(this: That, params: T.InferenceUpdateRequest | TB.InferenceUpdateRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.InferenceUpdateResponse, unknown>>;
update(this: That, params: T.InferenceUpdateRequest | TB.InferenceUpdateRequest, options?: TransportRequestOptions): Promise<T.InferenceUpdateResponse>;
}
export {};

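Given the `update` method added above, a hedged usage sketch (the endpoint id and settings are placeholders; the exact required fields depend on the service):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.inference.update({
  inference_id: 'my-elser-endpoint',
  inference_config: {
    service: 'elser',
    service_settings: { num_allocations: 2, num_threads: 1 }
  }
})
```
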
@@ -187,10 +187,24 @@ "use strict";

const acceptedPath = ['inference_id', 'task_type'];
const acceptedBody = ['input'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -218,4 +232,43 @@ }

}
async update(params, options) {
var _a;
const acceptedPath = ['inference_id', 'task_type'];
const acceptedBody = ['inference_config'];
const querystring = {};
// @ts-expect-error
let body = (_a = params.body) !== null && _a !== void 0 ? _a : undefined;
for (const key in params) {
if (acceptedBody.includes(key)) {
// @ts-expect-error
body = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];
}
}
let method = '';
let path = '';
if (params.task_type != null && params.inference_id != null) {
method = 'POST';
path = `/_inference/${encodeURIComponent(params.task_type.toString())}/${encodeURIComponent(params.inference_id.toString())}/_update`;
}
else {
method = 'POST';
path = `/_inference/${encodeURIComponent(params.inference_id.toString())}/_update`;
}
const meta = {
name: 'inference.update',
pathParts: {
inference_id: params.inference_id,
task_type: params.task_type
}
};
return await this.transport.request({ path, method, querystring, body, meta }, options);
}
}
exports.default = Inference;
//# sourceMappingURL=inference.js.map

@@ -8,4 +8,4 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get cluster info. Returns basic information about the cluster.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/index.html | Elasticsearch API documentation}
* Get cluster info. Get basic build, version, and cluster information.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rest-api-root.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function InfoApi(this: That, params?: T.InfoRequest | TB.InfoRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InfoResponse>;

@@ -18,8 +18,8 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes an ip location database configuration
* Delete IP geolocation database configurations.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-ip-location-database-api.html | Elasticsearch API documentation}
*/
deleteIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
deleteIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
deleteIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
deleteIpLocationDatabase(this: That, params: T.IngestDeleteIpLocationDatabaseRequest | TB.IngestDeleteIpLocationDatabaseRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IngestDeleteIpLocationDatabaseResponse>;
deleteIpLocationDatabase(this: That, params: T.IngestDeleteIpLocationDatabaseRequest | TB.IngestDeleteIpLocationDatabaseRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.IngestDeleteIpLocationDatabaseResponse, unknown>>;
deleteIpLocationDatabase(this: That, params: T.IngestDeleteIpLocationDatabaseRequest | TB.IngestDeleteIpLocationDatabaseRequest, options?: TransportRequestOptions): Promise<T.IngestDeleteIpLocationDatabaseResponse>;
/**

@@ -47,8 +47,8 @@ * Delete pipelines. Delete one or more ingest pipelines.

/**
* Returns the specified ip location database configuration
* Get IP geolocation database configurations.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-ip-location-database-api.html | Elasticsearch API documentation}
*/
getIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
getIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
getIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
getIpLocationDatabase(this: That, params?: T.IngestGetIpLocationDatabaseRequest | TB.IngestGetIpLocationDatabaseRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IngestGetIpLocationDatabaseResponse>;
getIpLocationDatabase(this: That, params?: T.IngestGetIpLocationDatabaseRequest | TB.IngestGetIpLocationDatabaseRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.IngestGetIpLocationDatabaseResponse, unknown>>;
getIpLocationDatabase(this: That, params?: T.IngestGetIpLocationDatabaseRequest | TB.IngestGetIpLocationDatabaseRequest, options?: TransportRequestOptions): Promise<T.IngestGetIpLocationDatabaseResponse>;
/**

@@ -69,3 +69,3 @@ * Get pipelines. Get information about one or more ingest pipelines. This API returns a local reference of the pipeline.

/**
* Create or update GeoIP database configurations. Create or update IP geolocation database configurations.
* Create or update a GeoIP database configuration. Refer to the create or update IP geolocation database configuration API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-geoip-database-api.html | Elasticsearch API documentation}

@@ -77,8 +77,8 @@ */

/**
* Puts the configuration for a ip location database to be downloaded
* Create or update an IP geolocation database configuration.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-ip-location-database-api.html | Elasticsearch API documentation}
*/
putIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
putIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
putIpLocationDatabase(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
putIpLocationDatabase(this: That, params: T.IngestPutIpLocationDatabaseRequest | TB.IngestPutIpLocationDatabaseRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IngestPutIpLocationDatabaseResponse>;
putIpLocationDatabase(this: That, params: T.IngestPutIpLocationDatabaseRequest | TB.IngestPutIpLocationDatabaseRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.IngestPutIpLocationDatabaseResponse, unknown>>;
putIpLocationDatabase(this: That, params: T.IngestPutIpLocationDatabaseRequest | TB.IngestPutIpLocationDatabaseRequest, options?: TransportRequestOptions): Promise<T.IngestPutIpLocationDatabaseResponse>;
/**

@@ -85,0 +85,0 @@ * Create or update a pipeline. Changes made using this API take effect immediately.

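A hedged sketch of the newly typed IP location database methods; the database name and MaxMind account id are placeholders (the compiled implementation further below shows the accepted `id` path part and `configuration` body):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.ingest.putIpLocationDatabase({
  id: 'my-ip-database',
  configuration: { name: 'GeoIP2-City', maxmind: { account_id: '1234567' } }
})
const dbs = await client.ingest.getIpLocationDatabase()
await client.ingest.deleteIpLocationDatabase({ id: 'my-ip-database' })
```
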
@@ -58,3 +58,2 @@ "use strict";

const body = undefined;
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {

@@ -65,2 +64,3 @@ if (acceptedPath.includes(key)) {

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -165,2 +165,3 @@ }

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -278,11 +279,18 @@ }

async putIpLocationDatabase(params, options) {
var _a;
const acceptedPath = ['id'];
const acceptedBody = ['configuration'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
let body = (_a = params.body) !== null && _a !== void 0 ? _a : undefined;
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
// @ts-expect-error
body = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -289,0 +297,0 @@ }

@@ -8,4 +8,4 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Run a knn search. NOTE: The kNN search API has been replaced by the `knn` option in the search API. Perform a k-nearest neighbor (kNN) search on a dense_vector field and return the matching documents. Given a query vector, the API finds the k closest vectors and returns those documents as search hits. Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors. The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-search.html | Elasticsearch API documentation}
* Run a knn search. NOTE: The kNN search API has been replaced by the `knn` option in the search API. Perform a k-nearest neighbor (kNN) search on a dense_vector field and return the matching documents. Given a query vector, the API finds the k closest vectors and returns those documents as search hits. Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors. The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query. A kNN search response has the exact same structure as a search API response. However, certain sections have a meaning specific to kNN search: * The document `_score` is determined by the similarity between the query and document vector. * The `hits.total` object contains the total number of nearest neighbor candidates considered, which is `num_candidates * num_shards`. The `hits.total.relation` will always be `eq`, indicating an exact value.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/knn-search-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function KnnSearchApi<TDocument = unknown>(this: That, params: T.KnnSearchRequest | TB.KnnSearchRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.KnnSearchResponse<TDocument>>;

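Since this API has been replaced by the `knn` option in the search API, a brief sketch only (the field and vector are illustrative):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const resp = await client.knnSearch({
  index: 'my-vectors',
  knn: { field: 'embedding', query_vector: [0.12, -0.53, 0.91], k: 10, num_candidates: 100 }
})
// hits.total counts candidates considered (num_candidates * num_shards), as noted above.
console.log(resp.hits.hits[0]?._score)
```
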
@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes licensing information for the cluster
* Delete the license. When the license expires, your subscription level reverts to Basic. If the operator privileges feature is enabled, only operator users can use this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-license.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Get license information. Returns information about your Elastic license, including its type, its status, when it was issued, and when it expires. For more information about the different types of licenses, refer to [Elastic Stack subscriptions](https://www.elastic.co/subscriptions).
* Get license information. Get information about your Elastic license including its type, its status, when it was issued, and when it expires. NOTE: If the master node is generating a new cluster state, the get license API may return a `404 Not Found` response. If you receive an unexpected 404 response after cluster startup, wait a short period and retry the request.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-license.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Retrieves information about the status of the basic license.
* Get the basic license status.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-basic-status.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Retrieves information about the status of the trial license.
* Get the trial status.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-trial-status.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Updates the license for the cluster.
* Update the license. You can update your license at runtime without shutting down your nodes. License updates take effect immediately. If the license you are installing does not support all of the features that were available with your previous license, however, you are notified in the response. You must then re-submit the API request with the `acknowledge` parameter set to `true`. NOTE: If Elasticsearch security features are enabled and you are installing a gold or higher license, you must enable TLS on the transport networking layer before you install the license. If the operator privileges feature is enabled, only operator users can use this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/update-license.html | Elasticsearch API documentation}

@@ -51,3 +51,3 @@ */

/**
* The start basic API enables you to initiate an indefinite basic license, which gives access to all the basic features. If the basic license does not support all of the features that are available with your current license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true. To check the status of your basic license, use the following API: [Get basic status](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-basic-status.html).
* Start a basic license. Start an indefinite basic license, which gives access to all the basic features. NOTE: In order to start a basic license, you must not currently have a basic license. If the basic license does not support all of the features that are available with your current license, however, you are notified in the response. You must then re-submit the API request with the `acknowledge` parameter set to `true`. To check the status of your basic license, use the get basic license API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/start-basic.html | Elasticsearch API documentation}

@@ -59,3 +59,3 @@ */

/**
* The start trial API enables you to start a 30-day trial, which gives access to all subscription features.
* Start a trial. Start a 30-day trial, which gives access to all subscription features. NOTE: You are allowed to start a trial only if your cluster has not already activated a trial for the current major product version. For example, if you have already activated a trial for v8.0, you cannot start a new trial until v9.0. You can, however, request an extended trial at https://www.elastic.co/trialextension. To check the status of your trial, use the get trial status API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/start-trial.html | Elasticsearch API documentation}

@@ -62,0 +62,0 @@ */

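A short sketch covering the license APIs described above, including the `acknowledge` handling from the notes:

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const lic = await client.license.get()
console.log(lic.license.type, lic.license.status)

const basic = await client.license.postStartBasic({ acknowledge: true })
console.log(basic.basic_was_started)

const trial = await client.license.postStartTrial({ acknowledge: true })
console.log(trial.trial_was_started)
```
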
@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes a pipeline used for Logstash Central Management.
* Delete a Logstash pipeline. Delete a pipeline that is used for Logstash Central Management. If the request succeeds, you receive an empty response with an appropriate status code.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/logstash-api-delete-pipeline.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Retrieves pipelines used for Logstash Central Management.
* Get Logstash pipelines. Get pipelines that are used for Logstash Central Management.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/logstash-api-get-pipeline.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Creates or updates a pipeline used for Logstash Central Management.
* Create or update a Logstash pipeline. Create a pipeline that is used for Logstash Central Management. If the specified pipeline exists, it is replaced.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/logstash-api-put-pipeline.html | Elasticsearch API documentation}

@@ -30,0 +30,0 @@ */

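A hedged pipeline round trip; the pipeline body fields follow the client's `LogstashPipeline` type and all values here are placeholders:

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.logstash.putPipeline({
  id: 'my-pipeline',
  pipeline: {
    description: 'Sample pipeline',
    last_modified: new Date().toISOString(),
    pipeline: 'input { stdin {} } output { stdout {} }',
    pipeline_metadata: { type: 'logstash_pipeline', version: '1' },
    pipeline_settings: {
      'pipeline.workers': 1,
      'pipeline.batch.size': 125,
      'pipeline.batch.delay': 50,
      'queue.type': 'memory',
      'queue.max_bytes.number': 1,
      'queue.max_bytes.units': 'gb',
      'queue.checkpoint.writes': 1024
    },
    username: 'elastic'
  }
})
const pipelines = await client.logstash.getPipeline({ id: 'my-pipeline' })
await client.logstash.deletePipeline({ id: 'my-pipeline' })
```
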
@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get multiple documents. Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.
* Get multiple documents. Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail. **Filter source fields** By default, the `_source` field is returned for every document (if stored). Use the `_source` and `_source_includes` or `_source_excludes` attributes to filter what fields are returned for a particular document. You can include the `_source`, `_source_includes`, and `_source_excludes` query parameters in the request URI to specify the defaults to use when there are no per-document instructions. **Get stored fields** Use the `stored_fields` attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the `stored_fields` query parameter in the request URI to specify the defaults to use when there are no per-document instructions.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-multi-get.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

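A sketch of per-document source filtering and stored fields, as described above (index and field names are placeholders):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const resp = await client.mget({
  docs: [
    { _index: 'my-index', _id: '1', _source: { includes: ['title'] } },
    { _index: 'my-other-index', _id: '2', stored_fields: ['tags'] }
  ]
})
console.log(resp.docs.length)
```
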
@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Retrieves information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.
* Get deprecation information. Get information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version. TIP: This API is designed for indirect use by the Upgrade Assistant. We strongly recommend you use the Upgrade Assistant.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/migration-api-deprecation.html | Elasticsearch API documentation}

@@ -19,4 +19,4 @@ */

/**
* Find out whether system features need to be upgraded or not
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/migration-api-feature-upgrade.html | Elasticsearch API documentation}
* Get feature migration information. Version upgrades sometimes require changes to how features store configuration information and data in system indices. Check which features need to be migrated and the status of any migrations that are in progress. TIP: This API is designed for indirect use by the Upgrade Assistant. You are strongly recommended to use the Upgrade Assistant.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/feature-migration-api.html | Elasticsearch API documentation}
*/

@@ -27,4 +27,4 @@ getFeatureUpgradeStatus(this: That, params?: T.MigrationGetFeatureUpgradeStatusRequest | TB.MigrationGetFeatureUpgradeStatusRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MigrationGetFeatureUpgradeStatusResponse>;

/**
* Begin upgrades for system features
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/migration-api-feature-upgrade.html | Elasticsearch API documentation}
* Start the feature migration. Version upgrades sometimes require changes to how features store configuration information and data in system indices. This API starts the automatic migration process. Some functionality might be temporarily unavailable during the migration process. TIP: The API is designed for indirect use by the Upgrade Assistant. We strongly recommend you use the Upgrade Assistant.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/feature-migration-api.html | Elasticsearch API documentation}
*/

@@ -31,0 +31,0 @@ postFeatureUpgrade(this: That, params?: T.MigrationPostFeatureUpgradeRequest | TB.MigrationPostFeatureUpgradeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MigrationPostFeatureUpgradeResponse>;

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Used by the monitoring features to send monitoring data.
* Send monitoring data. This API is used by the monitoring features to send monitoring data.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/monitor-elasticsearch-cluster.html | Elasticsearch API documentation}

@@ -14,0 +14,0 @@ */

@@ -8,4 +8,4 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Run multiple templated searches.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-multi-search.html | Elasticsearch API documentation}
* Run multiple templated searches. Run multiple templated searches with a single request. If you are providing a text file or text input to `curl`, use the `--data-binary` flag instead of `-d` to preserve newlines. For example: ``` $ cat requests { "index": "my-index" } { "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }} { "index": "my-other-index" } { "id": "my-other-search-template", "params": { "query_type": "match_all" }} $ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/template --data-binary "@requests"; echo ```
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/multi-search-template.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function MsearchTemplateApi<TDocument = unknown, TAggregations = Record<T.AggregateName, T.AggregationsAggregate>>(this: That, params: T.MsearchTemplateRequest | TB.MsearchTemplateRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.MsearchTemplateResponse<TDocument, TAggregations>>;

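The client equivalent of the `curl` example above, using the same placeholder indices, template ids, and params:

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const resp = await client.msearchTemplate({
  search_templates: [
    { index: 'my-index' },
    { id: 'my-search-template', params: { query_string: 'hello world', from: 0, size: 10 } },
    { index: 'my-other-index' },
    { id: 'my-other-search-template', params: { query_type: 'match_all' } }
  ]
})
```
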
@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get multiple term vectors. You can specify existing documents by index and ID or provide artificial documents in the body of the request. You can specify the index in the request body or request URI. The response contains a `docs` array with all the fetched termvectors. Each element has the structure provided by the termvectors API.
* Get multiple term vectors. Get multiple term vectors with a single request. You can specify existing documents by index and ID or provide artificial documents in the body of the request. You can specify the index in the request body or request URI. The response contains a `docs` array with all the fetched termvectors. Each element has the structure provided by the termvectors API. **Artificial documents** You can also use `mtermvectors` to generate term vectors for artificial documents provided in the body of the request. The mapping used is determined by the specified `_index`.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-multi-termvectors.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* You can use this API to clear the archived repositories metering information in the cluster.
* Clear the archived repositories metering. Clear the archived repositories metering information in the cluster.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/clear-repositories-metering-archive-api.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* You can use the cluster repositories metering API to retrieve repositories metering information in a cluster. This API exposes monotonically non-decreasing counters and it’s expected that clients would durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it won’t be present after node restarts.
* Get cluster repositories metering. Get repositories metering information for a cluster. This API exposes monotonically non-decreasing counters and it is expected that clients would durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-repositories-metering-api.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* This API yields a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of each node’s top hot threads.
* Get the hot threads for nodes. Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-nodes-hot-threads.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Returns cluster nodes information.
* Get node information. By default, the API returns all attributes and core settings for cluster nodes.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-nodes-info.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Reloads the keystore on nodes in the cluster.
* Reload the keystore on nodes in the cluster. Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node. When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/secure-settings.html#reloadable-secure-settings | Elasticsearch API documentation}

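A sketch of a cluster-wide reload with a password-protected keystore (the password value is a placeholder):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const resp = await client.nodes.reloadSecureSettings({
  secure_settings_password: 'keystore-password' // omit for an obfuscated (non-password) keystore
})
console.log(Object.keys(resp.nodes)) // per-node results, including any reload errors
```
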
@@ -51,3 +51,3 @@ */

/**
* Returns cluster nodes statistics.
* Get node statistics. Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-nodes-stats.html | Elasticsearch API documentation}

@@ -59,3 +59,3 @@ */

/**
* Returns information on the usage of features.
* Get feature usage information.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-nodes-usage.html | Elasticsearch API documentation}

@@ -62,0 +62,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Open a point in time. A search request by default runs against the most recent visible data of the target indices, which is called point in time. Elasticsearch pit (point in time) is a lightweight view into the state of the data as it existed when initiated. In some cases, it’s preferred to perform multiple search requests using the same point in time. For example, if refreshes happen between `search_after` requests, then the results of those requests might not be consistent as changes happening between searches are only visible to the more recent point in time. A point in time must be opened explicitly before being used in search requests. The `keep_alive` parameter tells Elasticsearch how long it should persist.
* Open a point in time. A search request by default runs against the most recent visible data of the target indices, which is called point in time. Elasticsearch pit (point in time) is a lightweight view into the state of the data as it existed when initiated. In some cases, it’s preferred to perform multiple search requests using the same point in time. For example, if refreshes happen between `search_after` requests, then the results of those requests might not be consistent as changes happening between searches are only visible to the more recent point in time. A point in time must be opened explicitly before being used in search requests. A subsequent search request with the `pit` parameter must not specify `index`, `routing`, or `preference` values as these parameters are copied from the point in time. Just like regular searches, you can use `from` and `size` to page through point in time search results, up to the first 10,000 hits. If you want to retrieve more hits, use PIT with `search_after`. IMPORTANT: The open point in time request and each subsequent search request can return different identifiers; always use the most recently received ID for the next search request. When a PIT that contains shard failures is used in a search request, the missing shards are always reported in the search response as a `NoShardAvailableActionException` exception. To get rid of these exceptions, a new PIT needs to be created so that shards missing from the previous PIT can be handled, assuming they become available in the meantime. **Keeping point in time alive** The `keep_alive` parameter, which is passed to an open point in time request and search request, extends the time to live of the corresponding point in time. The value does not need to be long enough to process all data — it just needs to be long enough for the next request. Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed they are deleted. However, open point-in-times prevent the old segments from being deleted since they are still in use. TIP: Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles. Additionally, if a segment contains deleted or updated documents then the point in time must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open point-in-times on an index that is subject to ongoing deletes or updates. Note that a point-in-time doesn't prevent its associated indices from being deleted. You can check how many point-in-times (that is, search contexts) are open with the nodes stats API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/point-in-time-api.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

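A sketch of the open/search/close cycle, always reusing the most recently returned PIT id as advised above (the index is a placeholder):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

const pit = await client.openPointInTime({ index: 'my-index', keep_alive: '1m' })
try {
  const page = await client.search({
    size: 100,
    pit: { id: pit.id, keep_alive: '1m' }, // no index/routing/preference here
    sort: ['_shard_doc']
  })
  // Feed page.pit_id and search_after into the next request for deeper paging.
} finally {
  await client.closePointInTime({ id: pit.id })
}
```
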
@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Ping the cluster. Returns whether the cluster is running.
* Ping the cluster. Get information about whether the cluster is running.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/index.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Create or update a script or search template. Creates or updates a stored script or search template.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-scripting.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/create-stored-script-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function PutScriptApi(this: That, params: T.PutScriptRequest | TB.PutScriptRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.PutScriptResponse>;

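A minimal stored-script sketch (the script id, field, and param names are placeholders):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.putScript({
  id: 'my-script',
  script: { lang: 'painless', source: "doc['my_field'].value * params.factor" }
})
```
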
@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Delete a query rule. Delete a query rule within a query ruleset.
* Delete a query rule. Delete a query rule within a query ruleset. This is a destructive action that is only recoverable by re-adding the same rule with the create or update query rule API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-query-rule.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Delete a query ruleset.
* Delete a query ruleset. Remove a query ruleset and its associated data. This is a destructive action that is not recoverable.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-query-ruleset.html | Elasticsearch API documentation}

@@ -48,3 +48,3 @@ */

/**
* Create or update a query rule. Create or update a query rule within a query ruleset.
* Create or update a query rule. Create or update a query rule within a query ruleset. IMPORTANT: Due to limitations within pinned queries, you can only pin documents using `ids` or `docs`, but cannot use both in a single rule. It is advised to use one or the other in query rulesets, to avoid errors. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-query-rule.html | Elasticsearch API documentation}

@@ -56,3 +56,3 @@ */

/**
* Create or update a query ruleset.
* Create or update a query ruleset. There is a limit of 100 rules per ruleset. This limit can be increased by using the `xpack.applications.rules.max_rules_per_ruleset` cluster setting. IMPORTANT: Due to limitations within pinned queries, you can only select documents using `ids` or `docs`, but cannot use both in a single rule. It is advised to use one or the other in query rulesets, to avoid errors. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-query-ruleset.html | Elasticsearch API documentation}

@@ -59,0 +59,0 @@ */

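A hedged ruleset sketch honoring the ids-or-docs constraint noted above (all names are placeholders):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.queryRules.putRuleset({
  ruleset_id: 'my-ruleset',
  rules: [{
    rule_id: 'rule-1',
    type: 'pinned',
    criteria: [{ type: 'exact', metadata: 'user_query', values: ['marvel'] }],
    actions: { ids: ['doc-1', 'doc-2'] } // pin with ids OR docs, never both in one rule
  }]
})
```
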
@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Throttle a reindex operation. Change the number of requests per second for a particular reindex operation.
* Throttle a reindex operation. Change the number of requests per second for a particular reindex operation. For example: ``` POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1 ``` Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-reindex.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

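The client form of the rethrottle example above (the task id is the same illustrative value):

```typescript
import { Client } from '@elastic/elasticsearch'
const client = new Client({ node: 'http://localhost:9200' })

await client.reindexRethrottle({
  task_id: 'r1A2WoRbTwKZ516z6NEs5A:36619',
  requests_per_second: -1 // -1 disables throttling
})
```
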
@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Reindex documents. Copies documents from a source to a destination. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.
* Reindex documents. Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself. IMPORTANT: Reindex requires `_source` to be enabled for all documents in the source. The destination should be configured as wanted before calling the reindex API. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, and replicas, for example, must be configured ahead of time. If the Elasticsearch security features are enabled, you must have the following security privileges: * The `read` index privilege for the source data stream, index, or alias. * The `write` index privilege for the destination data stream, index, or index alias. * To automatically create a data stream or index with a reindex API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege for the destination data stream, index, or alias. * If reindexing from a remote cluster, the `source.remote.user` must have the `monitor` cluster privilege and the `read` index privilege for the source data stream, index, or alias. If reindexing from a remote cluster, you must explicitly allow the remote host in the `reindex.remote.whitelist` setting. Automatic data stream creation requires a matching index template with data stream enabled. The `dest` element can be configured like the index API to control optimistic concurrency control. Omitting `version_type` or setting it to `internal` causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID. Setting `version_type` to `external` causes Elasticsearch to preserve the `version` from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source. Setting `op_type` to `create` causes the reindex API to create only missing documents in the destination. All existing documents will cause a version conflict. IMPORTANT: Because data streams are append-only, any reindex request to a destination data stream must have an `op_type` of `create`. A reindex can only add new documents to a destination data stream. It cannot update existing documents in a destination data stream. By default, version conflicts abort the reindex process. To continue reindexing if there are conflicts, set the `conflicts` request body property to `proceed`. In this case, the response includes a count of the version conflicts that were encountered. Note that the handling of other error types is unaffected by the `conflicts` property. Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than `max_docs` until it has successfully indexed `max_docs` documents into the target or it has gone through every document in the source query. NOTE: The reindex API makes no effort to handle ID collisions. The last document written will "win" but the order isn't usually predictable so it is not a good idea to rely on this behavior. Instead, make sure that IDs are unique by using a script. **Running reindex asynchronously** If the request contains `wait_for_completion=false`, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. 
Elasticsearch creates a record of this task as a document at `_tasks/<task_id>`. **Reindex from multiple sources** If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. That way you can resume the process if there are any errors by removing the partially completed source and starting over. It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel. For example, you can use a bash script like this: ``` for index in i1 i2 i3 i4 i5; do curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{ "source": { "index": "'$index'" }, "dest": { "index": "'$index'-reindexed" } }' done ``` **Throttling** Set `requests_per_second` to any positive decimal number (`1.4`, `6`, `1000`, for example) to throttle the rate at which reindex issues batches of index operations. Requests are throttled by padding each batch with a wait time. To turn off throttling, set `requests_per_second` to `-1`. The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`: ``` target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds ``` Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth". **Slicing** Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts. NOTE: Reindexing from remote clusters does not support manual or automatic slicing. You can slice a reindex request manually by providing a slice ID and total number of slices to each request. You can also let reindex automatically parallelize by using sliced scroll to slice on `_id`. The `slices` parameter specifies the number of slices to use. Adding `slices` to the reindex request just automates the manual process, creating sub-requests which means it has some quirks: * You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with slices. * Fetching the status of the task for the request with `slices` only contains the status of completed slices. * These sub-requests are individually addressable for things like cancellation and rethrottling. * Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally. * Canceling the request with `slices` will cancel each sub-request. * Due to the nature of `slices`, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. * Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. Combine that with the previous point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being reindexed. 
* Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time. If slicing automatically, setting `slices` to `auto` will choose a reasonable number for most indices. If slicing manually or otherwise tuning automatic slicing, use the following guidelines. Query performance is most efficient when the number of slices is equal to the number of shards in the index. If that number is large (for example, `500`), choose a lower number as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead. Indexing performance scales linearly across available resources with the number of slices. Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources. **Modify documents during reindexing** Like `_update_by_query`, reindex operations support a script that modifies the document. Unlike `_update_by_query`, the script is allowed to modify the document's metadata. Just as in `_update_by_query`, you can set `ctx.op` to change the operation that is run on the destination. For example, set `ctx.op` to `noop` if your script decides that the document doesn’t have to be indexed in the destination. This "no operation" will be reported in the `noop` counter in the response body. Set `ctx.op` to `delete` if your script decides that the document must be deleted from the destination. The deletion will be reported in the `deleted` counter in the response body. Setting `ctx.op` to anything else will return an error, as will setting any other field in `ctx`. Think of the possibilities! Just be careful; you are able to change: * `_id` * `_index` * `_version` * `_routing` Setting `_version` to `null` or clearing it from the `ctx` map is just like not sending the version in an indexing request. It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API. **Reindex from remote** Reindex supports reindexing from a remote Elasticsearch cluster. The `host` parameter must contain a scheme, host, port, and optional path. The `username` and `password` parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication. Be sure to use HTTPS when using basic authentication or the password will be sent in plain text. There are a range of settings available to configure the behavior of the HTTPS connection. When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key. Remote hosts must be explicitly allowed with the `reindex.remote.whitelist` setting. It can be set to a comma delimited list of allowed remote host and port combinations. Scheme is ignored; only the host and port are used. For example: ``` reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*] ``` The list of allowed hosts must be configured on any nodes that will coordinate the reindex. This feature should work with remote clusters of any version of Elasticsearch. This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version. WARNING: Elasticsearch does not support forward compatibility across major versions. For example, you cannot reindex from a 7.x cluster into a 6.x cluster.
To enable queries sent to older versions of Elasticsearch, the `query` parameter is sent directly to the remote host without validation or modification. NOTE: Reindexing from remote clusters does not support manual or automatic slicing. Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb. If the remote index includes very large documents you'll need to use a smaller batch size. It is also possible to set the socket read timeout on the remote connection with the `socket_timeout` field and the connection timeout with the `connect_timeout` field. Both default to 30 seconds. **Configuring SSL parameters** Reindex from remote supports configurable SSL settings. These must be specified in the `elasticsearch.yml` file, with the exception of the secure settings, which you add in the Elasticsearch keystore. It is not possible to configure SSL in the body of the reindex request.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-reindex.html | Elasticsearch API documentation}

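For illustration, here is a minimal TypeScript sketch of launching an asynchronous reindex with the 8.x client and polling its task, per the doc comment above. The node URL and index names are hypothetical, and `conflicts: 'proceed'` is shown only as one option:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Launch the reindex as a background task rather than blocking on completion.
const { task } = await client.reindex({
  wait_for_completion: false,
  source: { index: 'my-index-000001' },                  // hypothetical source
  dest: { index: 'my-index-000002', op_type: 'create' }, // only create missing docs
  conflicts: 'proceed'                                   // count conflicts instead of aborting
})

// Poll the task document that Elasticsearch records at _tasks/<task_id>.
if (task != null) {
  const status = await client.tasks.get({ task_id: String(task) })
  console.log(status.completed, status.task.status)
}
```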
@@ -11,0 +11,0 @@ */

@@ -23,4 +23,4 @@ "use strict";

async function RenderSearchTemplateApi(params, options) {
const acceptedPath = ['id'];
const acceptedBody = ['file', 'params', 'source'];
const acceptedPath = [];
const acceptedBody = ['id', 'file', 'params', 'source'];
const querystring = {};

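As the hunk above shows, 8.17.1 moves `id` from the accepted path parameters into the accepted body parameters; from the caller's perspective the request shape is unchanged. A minimal sketch, assuming the `client` from the earlier example and a hypothetical stored template:

```ts
// Render a stored search template without running it, to inspect the query
// that would be generated. 'my-search-template' must already exist.
const rendered = await client.renderSearchTemplate({
  id: 'my-search-template',
  params: { query_string: 'hello world', from: 0, size: 10 }
})
console.log(rendered.template_output)
```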
@@ -27,0 +27,0 @@ // @ts-expect-error

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes an existing rollup job.
* Delete a rollup job. A job must be stopped before it can be deleted. If you attempt to delete a started job, an error occurs. Similarly, if you attempt to delete a nonexistent job, an exception occurs. IMPORTANT: When you delete a job, you remove only the process that is actively monitoring and rolling up data. The API does not delete any previously rolled up data. This is by design; a user may wish to roll up a static data set. Because the data set is static, after it has been fully rolled up there is no need to keep the indexing rollup job around (as there will be no new data). Thus the job can be deleted, leaving behind the rolled up data for analysis. If you wish to also remove the rollup data and the rollup index contains the data for only a single job, you can delete the whole rollup index. If the rollup index stores data from several jobs, you must issue a delete-by-query that targets the rollup job's identifier in the rollup index. For example: ``` POST my_rollup_index/_delete_by_query { "query": { "term": { "_rollup.id": "the_rollup_job_id" } } } ```
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-delete-job.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Retrieves the configuration, stats, and status of rollup jobs.
* Get rollup job information. Get the configuration, stats, and status of rollup jobs. NOTE: This API returns only active (both `STARTED` and `STOPPED`) jobs. If a job was created, ran for a while, then was deleted, the API does not return any details about it. For details about a historical rollup job, the rollup capabilities API may be more useful.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-get-job.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Returns the capabilities of any rollup jobs that have been configured for a specific index or index pattern.
* Get the rollup job capabilities. Get the capabilities of any rollup jobs that have been configured for a specific index or index pattern. This API is useful because a rollup job is often configured to rollup only a subset of fields from the source index. Furthermore, only certain aggregations can be configured for various fields, leading to a limited subset of functionality depending on that configuration. This API enables you to inspect an index and determine: 1. Does this index have associated rollup data somewhere in the cluster? 2. If yes to the first question, what fields were rolled up, what aggregations can be performed, and where does the data live?
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-get-rollup-caps.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Returns the rollup capabilities of all jobs inside of a rollup index (for example, the index where rollup data is stored).
* Get the rollup index capabilities. Get the rollup capabilities of all jobs inside of a rollup index. A single rollup index may store the data for multiple rollup jobs and may have a variety of capabilities depending on those jobs. This API enables you to determine: * What jobs are stored in an index (or indices specified via a pattern)? * What target indices were rolled up, what fields were used in those rollups, and what aggregations can be performed on each job?
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-get-rollup-index-caps.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Creates a rollup job.
* Create a rollup job. WARNING: From 8.15.0, calling this API in a cluster with no rollup usage will fail with a message about the deprecation and planned removal of rollup features. A cluster needs to contain either a rollup job or a rollup index in order for this API to be allowed to run. The rollup job configuration contains all the details about how the job should run, when it indexes documents, and what future queries will be able to run against the rollup index. There are three main sections to the job configuration: the logistical details about the job (for example, the cron schedule), the fields that are used for grouping, and what metrics to collect for each group. Jobs are created in a `STOPPED` state. You can start them with the start rollup jobs API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-put-job.html | Elasticsearch API documentation}

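To make the three configuration sections mentioned above concrete (logistics, grouping, metrics), here is a sketch of creating and then starting a rollup job. Every name, field, and interval is illustrative:

```ts
// Sketch of a rollup job over hourly sensor data.
await client.rollup.putJob({
  id: 'sensor-rollup',
  index_pattern: 'sensor-*',
  rollup_index: 'sensor_rollup',
  cron: '0 0 * * * *',   // check for new data at the top of every hour
  page_size: 1000,
  groups: {
    date_histogram: { field: 'timestamp', fixed_interval: '1h' },
    terms: { fields: ['node'] }
  },
  metrics: [{ field: 'temperature', metrics: ['min', 'max', 'avg'] }]
})

// Jobs are created in a STOPPED state; start explicitly.
await client.rollup.startJob({ id: 'sensor-rollup' })
```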
@@ -51,3 +51,3 @@ */

/**
* Enables searching rolled-up data using the standard Query DSL.
* Search rolled-up data. The rollup search endpoint is needed because, internally, rolled-up documents utilize a different document structure than the original data. It rewrites standard Query DSL into a format that matches the rollup documents then takes the response and rewrites it back to what a client would expect given the original query. The request body supports a subset of features from the regular search API. The following functionality is not available: `size`: Because rollups work on pre-aggregated data, no search hits can be returned and so size must be set to zero or omitted entirely. `highlighter`, `suggestors`, `post_filter`, `profile`, `explain`: These are similarly disallowed. **Searching both historical rollup and non-rollup data** The rollup search API has the capability to search across both "live" non-rollup data and the aggregated rollup data. This is done by simply adding the live indices to the URI. For example: ``` GET sensor-1,sensor_rollup/_rollup_search { "size": 0, "aggregations": { "max_temperature": { "max": { "field": "temperature" } } } } ``` The rollup search endpoint does two things when the search runs: * The original request is sent to the non-rollup index unaltered. * A rewritten version of the original request is sent to the rollup index. When the two responses are received, the endpoint rewrites the rollup response and merges the two together. During the merging process, if there is any overlap in buckets between the two responses, the buckets from the non-rollup index are used.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-search.html | Elasticsearch API documentation}

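A sketch of the combined live-plus-rollup search described above, using the client rather than the raw URI form; the index names mirror the hypothetical example in the doc comment:

```ts
// Search live and rolled-up data together; `size` must be 0 (or omitted)
// because rollups work on pre-aggregated data.
const result = await client.rollup.rollupSearch({
  index: 'sensor-1,sensor_rollup',
  size: 0,
  aggregations: {
    max_temperature: { max: { field: 'temperature' } }
  }
})
console.log(result.aggregations)
```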
@@ -59,3 +59,3 @@ */

/**
* Starts an existing, stopped rollup job.
* Start rollup jobs. If you try to start a job that does not exist, an exception occurs. If you try to start a job that is already started, nothing happens.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-start-job.html | Elasticsearch API documentation}

@@ -67,3 +67,3 @@ */

/**
* Stops an existing, started rollup job.
* Stop rollup jobs. If you try to stop a job that does not exist, an exception occurs. If you try to stop a job that is already stopped, nothing happens. Since only a stopped job can be deleted, it can be useful to block the API until the indexer has fully stopped. This is accomplished with the `wait_for_completion` query parameter, and optionally a timeout. For example: ``` POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s ``` The parameter blocks the API call from returning until either the job has moved to STOPPED or the specified time has elapsed. If the specified time elapses without the job moving to STOPPED, a timeout exception occurs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/rollup-stop-job.html | Elasticsearch API documentation}

@@ -70,0 +70,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Run a scrolling search. IMPORTANT: The scroll API is no longer recommend for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the `search_after` parameter with a point in time (PIT). The scroll API gets large sets of results from a single scrolling search request. To get the necessary scroll ID, submit a search API request that includes an argument for the `scroll` query parameter. The `scroll` parameter indicates how long Elasticsearch should retain the search context for the request. The search response returns a scroll ID in the `_scroll_id` response body parameter. You can then use the scroll ID with the scroll API to retrieve the next batch of results for the request. If the Elasticsearch security features are enabled, the access to the results of a specific scroll ID is restricted to the user or API key that submitted the search. You can also use the scroll API to specify a new scroll parameter that extends or shortens the retention period for the search context. IMPORTANT: Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-request-body.html#request-body-search-scroll | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/scroll-api.html | Elasticsearch API documentation}
*/

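A minimal sketch of the search-then-scroll loop the doc comment describes, assuming the `client` from the earlier example; the index name and keep-alive are illustrative:

```ts
// Page through a large result set with scroll, then free the search context.
let response = await client.search({
  index: 'my-index-000001',
  scroll: '1m',
  size: 1000,
  query: { match_all: {} }
})

while (response.hits.hits.length > 0) {
  // ...process response.hits.hits here...
  response = await client.scroll({ scroll_id: response._scroll_id!, scroll: '1m' })
}

await client.clearScroll({ scroll_id: response._scroll_id! })
```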
@@ -12,0 +12,0 @@ export default function ScrollApi<TDocument = unknown, TAggregations = Record<T.AggregateName, T.AggregationsAggregate>>(this: That, params: T.ScrollRequest | TB.ScrollRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ScrollResponse<TDocument, TAggregations>>;

@@ -39,3 +39,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Returns the existing search applications.
* Get search applications. Get information about search applications.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/list-search-applications.html | Elasticsearch API documentation}

@@ -47,8 +47,8 @@ */

/**
* Creates a behavioral analytics event for existing collection.
* @see {@link http://todo.com/tbd | Elasticsearch API documentation}
* Create a behavioral analytics collection event.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/post-analytics-collection-event.html | Elasticsearch API documentation}
*/
postBehavioralAnalyticsEvent(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
postBehavioralAnalyticsEvent(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
postBehavioralAnalyticsEvent(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
postBehavioralAnalyticsEvent(this: That, params: T.SearchApplicationPostBehavioralAnalyticsEventRequest | TB.SearchApplicationPostBehavioralAnalyticsEventRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchApplicationPostBehavioralAnalyticsEventResponse>;
postBehavioralAnalyticsEvent(this: That, params: T.SearchApplicationPostBehavioralAnalyticsEventRequest | TB.SearchApplicationPostBehavioralAnalyticsEventRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.SearchApplicationPostBehavioralAnalyticsEventResponse, unknown>>;
postBehavioralAnalyticsEvent(this: That, params: T.SearchApplicationPostBehavioralAnalyticsEventRequest | TB.SearchApplicationPostBehavioralAnalyticsEventRequest, options?: TransportRequestOptions): Promise<T.SearchApplicationPostBehavioralAnalyticsEventResponse>;
/**

@@ -69,8 +69,8 @@ * Create or update a search application.

/**
* Renders a query for given search application search parameters
* Render a search application query. Generate an Elasticsearch query using the specified query parameters and the search template associated with the search application or a default template if none is specified. If a parameter used in the search template is not specified in `params`, the parameter's default value will be used. The API returns the specific Elasticsearch query that would be generated and run by calling the search application search API. You must have `read` privileges on the backing alias of the search application.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-application-render-query.html | Elasticsearch API documentation}
*/
renderQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
renderQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
renderQuery(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
renderQuery(this: That, params: T.SearchApplicationRenderQueryRequest | TB.SearchApplicationRenderQueryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchApplicationRenderQueryResponse>;
renderQuery(this: That, params: T.SearchApplicationRenderQueryRequest | TB.SearchApplicationRenderQueryRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.SearchApplicationRenderQueryResponse, unknown>>;
renderQuery(this: That, params: T.SearchApplicationRenderQueryRequest | TB.SearchApplicationRenderQueryRequest, options?: TransportRequestOptions): Promise<T.SearchApplicationRenderQueryResponse>;
/**

@@ -77,0 +77,0 @@ * Run a search application search. Generate and run an Elasticsearch query that uses the specified query parameter and the search template associated with the search application or default template. Unspecified template parameters are assigned their default values if applicable.

@@ -154,11 +154,18 @@ "use strict";

async postBehavioralAnalyticsEvent(params, options) {
var _a;
const acceptedPath = ['collection_name', 'event_type'];
const acceptedBody = ['payload'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
let body = (_a = params.body) !== null && _a !== void 0 ? _a : undefined;
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
// @ts-expect-error
body = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

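For reference, a sketch of calling this endpoint through the client; as the hunk above shows, `payload` is now sent as the request body. The collection name, ids, and payload shape below are illustrative and loosely follow the behavioral analytics event examples in the Elasticsearch docs:

```ts
// Record a search_click event against a hypothetical analytics collection.
await client.searchApplication.postBehavioralAnalyticsEvent({
  collection_name: 'my-analytics-collection',
  event_type: 'search_click',
  payload: {
    session: { id: '1797ca95-91c9-4e2e-b1bd-9c38e6f386a9' },
    user: { id: '5f26f01a-bbee-4202-9298-81261067abbd' },
    search: { query: 'cute cats', results: { total_results: 10 } }
  }
})
```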
@@ -233,10 +240,24 @@ }

const acceptedPath = ['name'];
const acceptedBody = ['params'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -243,0 +264,0 @@ }

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Search a vector tile. Search a vector tile for geospatial values.
* Search a vector tile. Search a vector tile for geospatial values. Before using this API, you should be familiar with the Mapbox vector tile specification. The API returns results as a binary Mapbox vector tile. Internally, Elasticsearch translates a vector tile search API request into a search containing: * A `geo_bounding_box` query on the `<field>`. The query uses the `<zoom>/<x>/<y>` tile as a bounding box. * A `geotile_grid` or `geohex_grid` aggregation on the `<field>`. The `grid_agg` parameter determines the aggregation type. The aggregation uses the `<zoom>/<x>/<y>` tile as a bounding box. * Optionally, a `geo_bounds` aggregation on the `<field>`. The search only includes this aggregation if the `exact_bounds` parameter is `true`. * If the optional parameter `with_labels` is `true`, the internal search will include a dynamic runtime field that calls the `getLabelPosition` function of the geometry doc value. This enables the generation of new point features containing suggested geometry labels, so that, for example, multi-polygons will have only one label. For example, Elasticsearch may translate a vector tile search API request with a `grid_agg` argument of `geotile` and an `exact_bounds` argument of `true` into the following search: ``` GET my-index/_search { "size": 10000, "query": { "geo_bounding_box": { "my-geo-field": { "top_left": { "lat": -40.979898069620134, "lon": -45 }, "bottom_right": { "lat": -66.51326044311186, "lon": 0 } } } }, "aggregations": { "grid": { "geotile_grid": { "field": "my-geo-field", "precision": 11, "size": 65536, "bounds": { "top_left": { "lat": -40.979898069620134, "lon": -45 }, "bottom_right": { "lat": -66.51326044311186, "lon": 0 } } } }, "bounds": { "geo_bounds": { "field": "my-geo-field", "wrap_longitude": false } } } } ``` The API returns results as a binary Mapbox vector tile. Mapbox vector tiles are encoded as Google Protobufs (PBF). By default, the tile contains three layers: * A `hits` layer containing a feature for each `<field>` value matching the `geo_bounding_box` query. * An `aggs` layer containing a feature for each cell of the `geotile_grid` or `geohex_grid`. The layer only contains features for cells with matching data. * A `meta` layer containing: * A feature containing a bounding box. By default, this is the bounding box of the tile. * Value ranges for any sub-aggregations on the `geotile_grid` or `geohex_grid`. * Metadata for the search. The API only returns features that can display at its zoom level. For example, if a polygon feature has no area at its zoom level, the API omits it. The API returns errors as UTF-8 encoded JSON. IMPORTANT: You can specify several options for this API as either a query parameter or request body parameter. If you specify both parameters, the query parameter takes precedence. **Grid precision for geotile** For a `grid_agg` of `geotile`, you can use cells in the `aggs` layer as tiles for lower zoom levels. `grid_precision` represents the additional zoom levels available through these cells. The final precision is computed as follows: `<zoom> + grid_precision`. For example, if `<zoom>` is 7 and `grid_precision` is 8, then the `geotile_grid` aggregation will use a precision of 15. The maximum final precision is 29. The `grid_precision` also determines the number of cells for the grid as follows: `(2^grid_precision) x (2^grid_precision)`. For example, a value of 8 divides the tile into a grid of 256 x 256 cells. The `aggs` layer only contains features for cells with matching data. 
**Grid precision for geohex** For a `grid_agg` of `geohex`, Elasticsearch uses `<zoom>` and `grid_precision` to calculate a final precision as follows: `<zoom> + grid_precision`. This precision determines the H3 resolution of the hexagonal cells produced by the `geohex` aggregation. The following table maps the H3 resolution for each precision. For example, if `<zoom>` is 3 and `grid_precision` is 3, the precision is 6. At a precision of 6, hexagonal cells have an H3 resolution of 2. If `<zoom>` is 3 and `grid_precision` is 4, the precision is 7. At a precision of 7, hexagonal cells have an H3 resolution of 3. | Precision | Unique tile bins | H3 resolution | Unique hex bins | Ratio | | --------- | ---------------- | ------------- | ----------------| ----- | | 1 | 4 | 0 | 122 | 30.5 | | 2 | 16 | 0 | 122 | 7.625 | | 3 | 64 | 1 | 842 | 13.15625 | | 4 | 256 | 1 | 842 | 3.2890625 | | 5 | 1024 | 2 | 5882 | 5.744140625 | | 6 | 4096 | 2 | 5882 | 1.436035156 | | 7 | 16384 | 3 | 41162 | 2.512329102 | | 8 | 65536 | 3 | 41162 | 0.6280822754 | | 9 | 262144 | 4 | 288122 | 1.099098206 | | 10 | 1048576 | 4 | 288122 | 0.2747745514 | | 11 | 4194304 | 5 | 2016842 | 0.4808526039 | | 12 | 16777216 | 6 | 14117882 | 0.8414913416 | | 13 | 67108864 | 6 | 14117882 | 0.2103728354 | | 14 | 268435456 | 7 | 98825162 | 0.3681524172 | | 15 | 1073741824 | 8 | 691776122 | 0.644266719 | | 16 | 4294967296 | 8 | 691776122 | 0.1610666797 | | 17 | 17179869184 | 9 | 4842432842 | 0.2818666889 | | 18 | 68719476736 | 10 | 33897029882 | 0.4932667053 | | 19 | 274877906944 | 11 | 237279209162 | 0.8632167343 | | 20 | 1099511627776 | 11 | 237279209162 | 0.2158041836 | | 21 | 4398046511104 | 12 | 1660954464122 | 0.3776573213 | | 22 | 17592186044416 | 13 | 11626681248842 | 0.6609003122 | | 23 | 70368744177664 | 13 | 11626681248842 | 0.165225078 | | 24 | 281474976710656 | 14 | 81386768741882 | 0.2891438866 | | 25 | 1125899906842620 | 15 | 569707381193162 | 0.5060018015 | | 26 | 4503599627370500 | 15 | 569707381193162 | 0.1265004504 | | 27 | 18014398509482000 | 15 | 569707381193162 | 0.03162511259 | | 28 | 72057594037927900 | 15 | 569707381193162 | 0.007906278149 | | 29 | 288230376151712000 | 15 | 569707381193162 | 0.001976569537 | Hexagonal cells don't align perfectly on a vector tile. Some cells may intersect more than one vector tile. To compute the H3 resolution for each precision, Elasticsearch compares the average density of hexagonal bins at each resolution with the average density of tile bins at each zoom level. Elasticsearch uses the H3 resolution that is closest to the corresponding geotile density.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-vector-tile-api.html | Elasticsearch API documentation}

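A sketch of fetching a single tile through the client, assuming the `client` from the earlier example; the index, geo field, and tile coordinates are illustrative:

```ts
// Fetch one tile (zoom 13, x 4207, y 2692) as a binary Mapbox vector tile.
// The response body is PBF, not JSON.
const tile = await client.searchMvt({
  index: 'museums',
  field: 'location',   // a geo_point or geo_shape field
  zoom: 13,
  x: 4207,
  y: 2692,
  grid_agg: 'geotile',
  grid_precision: 8,   // 256 x 256 grid cells per tile
  exact_bounds: false
})
```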
@@ -11,0 +11,0 @@ */

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get the search shards. Get the indices and shards that a search request would be run against. This information can be useful for working out issues or planning optimizations with routing and shard preferences. When filtered aliases are used, the filter is returned as part of the indices section.
* Get the search shards. Get the indices and shards that a search request would be run against. This information can be useful for working out issues or planning optimizations with routing and shard preferences. When filtered aliases are used, the filter is returned as part of the `indices` section. If the Elasticsearch security features are enabled, you must have the `view_index_metadata` or `manage` index privilege for the target data stream, index, or alias.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-shards.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

* Run a search with a search template.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-template.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-template-api.html | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function SearchTemplateApi<TDocument = unknown>(this: That, params?: T.SearchTemplateRequest | TB.SearchTemplateRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchTemplateResponse<TDocument>>;

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Run a search. Get search hits that match the query defined in the request. You can provide search queries using the `q` query string parameter or the request body. If both are specified, only the query parameter is used.
* Run a search. Get search hits that match the query defined in the request. You can provide search queries using the `q` query string parameter or the request body. If both are specified, only the query parameter is used. If the Elasticsearch security features are enabled, you must have the read index privilege for the target data stream, index, or alias. For cross-cluster search, refer to the documentation about configuring CCS privileges. To search a point in time (PIT) for an alias, you must have the `read` index privilege for the alias's data streams or indices. **Search slicing** When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently with the `slice` and `pit` properties. By default the splitting is done first on the shards, then locally on each shard. The local splitting partitions the shard into contiguous ranges based on Lucene document IDs. For instance if the number of shards is equal to 2 and you request 4 slices, the slices 0 and 2 are assigned to the first shard and the slices 1 and 3 are assigned to the second shard. IMPORTANT: The same point-in-time ID should be used for all slices. If different PIT IDs are used, slices can overlap and miss documents. This situation can occur because the splitting criterion is based on Lucene document IDs, which are not stable across changes to the index.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-search.html | Elasticsearch API documentation}

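A sketch of the slicing pattern described above, reusing one PIT id for every slice as the IMPORTANT note requires; the index name and slice count are illustrative:

```ts
// Split one large search into two slices over a single point in time.
const pit = await client.openPointInTime({ index: 'my-index-000001', keep_alive: '1m' })

const slices = await Promise.all([0, 1].map(id =>
  client.search({
    pit: { id: pit.id, keep_alive: '1m' },  // same PIT id for all slices
    slice: { id: String(id), max: 2 },
    size: 1000,
    query: { match_all: {} }
  })
))

await client.closePointInTime({ id: pit.id })
```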
@@ -11,0 +11,0 @@ */

@@ -11,4 +11,4 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Retrieve node-level cache statistics about searchable snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-apis.html | Elasticsearch API documentation}
* Get cache statistics. Get statistics about the shared cache for partially mounted indices.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-api-cache-stats.html | Elasticsearch API documentation}
*/

@@ -19,4 +19,4 @@ cacheStats(this: That, params?: T.SearchableSnapshotsCacheStatsRequest | TB.SearchableSnapshotsCacheStatsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchableSnapshotsCacheStatsResponse>;

/**
* Clear the cache of searchable snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-apis.html | Elasticsearch API documentation}
* Clear the cache. Clear indices and data streams from the shared cache for partially mounted indices.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-api-clear-cache.html | Elasticsearch API documentation}
*/

@@ -27,3 +27,3 @@ clearCache(this: That, params?: T.SearchableSnapshotsClearCacheRequest | TB.SearchableSnapshotsClearCacheRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchableSnapshotsClearCacheResponse>;

/**
* Mount a snapshot as a searchable index.
* Mount a snapshot. Mount a snapshot as a searchable snapshot index. Do not use this API for snapshots managed by index lifecycle management (ILM). Manually mounting ILM-managed snapshots can interfere with ILM processes.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-api-mount-snapshot.html | Elasticsearch API documentation}

@@ -35,4 +35,4 @@ */

/**
* Retrieve shard-level statistics about searchable snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-apis.html | Elasticsearch API documentation}
* Get searchable snapshot statistics.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/searchable-snapshots-api-stats.html | Elasticsearch API documentation}
*/

@@ -39,0 +39,0 @@ stats(this: That, params?: T.SearchableSnapshotsStatsRequest | TB.SearchableSnapshotsStatsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SearchableSnapshotsStatsResponse>;

@@ -11,4 +11,4 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Removes a node from the shutdown list. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/current | Elasticsearch API documentation}
* Cancel node shutdown preparations. Remove a node from the shutdown list so it can resume normal operations. You must explicitly clear the shutdown request when a node rejoins the cluster or when a node has permanently left the cluster. Shutdown requests are never removed automatically by Elasticsearch. NOTE: This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported. If the operator privileges feature is enabled, you must be an operator to use this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-shutdown.html | Elasticsearch API documentation}
*/

@@ -19,4 +19,4 @@ deleteNode(this: That, params: T.ShutdownDeleteNodeRequest | TB.ShutdownDeleteNodeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ShutdownDeleteNodeResponse>;

/**
* Retrieve status of a node or nodes that are currently marked as shutting down. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/current | Elasticsearch API documentation}
* Get the shutdown status. Get information about nodes that are ready to be shut down, have shut down preparations still in progress, or have stalled. The API returns status information for each part of the shut down process. NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported. If the operator privileges feature is enabled, you must be an operator to use this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-shutdown.html | Elasticsearch API documentation}
*/

@@ -27,4 +27,4 @@ getNode(this: That, params?: T.ShutdownGetNodeRequest | TB.ShutdownGetNodeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ShutdownGetNodeResponse>;

/**
* Adds a node to be shut down. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/current | Elasticsearch API documentation}
* Prepare a node to be shut down. NOTE: This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported. If you specify a node that is offline, it will be prepared for shut down when it rejoins the cluster. If the operator privileges feature is enabled, you must be an operator to use this API. The API migrates ongoing tasks and index shards to other nodes as needed to prepare a node to be restarted or shut down and removed from the cluster. This ensures that Elasticsearch can be stopped safely with minimal disruption to the cluster. You must specify the type of shutdown: `restart`, `remove`, or `replace`. If a node is already being prepared for shutdown, you can use this API to change the shutdown type. IMPORTANT: This API does NOT terminate the Elasticsearch process. Monitor the node shutdown status to determine when it is safe to stop Elasticsearch.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-shutdown.html | Elasticsearch API documentation}
*/

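A sketch of preparing a node for a rolling restart and polling its status, assuming the `client` from the earlier example; the node id and allocation delay are illustrative, and note again that this does NOT stop the Elasticsearch process itself:

```ts
// Mark a node for restart; shards stay allocated for the delay window.
await client.shutdown.putNode({
  node_id: 'USpTGYaBSIKbgSUJR2Z9lg',
  type: 'restart',
  reason: 'routine maintenance',
  allocation_delay: '10m'
})

// Poll the shutdown status until it reports COMPLETE before restarting.
const { nodes } = await client.shutdown.getNode({ node_id: 'USpTGYaBSIKbgSUJR2Z9lg' })
console.log(nodes[0]?.status)
```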
@@ -31,0 +31,0 @@ putNode(this: That, params: T.ShutdownPutNodeRequest | TB.ShutdownPutNodeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.ShutdownPutNodeResponse>;

@@ -11,9 +11,9 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Simulates running ingest with example documents.
* Simulate data ingestion. Run ingest pipelines against a set of provided documents, optionally with substitute pipeline definitions, to simulate ingesting data into an index. This API is meant to be used for troubleshooting or pipeline development, as it does not actually index any data into Elasticsearch. The API runs the default and final pipeline for that index against a set of documents provided in the body of the request. If a pipeline contains a reroute processor, it follows that reroute processor to the new index, running that index's pipelines as well, in the same way that a non-simulated ingest would. No data is indexed into Elasticsearch. Instead, the transformed document is returned, along with the list of pipelines that have been run and the name of the index where the document would have been indexed if this were not a simulation. The transformed document is validated against the mappings that would apply to this index, and any validation error is reported in the result. This API differs from the simulate pipeline API in that you specify a single pipeline for that API, and it runs only that one pipeline. The simulate pipeline API is more useful for developing a single pipeline, while the simulate ingest API is more useful for troubleshooting the interaction of the various pipelines that get applied when ingesting into an index. By default, the pipeline definitions that are currently in the system are used. However, you can supply substitute pipeline definitions in the body of the request. These will be used in place of the pipeline definitions that are already in the system. This can be used to replace existing pipeline definitions or to create new ones. The pipeline substitutions are used only within this request.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/simulate-ingest-api.html | Elasticsearch API documentation}
*/
ingest(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
ingest(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
ingest(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
ingest(this: That, params: T.SimulateIngestRequest | TB.SimulateIngestRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SimulateIngestResponse>;
ingest(this: That, params: T.SimulateIngestRequest | TB.SimulateIngestRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.SimulateIngestResponse, unknown>>;
ingest(this: That, params: T.SimulateIngestRequest | TB.SimulateIngestRequest, options?: TransportRequestOptions): Promise<T.SimulateIngestResponse>;
}
export {};

@@ -33,10 +33,24 @@ "use strict";

const acceptedPath = ['index'];
const acceptedBody = ['docs', 'component_template_substitutions', 'index_template_substitutions', 'mapping_addition', 'pipeline_substitutions'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -43,0 +57,0 @@ }

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Deletes an existing snapshot lifecycle policy.
* Delete a policy. Delete a snapshot lifecycle policy definition. This operation prevents any future snapshots from being taken but does not cancel in-progress snapshots or remove previously-taken snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-delete-policy.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Immediately creates a snapshot according to the lifecycle policy, without waiting for the scheduled time.
* Run a policy. Immediately create a snapshot according to the snapshot lifecycle policy without waiting for the scheduled time. The snapshot policy is normally applied according to its schedule, but you might want to manually run a policy before performing an upgrade or other maintenance.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-execute-lifecycle.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Deletes any snapshots that are expired according to the policy's retention rules.
* Run a retention policy. Manually apply the retention policy to force immediate removal of snapshots that are expired according to the snapshot lifecycle policy retention rules. The retention policy is normally applied according to its schedule.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-execute-retention.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Retrieves one or more snapshot lifecycle policy definitions and information about the latest snapshot attempts.
* Get policy information. Get snapshot lifecycle policy definitions and information about the latest snapshot attempts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-get-policy.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* Returns global and policy-level statistics about actions taken by snapshot lifecycle management.
* Get snapshot lifecycle management statistics. Get global and policy-level statistics about actions taken by snapshot lifecycle management.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-get-stats.html | Elasticsearch API documentation}

@@ -51,3 +51,3 @@ */

/**
* Retrieves the status of snapshot lifecycle management (SLM).
* Get the snapshot lifecycle management status.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-get-status.html | Elasticsearch API documentation}

@@ -59,3 +59,3 @@ */

/**
* Creates or updates a snapshot lifecycle policy.
* Create or update a policy. Create or update a snapshot lifecycle policy. If the policy already exists, this request increments the policy version. Only the latest version of a policy is stored.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-put-policy.html | Elasticsearch API documentation}

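A sketch of the create-or-update call, assuming the `client` from the earlier example and an already-registered repository; the policy names, schedule, and retention values are illustrative:

```ts
// A nightly snapshot policy; updating it again would increment its version.
await client.slm.putLifecycle({
  policy_id: 'nightly-snapshots',
  name: '<nightly-snap-{now/d}>',   // date-math snapshot name
  schedule: '0 30 1 * * ?',         // 01:30 every day
  repository: 'my_repository',
  config: { indices: ['data-*'], include_global_state: false },
  retention: { expire_after: '30d', min_count: 5, max_count: 50 }
})
```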
@@ -67,3 +67,3 @@ */

/**
* Turns on snapshot lifecycle management (SLM).
* Start snapshot lifecycle management. Snapshot lifecycle management (SLM) starts automatically when a cluster is formed. Manually starting SLM is necessary only if it has been stopped using the stop SLM API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-start.html | Elasticsearch API documentation}

@@ -75,3 +75,3 @@ */

/**
* Turns off snapshot lifecycle management (SLM).
* Stop snapshot lifecycle management. Stop all snapshot lifecycle management (SLM) operations and the SLM plugin. This API is useful when you are performing maintenance on a cluster and need to prevent SLM from performing any actions on your data streams or indices. Stopping SLM does not stop any snapshots that are in progress. You can manually trigger snapshots with the run snapshot lifecycle policy API even if SLM is stopped. The API returns a response as soon as the request is acknowledged, but the plugin might continue to run until in-progress operations complete and it can be safely stopped. Use the get snapshot lifecycle management status API to see if SLM is running.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/slm-api-stop.html | Elasticsearch API documentation}

@@ -78,0 +78,0 @@ */

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Triggers the review of a snapshot repository’s contents and deletes any stale data not referenced by existing snapshots.
* Clean up the snapshot repository. Trigger the review of the contents of a snapshot repository and delete any stale data not referenced by existing snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/clean-up-snapshot-repo-api.html | Elasticsearch API documentation}

@@ -19,4 +19,4 @@ */

/**
* Clones indices from one snapshot into another snapshot in the same repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Clone a snapshot. Clone part or all of a snapshot into another snapshot in the same repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/clone-snapshot-api.html | Elasticsearch API documentation}
*/

@@ -27,4 +27,4 @@ clone(this: That, params: T.SnapshotCloneRequest | TB.SnapshotCloneRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotCloneResponse>;

/**
* Creates a snapshot in a repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Create a snapshot. Take a snapshot of a cluster or of data streams and indices.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/create-snapshot-api.html | Elasticsearch API documentation}
*/

@@ -35,3 +35,3 @@ create(this: That, params: T.SnapshotCreateRequest | TB.SnapshotCreateRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotCreateResponse>;

/**
* Creates a repository.
* Create or update a snapshot repository. IMPORTANT: If you are migrating searchable snapshots, the repository name must be identical in the source and destination clusters. To register a snapshot repository, the cluster's global metadata must be writeable. Ensure there are no cluster blocks (for example, `cluster.blocks.read_only` and `cluster.blocks.read_only_allow_delete` settings) that prevent write access.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}

@@ -43,4 +43,4 @@ */

/**
* Deletes one or more snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Delete snapshots.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-snapshot-api.html | Elasticsearch API documentation}
*/

@@ -51,4 +51,4 @@ delete(this: That, params: T.SnapshotDeleteRequest | TB.SnapshotDeleteRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotDeleteResponse>;

/**
* Deletes a repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Delete snapshot repositories. When a repository is unregistered, Elasticsearch removes only the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-snapshot-repo-api.html | Elasticsearch API documentation}
*/

@@ -59,4 +59,4 @@ deleteRepository(this: That, params: T.SnapshotDeleteRepositoryRequest | TB.SnapshotDeleteRepositoryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotDeleteRepositoryResponse>;

/**
* Returns information about a snapshot.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Get snapshot information.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-api.html | Elasticsearch API documentation}
*/

@@ -67,4 +67,4 @@ get(this: That, params: T.SnapshotGetRequest | TB.SnapshotGetRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotGetResponse>;

/**
* Returns information about a repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Get snapshot repository information.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-repo-api.html | Elasticsearch API documentation}
*/

@@ -75,11 +75,11 @@ getRepository(this: That, params?: T.SnapshotGetRepositoryRequest | TB.SnapshotGetRepositoryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotGetRepositoryResponse>;

/**
* Analyzes a repository for correctness and performance
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Analyze a snapshot repository. Analyze the performance characteristics and any incorrect behaviour found in a repository. The response exposes implementation details of the analysis which may change from version to version. The response body format is therefore not considered stable and may be different in newer versions. There are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch. Some storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system. The default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations. Run your first analysis with the default parameter values to check for simple problems. If successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a `max_total_data_size` of at least `1tb`, and a `register_operation_count` of at least `100`. Always specify a generous timeout, possibly `1h` or longer, to allow time for each analysis to run to completion. Perform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once. If the analysis fails, Elasticsearch detected that your repository behaved unexpectedly. This usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support. If so, this storage system is not suitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects. If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took. You can use this information to determine the performance of your storage system. If any operation fails or returns an incorrect result, the API returns an error. If the API returns an error, it may not have removed all the data it wrote to the repository. The error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it. If the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled. Some clients are configured to close their connection if no response is received within a certain timeout. An analysis takes a long time to complete so you might need to relax any such client-side timeouts. On cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all. The path to the leftover data is recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it. 
If the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed. The analysis attempts to detect common bugs but it does not offer 100% coverage. Additionally, it does not test the following: * Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster. * Your repository must not suffer from silent data corruption. Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted. * Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results. IMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again. This consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself. You must ensure this load does not affect other users of these systems. Analyses respect the repository settings `max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available and the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to limit the bandwidth they consume. NOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions. NOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones. A storage system that passes repository analysis with one version of Elasticsearch may fail with a different version. This indicates it behaves incorrectly in ways that the former version did not detect. You must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch. NOTE: This API may not work correctly in a mixed-version cluster. *Implementation details* NOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. The request parameters and response format depend on details of the implementation so may also be different in newer versions. The analysis comprises a number of blob-level tasks, as set by the `blob_count` parameter and a number of compare-and-exchange operations on linearizable registers, as set by the `register_operation_count` parameter. These tasks are distributed over the data and master-eligible nodes in the cluster for execution. For most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote. The size of the blob is chosen randomly, according to the `max_blob_size` and `max_total_data_size` parameters. If any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires. For some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes. These reads are permitted to fail, but must not return partial data. If any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires. 
For some blob-level tasks, the executing node will overwrite the blob while its peers are reading it. In this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs. If any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites. The executing node will use a variety of different methods to write the blob. For instance, where applicable, it will use both single-part and multi-part uploads. Similarly, the reading nodes will use a variety of different methods to read the data back again. For instance they may read the entire blob from start to end or may read only a subset of the data. For some blob-level tasks, the executing node will cancel the write before it is complete. In this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob. Linearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation. This operation ensures correct and strongly-consistent behavior even when the blob is accessed by multiple nodes at the same time. The detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type. Repository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed. Repository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results. If an operation fails due to contention, Elasticsearch retries the operation until it succeeds. Most of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob. Some operations also verify the behavior on small blobs with sizes other than 8 bytes.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/repo-analysis-api.html | Elasticsearch API documentation}
*/
repositoryAnalyze(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
repositoryAnalyze(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
repositoryAnalyze(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
repositoryAnalyze(this: That, params: T.SnapshotRepositoryAnalyzeRequest | TB.SnapshotRepositoryAnalyzeRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotRepositoryAnalyzeResponse>;
repositoryAnalyze(this: That, params: T.SnapshotRepositoryAnalyzeRequest | TB.SnapshotRepositoryAnalyzeRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.SnapshotRepositoryAnalyzeResponse, unknown>>;
repositoryAnalyze(this: That, params: T.SnapshotRepositoryAnalyzeRequest | TB.SnapshotRepositoryAnalyzeRequest, options?: TransportRequestOptions): Promise<T.SnapshotRepositoryAnalyzeResponse>;
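For orientation, here is a minimal sketch of a first analysis run against the 8.17.1 client. The repository name is hypothetical, the option values mirror the conservative defaults described above, and the generous `requestTimeout` follows the note about client-side timeouts; treat it as a starting point, not a definitive recipe.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Start with a small analysis; scale blob_count and max_blob_size up in later runs.
const report = await client.snapshot.repositoryAnalyze({
  name: 'my_repository',   // path parameter (`repository` in 8.17.0, `name` in 8.17.1)
  blob_count: 100,         // number of blob-level tasks to run
  max_blob_size: '10mb',   // upper bound for any single blob
  timeout: '1h'            // allow each analysis to run to completion
}, {
  requestTimeout: 3_600_000 // relax the client-side timeout as well (milliseconds)
})
console.log(report)
```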
/**
* Verifies the integrity of the contents of a snapshot repository
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
 * Verify the repository integrity. Verify the integrity of the contents of a snapshot repository. This API enables you to perform a comprehensive check of the contents of a repository, looking for any anomalies in its data or metadata which might prevent you from restoring snapshots from the repository or which might cause future snapshot create or delete operations to fail. If you suspect the integrity of the contents of one of your snapshot repositories, cease all write activity to this repository immediately, set its `read_only` option to `true`, and use this API to verify its integrity. Until you do so: * It may not be possible to restore some snapshots from this repository. * Searchable snapshots may report errors when searched or may have unassigned shards. * Taking snapshots into this repository may fail or may appear to succeed but create a snapshot which cannot be restored. * Deleting snapshots from this repository may fail or may appear to succeed but leave the underlying data on disk. * Continuing to write to the repository while it is in an invalid state may cause additional damage to its contents. If the API finds any problems with the integrity of the contents of your repository, Elasticsearch will not be able to repair the damage. The only way to bring the repository back into a fully working state after its contents have been damaged is by restoring its contents from a repository backup which was taken before the damage occurred. You must also identify what caused the damage and take action to prevent it from happening again. If you cannot restore a repository backup, register a new repository and use this for all future snapshot operations. In some cases it may be possible to recover some of the contents of a damaged repository, either by restoring as many of its snapshots as needed and taking new snapshots of the restored data, or by using the reindex API to copy data from any searchable snapshots mounted from the damaged repository. Avoid all operations which write to the repository while the verify repository integrity API is running. If something changes the repository contents while an integrity verification is running then Elasticsearch may incorrectly report having detected some anomalies in its contents due to the concurrent writes. It may also incorrectly fail to report some anomalies that the concurrent writes prevented it from detecting. NOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions. NOTE: This API may not work correctly in a mixed-version cluster.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/verify-repo-integrity-api.html | Elasticsearch API documentation}
*/

@@ -90,4 +90,4 @@ repositoryVerifyIntegrity(this: That, params: T.SnapshotRepositoryVerifyIntegrityRequest | TB.SnapshotRepositoryVerifyIntegrityRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotRepositoryVerifyIntegrityResponse>;

/**
* Restores a snapshot.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
 * Restore a snapshot. Restore a snapshot of a cluster or data streams and indices. You can restore a snapshot only to a running cluster with an elected master node. The snapshot repository must be registered and available to the cluster. The snapshot and cluster versions must be compatible. To restore a snapshot, the cluster's global metadata must be writable. Ensure there aren't any cluster blocks that prevent writes. The restore operation ignores index blocks. Before you restore a data stream, ensure the cluster contains a matching index template with data streams enabled. To check, use the index management feature in Kibana or the get index template API: ``` GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream ``` If no such template exists, you can create one or restore a cluster state that contains one. Without a matching index template, a data stream can't roll over or create backing indices. If your snapshot contains data from App Search or Workplace Search, you must restore the Enterprise Search encryption key before you restore the snapshot.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/restore-snapshot-api.html | Elasticsearch API documentation}
*/

@@ -98,4 +98,4 @@ restore(this: That, params: T.SnapshotRestoreRequest | TB.SnapshotRestoreRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotRestoreResponse>;
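A hedged sketch of a targeted restore, reusing the `client` from the earlier sketch; the repository, snapshot, and index names are hypothetical, and the rename fields are the standard restore body options for avoiding collisions with live indices.

```ts
// Restore two indices from an existing snapshot, renaming them on the way in.
const restoreResp = await client.snapshot.restore({
  repository: 'my_repository',
  snapshot: 'snapshot_1',
  wait_for_completion: true,        // block until the restore finishes
  indices: 'index_1,index_2',
  rename_pattern: '(.+)',           // capture each index name...
  rename_replacement: 'restored_$1' // ...and prefix it on restore
})
```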

/**
* Returns information about the status of a snapshot.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Get the snapshot status. Get a detailed description of the current state for each shard participating in the snapshot. Note that this API should be used only to obtain detailed shard-level information for ongoing snapshots. If this detail is not needed or you want to obtain information about one or more existing snapshots, use the get snapshot API. WARNING: Using the API to return the status of any snapshots other than currently running snapshots can be expensive. The API requires a read from the repository for each shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards each, an API request that includes all snapshots will require 100,000 reads (100 snapshots x 1,000 shards). Depending on the latency of your storage, such requests can take an extremely long time to return results. These requests can also tax machine resources and, when using cloud storage, incur high processing costs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-snapshot-status-api.html | Elasticsearch API documentation}
*/

@@ -106,4 +106,4 @@ status(this: That, params?: T.SnapshotStatusRequest | TB.SnapshotStatusRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotStatusResponse>;
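Given the warning above about per-shard repository reads, a sketch that keeps the expensive form scoped to a single named snapshot (names hypothetical, `client` as before):

```ts
// Cheap: shard-level status of currently running snapshots only.
const running = await client.snapshot.status()

// Expensive: status of one completed snapshot; this reads from the repository.
const one = await client.snapshot.status({
  repository: 'my_repository',
  snapshot: 'snapshot_1'
})
```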

/**
* Verifies a repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/modules-snapshots.html | Elasticsearch API documentation}
* Verify a snapshot repository. Check for common misconfigurations in a snapshot repository.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/verify-snapshot-repo-api.html | Elasticsearch API documentation}
*/

@@ -110,0 +110,0 @@ verifyRepository(this: That, params: T.SnapshotVerifyRepositoryRequest | TB.SnapshotVerifyRepositoryRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SnapshotVerifyRepositoryResponse>;

@@ -265,6 +265,5 @@ "use strict";

async repositoryAnalyze(params, options) {
const acceptedPath = ['repository'];
const acceptedPath = ['name'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {

@@ -275,2 +274,3 @@ if (acceptedPath.includes(key)) {

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -280,7 +280,7 @@ }

const method = 'POST';
const path = `/_snapshot/${encodeURIComponent(params.repository.toString())}/_analyze`;
const path = `/_snapshot/${encodeURIComponent(params.name.toString())}/_analyze`;
const meta = {
name: 'snapshot.repository_analyze',
pathParts: {
repository: params.repository
name: params.name
}

@@ -287,0 +287,0 @@ };

@@ -18,3 +18,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Delete an async SQL search. Delete an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it.
* Delete an async SQL search. Delete an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it. If the Elasticsearch security features are enabled, only the following users can use this API to delete a search: * Users with the `cancel_task` cluster privilege. * The user who first submitted the search.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-async-sql-search-api.html | Elasticsearch API documentation}

@@ -26,3 +26,3 @@ */

/**
* Get async SQL search results. Get the current status and available results for an async SQL search or stored synchronous SQL search.
* Get async SQL search results. Get the current status and available results for an async SQL search or stored synchronous SQL search. If the Elasticsearch security features are enabled, only the user who first submitted the SQL search can retrieve the search using this API.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-async-sql-search-api.html | Elasticsearch API documentation}

@@ -48,3 +48,3 @@ */

/**
* Translate SQL into Elasticsearch queries. Translate an SQL search into a search API request containing Query DSL.
* Translate SQL into Elasticsearch queries. Translate an SQL search into a search API request containing Query DSL. It accepts the same request body parameters as the SQL search API, excluding `cursor`.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/sql-translate-api.html | Elasticsearch API documentation}

@@ -51,0 +51,0 @@ */

@@ -136,3 +136,3 @@ "use strict";

const acceptedPath = [];
const acceptedBody = ['catalog', 'columnar', 'cursor', 'fetch_size', 'filter', 'query', 'request_timeout', 'page_timeout', 'time_zone', 'field_multi_value_leniency', 'runtime_mappings', 'wait_for_completion_timeout', 'params', 'keep_alive', 'keep_on_completion', 'index_using_frozen'];
const acceptedBody = ['allow_partial_search_results', 'catalog', 'columnar', 'cursor', 'fetch_size', 'field_multi_value_leniency', 'filter', 'index_using_frozen', 'keep_alive', 'keep_on_completion', 'page_timeout', 'params', 'query', 'request_timeout', 'runtime_mappings', 'time_zone', 'wait_for_completion_timeout'];
const querystring = {};

@@ -139,0 +139,0 @@ // @ts-expect-error
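The reordered body list above also gains `allow_partial_search_results` in 8.17.1. A hedged sketch of passing it through `sql.query` (the index and query are hypothetical):

```ts
// SQL search that tolerates shard failures instead of failing the request.
const result = await client.sql.query({
  query: 'SELECT author, COUNT(*) FROM library GROUP BY author',
  fetch_size: 5,
  allow_partial_search_results: true // new accepted body field in 8.17.1
})
console.log(result.columns, result.rows)
```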

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Delete a synonym set.
* Delete a synonym set. You can only delete a synonyms set that is not in use by any index analyzer. Synonyms sets can be used in synonym graph token filters and synonym token filters. These synonym filters can be used as part of search analyzers. Analyzers need to be loaded when an index is restored (such as when a node starts, or the index becomes open). Even if the analyzer is not used on any field mapping, it still needs to be loaded on the index recovery phase. If any analyzers cannot be loaded, the index becomes unavailable and the cluster status becomes red or yellow as index shards are not available. To prevent that, synonyms sets that are used in analyzers can't be deleted. A delete request in this case will return a 400 response code. To remove a synonyms set, you must first remove all indices that contain analyzers using it. You can migrate an index by creating a new index that does not contain the token filter with the synonyms set, and use the reindex API in order to copy over the index data. Once finished, you can delete the index. When the synonyms set is not used in analyzers, you will be able to delete it.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/delete-synonyms-set.html | Elasticsearch API documentation}

@@ -41,3 +41,3 @@ */

* Get all synonym sets. Get a summary of all defined synonym sets.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/list-synonyms-sets.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/get-synonyms-set.html | Elasticsearch API documentation}
*/

@@ -48,3 +48,3 @@ getSynonymsSets(this: That, params?: T.SynonymsGetSynonymsSetsRequest | TB.SynonymsGetSynonymsSetsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.SynonymsGetSynonymsSetsResponse>;

/**
* Create or update a synonym set. Synonyms sets are limited to a maximum of 10,000 synonym rules per set. If you need to manage more synonym rules, you can create multiple synonym sets.
* Create or update a synonym set. Synonyms sets are limited to a maximum of 10,000 synonym rules per set. If you need to manage more synonym rules, you can create multiple synonym sets. When an existing synonyms set is updated, the search analyzers that use the synonyms set are reloaded automatically for all indices. This is equivalent to invoking the reload search analyzers API for all indices that use the synonyms set.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-synonyms-set.html | Elasticsearch API documentation}

@@ -56,3 +56,3 @@ */

/**
* Create or update a synonym rule. Create or update a synonym rule in a synonym set.
* Create or update a synonym rule. Create or update a synonym rule in a synonym set. If any of the synonym rules included is invalid, the API returns an error. When you update a synonym rule, all analyzers using the synonyms set will be reloaded automatically to reflect the new rule.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/put-synonym-rule.html | Elasticsearch API documentation}

@@ -59,0 +59,0 @@ */
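A sketch tying the synonym-set and synonym-rule endpoints together; the set and rule IDs are hypothetical, and per the descriptions above each update reloads the analyzers that use the set.

```ts
// Create (or replace) a synonyms set with a single rule.
await client.synonyms.putSynonym({
  id: 'my-synonyms',
  synonyms_set: [
    { id: 'rule-1', synonyms: 'hello, hi, howdy' }
  ]
})

// Update one rule in place; analyzers using the set reload automatically.
await client.synonyms.putSynonymRule({
  set_id: 'my-synonyms',
  rule_id: 'rule-1',
  synonyms: 'hello, hi, howdy, hiya'
})
```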

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Cancels a task, if it can be cancelled through an API.
* Cancel a task. WARNING: The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible. A task may continue to run for some time after it has been cancelled because it may not be able to safely stop its current activity straight away. It is also possible that Elasticsearch must complete its work on other tasks before it can process the cancellation. The get task information API will continue to list these cancelled tasks until they complete. The cancelled flag in the response indicates that the cancellation command has been processed and the task will stop as soon as possible. To troubleshoot why a cancelled task does not complete promptly, use the get task information API with the `?detailed` parameter to identify the other tasks the system is running. You can also use the node hot threads API to obtain detailed information about the work the system is doing instead of completing the cancelled task.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/tasks.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Get task information. Returns information about the tasks currently executing in the cluster.
* Get task information. Get information about a task currently running in the cluster. WARNING: The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible. If the task identifier is not found, a 404 response code indicates that there are no resources that match the request.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/tasks.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* The task management API returns information about tasks currently executing on one or more nodes in the cluster.
* Get all tasks. Get information about the tasks currently running on one or more nodes in the cluster. WARNING: The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible. **Identifying running tasks** The `X-Opaque-Id header`, when provided on the HTTP request header, is going to be returned as a header in the response as well as in the headers field for in the task information. This enables you to track certain calls or associate certain tasks with the client that started them. For example: ``` curl -i -H "X-Opaque-Id: 123456" "http://localhost:9200/_tasks?group_by=parents" ``` The API returns the following result: ``` HTTP/1.1 200 OK X-Opaque-Id: 123456 content-type: application/json; charset=UTF-8 content-length: 831 { "tasks" : { "u5lcZHqcQhu-rUoFaqDphA:45" : { "node" : "u5lcZHqcQhu-rUoFaqDphA", "id" : 45, "type" : "transport", "action" : "cluster:monitor/tasks/lists", "start_time_in_millis" : 1513823752749, "running_time_in_nanos" : 293139, "cancellable" : false, "headers" : { "X-Opaque-Id" : "123456" }, "children" : [ { "node" : "u5lcZHqcQhu-rUoFaqDphA", "id" : 46, "type" : "direct", "action" : "cluster:monitor/tasks/lists[n]", "start_time_in_millis" : 1513823752750, "running_time_in_nanos" : 92133, "cancellable" : false, "parent_task_id" : "u5lcZHqcQhu-rUoFaqDphA:45", "headers" : { "X-Opaque-Id" : "123456" } } ] } } } ``` In this example, `X-Opaque-Id: 123456` is the ID as a part of the response header. The `X-Opaque-Id` in the task `headers` is the ID for the task that was initiated by the REST request. The `X-Opaque-Id` in the children `headers` is the child task of the task that was initiated by the REST request.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/tasks.html | Elasticsearch API documentation}

@@ -30,0 +30,0 @@ */
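The curl example above translates to the client by passing the header through per-request options; a minimal sketch:

```ts
// List tasks grouped by parent, tagging the request with X-Opaque-Id.
const taskList = await client.tasks.list(
  { group_by: 'parents' },
  { headers: { 'X-Opaque-Id': '123456' } } // echoed back in each task's headers
)
```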

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get terms in an index. Discover terms that match a partial string in an index. This "terms enum" API is designed for low-latency look-ups used in auto-complete scenarios. If the `complete` property in the response is false, the returned terms set may be incomplete and should be treated as approximate. This can occur due to a few reasons, such as a request timeout or a node error. NOTE: The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
* Get terms in an index. Discover terms that match a partial string in an index. This API is designed for low-latency look-ups used in auto-complete scenarios. > info > The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/search-terms-enum.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */
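A short sketch of the auto-complete flow described above, checking the `complete` flag before trusting the result (the index and field are hypothetical):

```ts
// Prefix lookup against a keyword field.
const { terms, complete } = await client.termsEnum({
  index: 'stackoverflow',
  field: 'tags',
  string: 'kib' // partial string to match
})
if (!complete) {
  // The terms set may be incomplete (timeout or node error); treat as approximate.
}
console.log(terms)
```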

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Get term vector information. Get information and statistics about terms in the fields of a particular document.
* Get term vector information. Get information and statistics about terms in the fields of a particular document. You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the `fields` parameter or by adding the fields to the request body. For example: ``` GET /my-index-000001/_termvectors/1?fields=message ``` Fields can be specified using wildcards, similar to the multi match query. Term vectors are real-time by default, not near real-time. This can be changed by setting `realtime` parameter to `false`. You can request three types of values: _term information_, _term statistics_, and _field statistics_. By default, all term information and field statistics are returned for all fields but term statistics are excluded. **Term information** * term frequency in the field (always returned) * term positions (`positions: true`) * start and end offsets (`offsets: true`) * term payloads (`payloads: true`), as base64 encoded bytes If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors could be computed for documents not even existing in the index, but instead provided by the user. > warn > Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16. **Behaviour** The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use `routing` only to hit a particular shard.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-termvectors.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */
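A sketch of the stored-document form shown above, opting into positions, offsets, and the term statistics that are excluded by default (the index and document ID are hypothetical):

```ts
// Term vectors for one document's `message` field.
const tv = await client.termvectors({
  index: 'my-index-000001',
  id: '1',
  fields: ['message'],
  positions: true,       // term positions
  offsets: true,         // start/end offsets (UTF-16, per the warning above)
  term_statistics: true  // excluded by default
})
```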

@@ -11,17 +11,17 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Finds the structure of a text field in an index.
* Find the structure of a text field. Find the structure of a text field in an Elasticsearch index. This API provides a starting point for extracting further information from log messages already ingested into Elasticsearch. For example, if you have ingested data into a very simple index that has just `@timestamp` and message fields, you can use this API to see what common structure exists in the message field. The response from the API contains: * Sample messages. * Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields. * Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text. * Appropriate mappings for an Elasticsearch index, which you could use to ingest the text. All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters. If the structure finder produces unexpected results, specify the `explain` query parameter and an explanation will appear in the response. It helps determine why the returned structure was chosen.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/find-field-structure.html | Elasticsearch API documentation}
*/
findFieldStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
findFieldStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
findFieldStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
findFieldStructure(this: That, params: T.TextStructureFindFieldStructureRequest | TB.TextStructureFindFieldStructureRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.TextStructureFindFieldStructureResponse>;
findFieldStructure(this: That, params: T.TextStructureFindFieldStructureRequest | TB.TextStructureFindFieldStructureRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TextStructureFindFieldStructureResponse, unknown>>;
findFieldStructure(this: That, params: T.TextStructureFindFieldStructureRequest | TB.TextStructureFindFieldStructureRequest, options?: TransportRequestOptions): Promise<T.TextStructureFindFieldStructureResponse>;
/**
* Finds the structure of a list of messages. The messages must contain data that is suitable to be ingested into Elasticsearch.
 * Find the structure of text messages. Find the structure of a list of text messages. The messages must contain data that is suitable to be ingested into Elasticsearch. This API provides a starting point for ingesting data into Elasticsearch in a format that is suitable for subsequent use with other Elastic Stack functionality. Use this API rather than the find text structure API if your input text has already been split up into separate messages by some other process. The response from the API contains: * Sample messages. * Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields. * Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text. * Appropriate mappings for an Elasticsearch index, which you could use to ingest the text. All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters. If the structure finder produces unexpected results, specify the `explain` query parameter and an explanation will appear in the response. It helps determine why the returned structure was chosen.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/find-message-structure.html | Elasticsearch API documentation}
*/
findMessageStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
findMessageStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
findMessageStructure(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
findMessageStructure(this: That, params: T.TextStructureFindMessageStructureRequest | TB.TextStructureFindMessageStructureRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.TextStructureFindMessageStructureResponse>;
findMessageStructure(this: That, params: T.TextStructureFindMessageStructureRequest | TB.TextStructureFindMessageStructureRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TextStructureFindMessageStructureResponse, unknown>>;
findMessageStructure(this: That, params: T.TextStructureFindMessageStructureRequest | TB.TextStructureFindMessageStructureRequest, options?: TransportRequestOptions): Promise<T.TextStructureFindMessageStructureResponse>;
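Now that `findMessageStructure` is typed in 8.17.1, a sketch with pre-split log lines (the messages themselves are hypothetical):

```ts
// Infer structure from messages already split by another process.
const structure = await client.textStructure.findMessageStructure({
  messages: [
    '[2024-03-05T12:00:01] INFO service started',
    '[2024-03-05T12:00:02] WARN disk usage at 85%'
  ]
})
```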
/**
* Finds the structure of a text file. The text file must contain data that is suitable to be ingested into Elasticsearch.
* Find the structure of a text file. The text file must contain data that is suitable to be ingested into Elasticsearch. This API provides a starting point for ingesting data into Elasticsearch in a format that is suitable for subsequent use with other Elastic Stack functionality. Unlike other Elasticsearch endpoints, the data that is posted to this endpoint does not need to be UTF-8 encoded and in JSON format. It must, however, be text; binary text formats are not currently supported. The size is limited to the Elasticsearch HTTP receive buffer size, which defaults to 100 Mb. The response from the API contains: * A couple of messages from the beginning of the text. * Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields. * Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text. * Appropriate mappings for an Elasticsearch index, which you could use to ingest the text. All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/find-structure.html | Elasticsearch API documentation}

@@ -33,3 +33,3 @@ */

/**
* Tests a Grok pattern on some text.
* Test a Grok pattern. Test a Grok pattern on one or more lines of text. The API indicates whether the lines match the pattern together with the offsets and lengths of the matched substrings.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/test-grok-pattern.html | Elasticsearch API documentation}

@@ -36,0 +36,0 @@ */
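A sketch of the Grok tester, matching one hypothetical log line against a pattern built from standard Grok primitives:

```ts
// Report whether each line matches, with offsets of the captured substrings.
const grokResult = await client.textStructure.testGrokPattern({
  grok_pattern: '%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}',
  text: ['2024-03-05T12:00:01 INFO service started']
})
```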

@@ -35,3 +35,2 @@ "use strict";

const body = undefined;
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {

@@ -42,2 +41,3 @@ if (acceptedPath.includes(key)) {

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -55,10 +55,24 @@ }

const acceptedPath = [];
const acceptedBody = ['messages'];
const querystring = {};
const body = undefined;
params = params !== null && params !== void 0 ? params : {};
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -65,0 +79,0 @@ }

@@ -88,3 +88,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Upgrades all transforms. This API identifies transforms that have a legacy configuration format and upgrades them to the latest version. It also cleans up the internal data structures that store the transform state and checkpoints. The upgrade does not affect the source and destination indices. The upgrade also does not affect the roles that transforms use when Elasticsearch security features are enabled; the role used to read source data and write to the destination index remains unchanged.
* Upgrade all transforms. Transforms are compatible across minor versions and between supported major versions. However, over time, the format of transform configuration information may change. This API identifies transforms that have a legacy configuration format and upgrades them to the latest version. It also cleans up the internal data structures that store the transform state and checkpoints. The upgrade does not affect the source and destination indices. The upgrade also does not affect the roles that transforms use when Elasticsearch security features are enabled; the role used to read source data and write to the destination index remains unchanged. If a transform upgrade step fails, the upgrade stops and an error is returned about the underlying issue. Resolve the issue then re-run the process again. A summary is returned when the upgrade is finished. To ensure continuous transforms remain running during a major version upgrade of the cluster – for example, from 7.16 to 8.0 – it is recommended to upgrade transforms before upgrading the cluster. You may want to perform a recent cluster backup prior to the upgrade.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/upgrade-transforms.html | Elasticsearch API documentation}

@@ -91,0 +91,0 @@ */

@@ -9,3 +9,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

 * Throttle an update by query operation. Change the number of requests per second for a particular update by query operation. Rethrottling that speeds up the query takes effect immediately but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-update-by-query.html | Elasticsearch API documentation}
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-update-by-query.html#docs-update-by-query-rethrottle | Elasticsearch API documentation}
*/

@@ -12,0 +12,0 @@ export default function UpdateByQueryRethrottleApi(this: That, params: T.UpdateByQueryRethrottleRequest | TB.UpdateByQueryRethrottleRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.UpdateByQueryRethrottleResponse>;

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Update documents. Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.
 * Update documents. Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes. If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias: * `read` * `index` or `write` You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting `conflicts` to `proceed`. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than `max_docs` until it has successfully updated `max_docs` documents or it has gone through every document in the source query. NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number. While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back. **Throttling update requests** To control the rate at which update by query issues batches of update operations, you can set `requests_per_second` to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to turn off throttling. Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is 1000, so if `requests_per_second` is set to `500`: ``` target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds ``` Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth". **Slicing** Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts. Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards. 
Adding `slices` to `_update_by_query` just automates the manual process of creating sub-requests, which means it has some quirks: * You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices. * Fetching the status of the task for the request with `slices` only contains the status of completed slices. * These sub-requests are individually addressable for things like cancellation and rethrottling. * Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally. * Canceling the request with slices will cancel each sub-request. * Due to the nature of slices, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. * Parameters like `requests_per_second` and `max_docs` on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being updated. * Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time. If you're slicing manually or otherwise tuning automatic slicing, keep in mind that: * Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead. * Update performance scales linearly across available resources with the number of slices. Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources. **Update the document source** Update by query supports scripts to update the document source. As with the update API, you can set `ctx.op` to change the operation that is performed. Set `ctx.op = "noop"` if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the `noop` counter. Set `ctx.op = "delete"` if your script decides that the document should be deleted. The update by query operation deletes the document and increments the `deleted` counter. Update by query supports only `index`, `noop`, and `delete`. Setting `ctx.op` to anything else is an error. Setting any other field in `ctx` is an error. This API enables you to only modify the source of matching documents; you cannot move them.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-update-by-query.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */
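Pulling the pieces above together, a sketch that counts conflicts instead of failing, updates the source with a script, and lets Elasticsearch pick the slice count (the index, field, and script are hypothetical):

```ts
// Scripted update across matching documents, parallelized automatically.
const ubq = await client.updateByQuery({
  index: 'my-index-000001',
  conflicts: 'proceed', // count version conflicts rather than halting
  query: { term: { 'user.id': 'kimchy' } },
  script: { source: 'ctx._source.count++', lang: 'painless' },
  slices: 'auto'        // one slice per shard, up to a limit
})
console.log(ubq.updated, ubq.version_conflicts)
```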

@@ -8,3 +8,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Update a document. Updates a document by running a script or passing a partial document.
* Update a document. Update a document by running a script or passing a partial document. If the Elasticsearch security features are enabled, you must have the `index` or `write` index privilege for the target index or index alias. The script can update, delete, or skip modifying the document. The API also supports passing a partial document, which is merged into the existing document. To fully replace an existing document, use the index API. This operation: * Gets the document (collocated with the shard) from the index. * Runs the specified script. * Indexes the result. The document must still be reindexed, but using this API removes some network roundtrips and reduces chances of version conflicts between the GET and the index operation. The `_source` field must be enabled to use this API. In addition to `_source`, you can access the following variables through the `ctx` map: `_index`, `_type`, `_id`, `_version`, `_routing`, and `_now` (the current timestamp).
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-update.html | Elasticsearch API documentation}

@@ -11,0 +11,0 @@ */
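A sketch of the scripted path described above, with an `upsert` fallback for documents that do not exist yet (names hypothetical):

```ts
// Increment a counter, creating the document if it is missing.
await client.update({
  index: 'my-index-000001',
  id: '1',
  script: {
    source: 'ctx._source.counter += params.by',
    lang: 'painless',
    params: { by: 4 }
  },
  upsert: { counter: 1 } // initial document when none exists
})
```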

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Acknowledges a watch, manually throttling the execution of the watch's actions.
 * Acknowledge a watch. Acknowledging a watch enables you to manually throttle the execution of the watch's actions. The acknowledgement state of an action is stored in the `status.actions.<id>.ack.state` structure. IMPORTANT: If the specified watch is currently being executed, this API will return an error. The reason for this behavior is to prevent overwriting the watch status from a watch execution. Acknowledging an action throttles further executions of that action until its `ack.state` is reset to `awaits_successful_execution`. This happens when the condition of the watch is not met (the condition evaluates to false).
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-ack-watch.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* Activates a currently inactive watch.
* Activate a watch. A watch can be either active or inactive.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-activate-watch.html | Elasticsearch API documentation}

@@ -27,3 +27,3 @@ */

/**
* Deactivates a currently active watch.
* Deactivate a watch. A watch can be either active or inactive.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-deactivate-watch.html | Elasticsearch API documentation}

@@ -35,3 +35,3 @@ */

/**
* Removes a watch from Watcher.
 * Delete a watch. When the watch is removed, the document representing the watch in the `.watches` index is gone and it will never be run again. Deleting a watch does not delete any watch execution records related to this watch from the watch history. IMPORTANT: Deleting a watch must be done by using only this API. Do not delete the watch directly from the `.watches` index using the Elasticsearch delete document API. When Elasticsearch security features are enabled, make sure no write privileges are granted to anyone for the `.watches` index.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-delete-watch.html | Elasticsearch API documentation}

@@ -43,3 +43,3 @@ */

/**
* This API can be used to force execution of the watch outside of its triggering logic or to simulate the watch execution for debugging purposes. For testing and debugging purposes, you also have fine-grained control on how the watch runs. You can execute the watch without executing all of its actions or alternatively by simulating them. You can also force execution by ignoring the watch condition and control whether a watch record would be written to the watch history after execution.
 * Run a watch. This API can be used to force execution of the watch outside of its triggering logic or to simulate the watch execution for debugging purposes. For testing and debugging purposes, you also have fine-grained control on how the watch runs. You can run the watch without running all of its actions or alternatively by simulating them. You can also force execution by ignoring the watch condition and control whether a watch record would be written to the watch history after it runs. You can use the run watch API to run watches that are not yet registered by specifying the watch definition inline. This serves as a great tool for testing and debugging your watches prior to adding them to Watcher. When Elasticsearch security features are enabled on your cluster, watches are run with the privileges of the user that stored the watches. If your user is allowed to read index `a`, but not index `b`, then the exact same set of rules will apply during execution of a watch. When using the run watch API, the authorization data of the user that called the API will be used as a base, instead of the information of the user who stored the watch.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-execute-watch.html | Elasticsearch API documentation}

@@ -51,10 +51,10 @@ */

/**
* Retrieve settings for the watcher system index
* Get Watcher index settings. Get settings for the Watcher internal index (`.watches`). Only a subset of settings are shown, for example `index.auto_expand_replicas` and `index.number_of_replicas`.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-get-settings.html | Elasticsearch API documentation}
*/
getSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
getSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
getSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
getSettings(this: That, params?: T.WatcherGetSettingsRequest | TB.WatcherGetSettingsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.WatcherGetSettingsResponse>;
getSettings(this: That, params?: T.WatcherGetSettingsRequest | TB.WatcherGetSettingsRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.WatcherGetSettingsResponse, unknown>>;
getSettings(this: That, params?: T.WatcherGetSettingsRequest | TB.WatcherGetSettingsRequest, options?: TransportRequestOptions): Promise<T.WatcherGetSettingsResponse>;
/**
* Retrieves a watch by its ID.
* Get a watch.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-get-watch.html | Elasticsearch API documentation}

@@ -66,3 +66,3 @@ */

/**
* Creates a new watch, or updates an existing one.
* Create or update a watch. When a watch is registered, a new document that represents the watch is added to the `.watches` index and its trigger is immediately registered with the relevant trigger engine. Typically for the `schedule` trigger, the scheduler is the trigger engine. IMPORTANT: You must use Kibana or this API to create a watch. Do not add a watch directly to the `.watches` index by using the Elasticsearch index API. If Elasticsearch security features are enabled, do not give users write privileges on the `.watches` index. When you add a watch you can also define its initial active state by setting the *active* parameter. When Elasticsearch security features are enabled, your watch can index or search only on indices for which the user that stored the watch has privileges. If the user is able to read index `a`, but not index `b`, the same will apply when the watch runs.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-put-watch.html | Elasticsearch API documentation}

@@ -74,3 +74,3 @@ */
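A hedged sketch of registering a watch in an initially inactive state; the watch ID, schedule, and logging action are illustrative only, and the body fields match the accepted list shown further down in this diff (`trigger`, `input`, `actions`).

```ts
// Register a watch without activating it yet.
await client.watcher.putWatch({
  id: 'cluster_health_watch', // hypothetical watch ID
  active: false,              // initial active state, per the `active` parameter
  trigger: { schedule: { interval: '10m' } },
  input: { simple: { note: 'replace with a real search input' } },
  actions: {
    log_it: { logging: { text: 'watch {{ctx.watch_id}} fired' } }
  }
})
```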

/**
* Retrieves stored watches.
* Query watches. Get all registered watches in a paginated manner and optionally filter watches by a query. Note that only the `_id` and `metadata.*` fields are queryable or sortable.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-query-watches.html | Elasticsearch API documentation}

@@ -82,3 +82,3 @@ */

/**
* Starts Watcher if it is not already running.
* Start the watch service. Start the Watcher service if it is not already running.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-start.html | Elasticsearch API documentation}

@@ -90,3 +90,3 @@ */

/**
* Retrieves the current Watcher metrics.
 * Get Watcher statistics. This API always returns basic metrics. You can retrieve more metrics by using the `metric` parameter.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-stats.html | Elasticsearch API documentation}

@@ -98,3 +98,3 @@ */

/**
* Stops Watcher if it is running.
* Stop the watch service. Stop the Watcher service if it is running.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-stop.html | Elasticsearch API documentation}

@@ -106,9 +106,9 @@ */

/**
* Update settings for the watcher system index
* Update Watcher index settings. Update settings for the Watcher internal index (`.watches`). Only a subset of settings can be modified. This includes `index.auto_expand_replicas` and `index.number_of_replicas`.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/watcher-api-update-settings.html | Elasticsearch API documentation}
*/
updateSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>;
updateSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>;
updateSettings(this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>;
updateSettings(this: That, params?: T.WatcherUpdateSettingsRequest | TB.WatcherUpdateSettingsRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.WatcherUpdateSettingsResponse>;
updateSettings(this: That, params?: T.WatcherUpdateSettingsRequest | TB.WatcherUpdateSettingsRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.WatcherUpdateSettingsResponse, unknown>>;
updateSettings(this: That, params?: T.WatcherUpdateSettingsRequest | TB.WatcherUpdateSettingsRequest, options?: TransportRequestOptions): Promise<T.WatcherUpdateSettingsResponse>;
}
export {};
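Since `updateSettings` is now typed, a sketch of adjusting one of the two writable settings named above (the value is illustrative; the dotted keys follow the accepted body list shown later in this diff):

```ts
// Let the .watches index expand replicas between 0 and 1.
await client.watcher.updateSettings({
  'index.auto_expand_replicas': '0-1'
})
```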

@@ -188,2 +188,3 @@ "use strict";

else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -224,3 +225,3 @@ }

const acceptedPath = ['id'];
const acceptedBody = ['actions', 'condition', 'input', 'metadata', 'throttle_period', 'transform', 'trigger'];
const acceptedBody = ['actions', 'condition', 'input', 'metadata', 'throttle_period', 'throttle_period_in_millis', 'transform', 'trigger'];
const querystring = {};

@@ -371,10 +372,25 @@ // @ts-expect-error

const acceptedPath = [];
const acceptedBody = ['index.auto_expand_replicas', 'index.number_of_replicas'];
const querystring = {};
const body = undefined;
// @ts-expect-error
const userBody = params === null || params === void 0 ? void 0 : params.body;
let body;
if (typeof userBody === 'string') {
body = userBody;
}
else {
body = userBody != null ? { ...userBody } : undefined;
}
params = params !== null && params !== void 0 ? params : {};
for (const key in params) {
if (acceptedPath.includes(key)) {
if (acceptedBody.includes(key)) {
body = body !== null && body !== void 0 ? body : {};
// @ts-expect-error
body[key] = params[key];
}
else if (acceptedPath.includes(key)) {
continue;
}
else if (key !== 'body') {
// @ts-expect-error
querystring[key] = params[key];

@@ -381,0 +397,0 @@ }

@@ -11,3 +11,3 @@ import { Transport, TransportRequestOptions, TransportRequestOptionsWithMeta, TransportRequestOptionsWithOutMeta, TransportResult } from '@elastic/transport';

/**
* Provides general information about the installed X-Pack features.
* Get information. The information provided by the API includes: * Build information including the build number and timestamp. * License information about the currently installed license. * Feature information for the features that are currently enabled and available under the current license.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/info-api.html | Elasticsearch API documentation}

@@ -19,3 +19,3 @@ */

/**
* This API provides information about which features are currently enabled and available under the current license and some usage statistics.
* Get usage information. Get information about the features that are currently enabled and available under the current license. The API also provides some usage statistics.
* @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.17/usage-api.html | Elasticsearch API documentation}

@@ -22,0 +22,0 @@ */

@@ -162,3 +162,11 @@ "use strict";

if (options.enableMetaHeader) {
options.headers['x-elastic-client-meta'] = `es=${clientVersion},js=${nodeVersion},t=${transportVersion},hc=${nodeVersion}`;
let clientMeta = `es=${clientVersion},js=${nodeVersion},t=${transportVersion}`;
if (options.Connection === transport_1.UndiciConnection) {
clientMeta += `,un=${nodeVersion}`;
}
else {
// assumes HttpConnection
clientMeta += `,hc=${nodeVersion}`;
}
options.headers['x-elastic-client-meta'] = clientMeta;
}
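The new branch above distinguishes Undici from classic HTTP connections in the `x-elastic-client-meta` header. A sketch of exercising both paths, assuming `HttpConnection` is re-exported by the package as the client documentation describes:

```ts
import { Client, HttpConnection } from '@elastic/elasticsearch'

// Default transport (Undici): the meta header now ends with `,un=<nodeVersion>`.
const undiciClient = new Client({ node: 'http://localhost:9200' })

// Classic HTTP connection: the header keeps the `,hc=<nodeVersion>` suffix.
const httpClient = new Client({
  node: 'http://localhost:9200',
  Connection: HttpConnection
})
```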

@@ -165,0 +173,0 @@ this.name = options.name;

{
"name": "@elastic/elasticsearch",
"version": "8.17.0",
"versionCanary": "8.17.0-canary.0",
"version": "8.17.1",
"versionCanary": "8.17.1-canary.0",
"description": "The official Elasticsearch client for Node.js",

@@ -59,3 +59,3 @@ "main": "./index.js",

"devDependencies": {
"@elastic/request-converter": "8.16.1",
"@elastic/request-converter": "8.17.0",
"@sinonjs/fake-timers": "github:sinonjs/fake-timers#0bfffc1",

@@ -62,0 +62,0 @@ "@types/debug": "4.1.12",
