
elasticdump2
Tools for moving and saving indices.

(local)
npm install elasticdump
./bin/elasticdump
(global)
npm install elasticdump -g
elasticdump
elasticdump works by sending an input to an output. Both can be either an elasticsearch URL or a File.
Elasticsearch:
  {protocol}://{host}:{port}/{index}
  http://127.0.0.1:9200/my_index
File:
  {FilePath}
  /Users/evantahler/Desktop/dump.json
Stdio:
  $

You can then do things like:
# Copy an index from production to staging with mappings:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
# Backup an index to a gzip file using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz
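As a sketch of the reverse direction (assuming stdin input via $ behaves symmetrically to stdout output; the staging host is hypothetical), a gzipped dump can be streamed back in:
# Restore the gzipped dump into another index via stdin:
gunzip -c /data/my_index.json.gz \
| elasticdump \
--input=$ \
--output=http://staging.es.com:9200/my_index \
--type=data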
# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
--all=true \
--input=http://production-a.es.com:9200/ \
--output=/data/production.json
elasticdump \
--bulk=true \
--input=/data/production.json \
--output=http://production-b.es.com:9200/
# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody '{"query":{"term":{"username": "admin"}}}'
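Since a file can also serve as the input (as in the bulk example above), the saved query results can be loaded back into Elasticsearch; the staging host and index name below are hypothetical:
# Load the saved query results into another index:
elasticdump \
--input=query.json \
--output=http://staging.es.com:9200/admin_users \
--type=data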
If Elasticsearch is not being served from the root directory, the --input-index and
--output-index options are required. If they are not provided, the additional sub-directories will
be parsed for index and type.
Elasticsearch:
  {protocol}://{host}:{port}/{sub}/{directory...}
  http://127.0.0.1:9200/api/search

# Copy a single index from elasticsearch:
elasticdump \
--input=http://es.com:9200/api/search \
--input-index=my_index \
--output=http://es.com:9200/api/search \
--output-index=my_index \
--type=mapping
# Copy a single type:
elasticdump \
--input=http://es.com:9200/api/search \
--input-index=my_index/my_type \
--output=http://es.com:9200/api/search \
--output-index=my_index \
--type=mapping
# Backup ALL indices, then use Bulk API to populate another ES cluster:
# Notice, the single `/` is required to specify all indices.
elasticdump \
--all=true \
--input=http://production-a.es.com:9200/api/search \
--input-index=/ \
--output=/data/production.json
elasticdump \
--bulk=true \
--input=/data/production.json \
--output=http://production-b.es.com:9200/api/search \
--output-index=/
If you prefer using Docker to run elasticdump, you can clone this git repo and run:
docker build -t elasticdump .
Then you can use it with:
docker run --rm -ti elasticdump
Note that you cannot use localhost or 127.0.0.1 as your ES host ;) and that, to read or write dump files, you must mount a volume into your docker container with -v <your dumps dir>:<your mount point>.
Example:
# Copy an index from production to staging with mappings:
docker run --rm -ti elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
docker run --rm -ti elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file (i.e. stored in /tmp/myESdumps):
docker run --rm -ti -v /tmp/myESdumps:/data elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
docker run --rm -ti -v /tmp/myESdumps:/data elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
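As a follow-on sketch (staging.es.com is a hypothetical target), the same mounted volume can be used to restore the file dump through the Docker image:
# Restore the file dump from the mounted volume into another cluster:
docker run --rm -ti -v /tmp/myESdumps:/data elasticdump \
--input=/data/my_index.json \
--output=http://staging.es.com:9200/my_index \
--type=data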
Usage: elasticdump --input [SOURCE] --output [DESTINATION] [OPTIONS]
--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--limit
How many objects to move in bulk per operation
(default: 100)
--debug
Display the elasticsearch commands being used
(default: false)
--type
What are we exporting?
(default: data, options: [data, mapping])
--delete
Delete documents one-by-one from the input as they are
moved. Will not delete the source index
(default: false)
--searchBody
Perform a partial extract based on search results
(when ES is the input,
default: '{"query": { "match_all": {} } }')
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
default: false
--jsonLines
Do not include the leading '[', trailing ']' and separating ',' characters in the output
Note: Most useful in conjunction with sourceOnly to create a file with one JSON entry per line (see the combined example after this option list)
default: false
--all
Load/store documents from ALL indexes
(default: false)
--bulk
Leverage elasticsearch Bulk API when writing documents
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
How long the nodes will hold the requested search context open (scroll timeout).
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we make?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--bulk-use-output-index-name
Force use of destination index name (the actual output URL)
as destination while bulk writing to ES. Allows
leveraging Bulk API copying data inside the same
elasticsearch instance.
(default: false)
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. If a request times out
during bulk writing, the entire batch will not be written.
Mostly used when you don't care too much about losing some
data when importing but would rather have speed.
--skip
Integer containing the number of rows you wish to skip
ahead from the input transport. When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc. This allows you to
restart the dump from the last known line written (as
logged by the `offset` in the output). Please be advised
that since no sorting is specified when the dump is
initially created, there's no real way to guarantee that
the skipped rows have already been written/parsed. This is
more of an option for when you want to get as much data as
possible into the index without concern for losing some rows
in the process, similar to the `timeout` option (see the
resume example after this option list).
--inputTransport
Provide a custom js file to use as the input transport
--outputTransport
Provide a custom js file to use as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--help
This page
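The two sketches below combine several of the options above; the hosts, index names, file paths, and numeric values are hypothetical and should be adapted to your setup.
# One raw _source JSON document per line, restricted to a query:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_source.jsonl \
--type=data \
--sourceOnly=true \
--jsonLines=true \
--searchBody '{"query":{"term":{"username": "admin"}}}'
# Resume an interrupted bulk import, skipping rows already written (offset taken from the previous run's log):
elasticdump \
--bulk=true \
--input=/data/production.json \
--output=http://production-b.es.com:9200/ \
--skip=150000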
Elasticsearch provides a scan and scroll method to fetch all documents of an index. This method is much safer to use since it maintains the result set in cache for the given period of time. This means the data is exported much faster and, more importantly, the result set is kept in order, so batching the dump will not produce duplicate documents: every document in the export is unique and none are missing.
NOTE: only works for output
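As a sketch of tuning this behavior (the values here are illustrative only), the scroll window and batch size can be adjusted for a large export:
# Keep the scroll context open longer and pull larger batches:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data \
--scrollTime=30m \
--limit=1000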
--input="http://localhost:9200/index") or a type of object from that index (--input="http://localhost:9200/index/type"). This requires ElasticSearch 1.2.0 or higher--input="http://localhost:9200/sub/directory --input-index=index/type"). Using --input-index=/ will include all indices and types.put method to write objects. This means new objects will be created and old objects with the same ID will be updatedfile transport will overwrite any existing files--input=http://name:password@production.es.com:9200/my_index--output=$), you can also request a more human-readable output with --format=human--output=$), all logging output will be suppressed--bulk option, aliases will be ignored and the documents you write will be linked thier original index name. For example if you have an alias "events" which contains "events-may-2015" and "events-june-2015" and you bulk dump from one ES cluster to another elasticdump --bulk --import http://localhost:9200/events --output http://other-server:9200, you will have the source indicies, "events-may-2015" and "events-june-2015", and not "events".Inspired by https://github.com/crate/elasticsearch-inout-plugin and https://github.com/jprante/elasticsearch-knapsack
Inspired by https://github.com/crate/elasticsearch-inout-plugin and https://github.com/jprante/elasticsearch-knapsack
Built at TaskRabbit
FAQs
import and export tools for elasticsearch
The npm package elasticdump2 receives a total of 20 weekly downloads. As such, elasticdump2 is classified as not popular.
We found that elasticdump2 demonstrates an unhealthy version release cadence and project activity: the last version was released a year ago. It has 1 open source maintainer collaborating on the project.