Elasticsearch Data Import Tool
Install Node.js if you haven't already. Version 11.10 or higher is required.
Then install the package globally:
npm i -g duckimport
or
yarn global add duckimport
duckimport <command>
You can see the available options with duckimport --help:
Usage: duckimport [options]

Options:
  -c, --config <path>          config file path
  -i, --inline <configString>  base64 encoded config object
  -h, --help                   output usage information

Examples:
  $ duckimport -c ./config.json
  $ duckimport -i NDJjNGVx........GZzZGY=
duckimport -c ./config.json
duckimport -i ewogICAgIm.....KfQ==
An example config file:
{
  "client": {
    "node": "http://localhost:9200"
  },
  "file": "bigFile.csv",
  "separator": ",",
  "columns": [
    "firstname",
    "lastname"
  ],
  "lines": 10000,
  "createNewIndex": true,
  "index": {
    "index": "peopleIndex",
    "body": {
      "settings": {
        "number_of_replicas": 0,
        "auto_expand_replicas": false
      },
      "mappings": {
        "properties": {
          "firstname": {
            "type": "keyword"
          },
          "lastname": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
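To make the mapping concrete, here is what one hypothetical row of bigFile.csv would become under this config (an illustration, not actual tool output):

```js
// Hypothetical line in bigFile.csv:
//   John,Doe
// Split on the "," separator and mapped onto the configured columns,
// it becomes this document in the "peopleIndex" index:
const doc = { firstname: "John", lastname: "Doe" };
```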
Config

You can pass a config file with duckimport -c <config file path>, or an inline base64 encoded config object with duckimport -i <base64 encoded config object>.
The config object has the following properties:

- client: options passed to the Elasticsearch client (in the example above, node is the cluster URL)
- file: path of the file to import
- separator: column separator used in the file
- columns: column names, in the order they appear in the file
- lines: number of lines sent to Elasticsearch in each chunk
- createNewIndex: whether to create a new index before importing
- index: the index configuration (name, settings, and mappings) used when createNewIndex is true
duckimport processes files of any size line by line and sends the documents to Elasticsearch in chunks, thanks to nexline.
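Conceptually, the import loop looks something like the sketch below. It assumes the @elastic/elasticsearch client and the config shape shown above; importFile and flush are illustrative names, not duckimport's actual source:

```js
const fs = require("fs");
const nexline = require("nexline");
const { Client } = require("@elastic/elasticsearch");

// Sketch: read the file line by line, buffer `lines` documents,
// and flush each buffer to Elasticsearch with one bulk request.
async function importFile(config) {
  const client = new Client(config.client);
  if (config.createNewIndex) await client.indices.create(config.index);

  const reader = nexline({ input: fs.createReadStream(config.file) });
  let buffer = [];

  const flush = async () => {
    if (buffer.length === 0) return;
    // A bulk body alternates action lines and document lines
    const body = buffer.flatMap((doc) => [{ index: { _index: config.index.index } }, doc]);
    await client.bulk({ body });
    buffer = [];
  };

  for (let line = await reader.next(); line !== null; line = await reader.next()) {
    const values = line.split(config.separator);
    const doc = {};
    config.columns.forEach((name, i) => (doc[name] = values[i]));
    buffer.push(doc);
    if (buffer.length >= config.lines) await flush();
  }
  await flush(); // send any leftover lines
}
```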
FAQ

What does lps mean?
Lines Per Second. It shows how many lines of your file are processed per second.
How can I pass the config inline?
Use the -i flag with a base64 encoded config string. Prepare your config object (JSON or a JS object), encode it with base64, and duckimport will decode and process it.
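For example, you could produce the encoded string with a few lines of Node.js (a sketch; any base64 tool works):

```js
// Encode a config object for the -i flag
const config = {
  client: { node: "http://localhost:9200" },
  file: "bigFile.csv",
  separator: ",",
  columns: ["firstname", "lastname"],
  lines: 10000,
};
const encoded = Buffer.from(JSON.stringify(config)).toString("base64");
console.log(`duckimport -i ${encoded}`);
```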
Duck icon made by Freepik from http://www.flaticon.com/