I want to upload a single CSV file to Elasticsearch using just the Kibana UI. The file should go through an ingest pipeline, and the index needs custom settings. I am currently doing this with a complicated manual process and am wondering whether it can be streamlined.
I have this CSV data file on my local machine:

```
color,size,description
blue,1.5,the cat is happy
red,2.8,the dogs are sad
yellow,3.4,the 2 birds are sleepy
```
I want to upload it to an index with the following settings:

```
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "stemmer",
            "stop"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "color": {
        "type": "keyword"
      },
      "size": {
        "type": "double"
      },
      "description": {
        "type": "text"
      }
    }
  }
}
```
Note that the `description` field is of type `text` and that I specify a custom default analyzer.
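For reference, creating an index with these settings from the Dev Console looks roughly like this (same settings as above, just compacted; the index name `test` matches my steps further down):

```
PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "tokenizer": "standard",
          "filter": ["lowercase", "stemmer", "stop"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "color": { "type": "keyword" },
      "size": { "type": "double" },
      "description": { "type": "text" }
    }
  }
}
```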
I want to ingest the data through the following pipeline:

```
{
  "description": "Test Ingestion Pipeline",
  "processors": [
    {
      "trim": {
        "field": "color",
        "ignore_missing": true
      }
    },
    {
      "trim": {
        "field": "description",
        "ignore_missing": true
      }
    }
  ]
}
```
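Creating this pipeline from the Dev Console looks roughly like so (the pipeline name `test-pipeline` matches my steps below):

```
PUT _ingest/pipeline/test-pipeline
{
  "description": "Test Ingestion Pipeline",
  "processors": [
    { "trim": { "field": "color", "ignore_missing": true } },
    { "trim": { "field": "description", "ignore_missing": true } }
  ]
}
```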
I don't have Logstash or Beats set up, and I don't want to use the Python client. I'd like to do everything through the Kibana UI.
Here is what I currently do:

- Use the Dev console to create a `test` index with the above custom configuration.
- Use the Dev console to create a `test-pipeline` ingest pipeline with the above configuration.
- Use the Upload File integration to upload the CSV file from my local machine to an index called `test-input`.
- Use the Dev console to reindex `test-input` into `test` using the `test-pipeline` pipeline (the reindex and cleanup requests are sketched after this list).

I then delete `test-input` and work with `test`.
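The reindex and cleanup steps look roughly like this in the Dev Console:

```
# copy the staging data into the target index, running it through the pipeline
POST _reindex
{
  "source": { "index": "test-input" },
  "dest": { "index": "test", "pipeline": "test-pipeline" }
}

# remove the staging index once the reindex has finished
DELETE test-input
```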
This works, but it is complicated and error-prone. How can I make the process simpler?