I want to ingest CSV data, but the string fields come out with double quotation marks in the output.
Below is an example of my data:
2019-02-14 16:10:19,"Mike","Foster","M","24"
Here's the pipeline:
PUT _ingest/pipeline/sample
{
  "description" : "Sample Pipeline",
  "processors" : [
    {
      "grok" : {
        "field" : "message",
        "patterns" : ["%{DATA:logtime},%{DATA:first_name},%{DATA:last_name},%{DATA:age}"]
      }
    }
  ]
}
And here is the output:
"_source": {
"logtime": "\"2019-02-27T12:32:33.768Z\"",
"first_name": "\"Mike\"",
"last_name": "\"Foster\"",
"age": "\"24\""
}
I want to get rid of the double quotes.
TIA
dadoonet
(David Pilato)
February 27, 2019, 5:38pm
2
Maybe include the \" in your grok pattern so the quotes are matched outside the captured fields.
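For example, something like this might work (just a sketch; the gender capture is my assumption for the fifth column of your sample line, it is not in your original pipeline):
PUT _ingest/pipeline/sample
{
  "description" : "Sample Pipeline",
  "processors" : [
    {
      "grok" : {
        "field" : "message",
        "patterns" : ["%{DATA:logtime},\"%{DATA:first_name}\",\"%{DATA:last_name}\",\"%{DATA:gender}\",\"%{DATA:age}\""]
      }
    }
  ]
}
The \" before and after each capture matches the literal quote characters in the CSV line, so they should no longer end up in the extracted values.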
BTW, it would be better to use the dissect processor; it should be faster: https://www.elastic.co/guide/en/elasticsearch/reference/current/dissect-processor.html
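Untested, but a dissect version of the same pipeline could look roughly like this (again, the gender field is an assumption based on your sample line):
PUT _ingest/pipeline/sample
{
  "description" : "Sample Pipeline",
  "processors" : [
    {
      "dissect" : {
        "field" : "message",
        "pattern" : "%{logtime},\"%{first_name}\",\"%{last_name}\",\"%{gender}\",\"%{age}\""
      }
    }
  ]
}
Dissect treats everything outside %{} as literal delimiters, so the surrounding quotes are consumed rather than captured.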
Also have a look at the CSV processor: https://github.com/johtani/elasticsearch-ingest-csv
Thanks for the input @dadoonet. I already tried the \" but I encountered an error. I'll try the other two. Thanks.
dadoonet
(David Pilato)
March 4, 2019, 10:46am
4
Could you share what you did?
A full _simulate example would make it easier to help you.
See for example: Ingest pipeline - multiple fields processed by one processor - #7 by dadoonet
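To illustrate, a _simulate request built from the pipeline and sample line in your first post would look roughly like this:
POST _ingest/pipeline/_simulate
{
  "pipeline" : {
    "description" : "Sample Pipeline",
    "processors" : [
      {
        "grok" : {
          "field" : "message",
          "patterns" : ["%{DATA:logtime},%{DATA:first_name},%{DATA:last_name},%{DATA:age}"]
        }
      }
    ]
  },
  "docs" : [
    {
      "_source" : {
        "message" : "2019-02-14 16:10:19,\"Mike\",\"Foster\",\"M\",\"24\""
      }
    }
  ]
}
Sharing that together with the exact error you got should make the problem easier to spot.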
system
(system)
Closed
April 1, 2019, 10:56am
5
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.