Hi there!
I'm trying to ingest a JSON file from AWS S3 into Elasticsearch. I've created an index with the mapping I need, but Logstash adds @timestamp and @version, and puts the whole JSON file into message. How can I ingest only the content of the message field (the raw JSON)?
The thing is, if I add these Logstash fields to the index mapping, I won't be able to use the fields inside message the way I want to in Kibana, because they would be of "nested_json" type.
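To make the problem concrete, a document as currently indexed looks roughly like this (the values are made up and the raw JSON is shortened, just to show the shape):

{
  "@timestamp": "2020-06-15T10:00:00.000Z",
  "@version": "1",
  "message": "{\"callid\": \"abc-123\", \"calldate\": \"2020-06-15\", ...}"
}

What I want instead is the parsed fields (callid, calldate, etc.) at the top level of the document.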
This is my logstash.conf:
input {
  s3 {
    access_key_id => "XXX"
    secret_access_key => "XXXXX"
    region => "eu-west-1"
    bucket => "XXXXX"
    interval => "10"
  }
}

output {
  elasticsearch {
    hosts => ["XXXXX"]
    index => "indiceprueba1"
    user => "XXXXX"
    password => "XXXXX"
  }
}
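From reading the docs, I'm guessing the fix is something like a json codec on the s3 input, plus a mutate filter to drop the leftover fields, but I haven't verified that this works (and I don't know whether @timestamp can be removed at all):

input {
  s3 {
    access_key_id => "XXX"
    secret_access_key => "XXXXX"
    region => "eu-west-1"
    bucket => "XXXXX"
    interval => "10"
    codec => "json"    # parse each file as JSON instead of plain text?
  }
}

filter {
  mutate {
    # drop the extra fields Logstash adds; unsure whether "@timestamp" can go here too
    remove_field => ["@version", "message"]
  }
}

output {
  elasticsearch {
    hosts => ["XXXXX"]
    index => "indiceprueba1"
    user => "XXXXX"
    password => "XXXXX"
  }
}

Is that the right direction, or should I use a json filter on message instead?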
My index mapping looks like this:
PUT indiceprueba1
{
  "mappings": {
    "properties": {
      "callid": {
        "type": "text"
      },
      "cdidcli": {
        "type": "text"
      },
      "calldate": {
        "type": "date",
        "format": "year_month_day"
      },
      "conversation": {
        "type": "text"
      },
      "text_spk1": {
        "type": "text",
        "term_vector": "yes",
        "fielddata": true,
        "store": true
      },
      "text_spk2": {
        "type": "text",
        "term_vector": "yes",
        "fielddata": true,
        "store": true
      }
    }
  }
}
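And for reference, this is the kind of document I'd want to end up with in Elasticsearch (all values here are invented, just to show the shape; calldate follows the mapping's year_month_day format, i.e. yyyy-MM-dd):

{
  "callid": "abc-123",
  "cdidcli": "600123456",
  "calldate": "2020-06-15",
  "conversation": "full transcript here...",
  "text_spk1": "speaker 1 text...",
  "text_spk2": "speaker 2 text..."
}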
I hope I've explained everything clearly.
Thank you!
Brian