Simple example wanted: read JSON files and output to Elasticsearch

Hi! Basic stuff, but I cannot get this to work.
I have a directory which contains a bunch of JSON files (with new ones arriving every 10 minutes). The JSON files need to be read from the directory and sent to Elasticsearch.
I want to read each JSON file and output it as a new document in Elasticsearch. E.g. file 1234567.json would end up in the index "myindex" with document id "1234567" (the document content should be the whole JSON from the file). And I want to use samplePollTime as the time field in Elasticsearch/Kibana.

Current Logstash conf:

input {
  file {
    path => "/data-IN/*"
    sincedb_path => "/dev/null"
    ignore_older => 0
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{doc_id}"
    index => "myindex"
  }
}

But the stdout codec shows the JSON files being processed line by line, broken into smaller objects with a timestamp added for every contentN:
{
    "host" => "myhost",
    "path" => "data-IN",
    "@timestamp" => 2018-01-13T15:28:18.306Z,
    "@version" => "1",
    "message" => " \"content1\": {"
}

So it is breaking the content into smaller objects.

Files are named like epochtimestamp.json and contain:
{
  "content1": {
    "value": 29.697
  },
  "content2": {
    "ip": "10.10.10.10"
  },
  "content3": {
    "host": "somehost.domain.ex"
  },
  "samplePollTime": "2018-01-13T14:58:05.352Z",
  "notes": "more notes here"
}

Basically what I am saying is that Logstash seems to create a timestamped event from each object key, and I don't want that. I need a single document with samplePollTime as the time field.
(I'm describing this badly, hopefully you understand it :-))

The file input is normally line-oriented, so if you want to slurp the whole file at once you need to use a multiline codec. Examples of this have been posted here in the past. Then, use a json filter to parse the JSON payload.
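Something along these lines should work as a starting point. It's a minimal, untested sketch: the never-matching pattern "^Spalanzani" is just a trick to make the multiline codec glue all lines of a file into one event (with auto_flush_interval making sure that event actually gets flushed), and the grok capture for doc_id assumes your filenames really look like 1234567.json.

input {
  file {
    path => "/data-IN/*"
    sincedb_path => "/dev/null"
    # Slurp whole files: the pattern never matches, so every line is
    # appended to the previous one; auto_flush_interval emits the
    # accumulated event after 2 seconds of inactivity.
    codec => multiline {
      pattern => "^Spalanzani"
      negate => true
      what => "previous"
      auto_flush_interval => 2
    }
  }
}

filter {
  # Parse the whole file contents as JSON into top-level fields.
  json {
    source => "message"
  }
  # Use samplePollTime as the event's @timestamp.
  date {
    match => ["samplePollTime", "ISO8601"]
  }
  # Derive the document id from the filename, e.g. 1234567.json -> 1234567.
  grok {
    match => { "path" => "(?<doc_id>[0-9]+)\.json$" }
  }
  # The raw text isn't needed once it has been parsed.
  mutate {
    remove_field => ["message"]
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myindex"
    document_id => "%{doc_id}"
  }
}

With that in place each file becomes exactly one document, and since the date filter copies samplePollTime into @timestamp you can use @timestamp as the time field in Kibana (or pick samplePollTime itself when creating the index pattern).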


Ok, thanks!
