Logstash config not creating an index in Elasticsearch

Hey Elastic team, I've been trying to write a filter for JSON files so that they get indexed into Elasticsearch, but it's not creating an index. Logstash reports no error for the config file.
This is a sample of the JSON file:
{"asctime": "2019-08-13 06:17:45,132", "filename": "taskinstance.py", "lineno": 616, "levelname": "INFO", "message": "Dependencies all met for <TaskInstance: tutorial.print_date 2019-08-13T06:17:05.595788+00:00 [queued]>", "dag_id": "tutorial", "task_id": "print_date", "execution_date": "2019_08_13T06_06_05_595788", "try_number": "1"}
{"asctime": "2019-08-13 06:17:45,132", "filename": "taskinstance.py", "lineno": 834, "levelname": "INFO", "message": "\n--------------------------------------------------------------------------------", "dag_id": "tutorial", "task_id": "print_date", "execution_date": "2019_08_13T06_06_05_595788", "try_number": "1"}
{"asctime": "2019-08-13 06:17:45,132", "filename": "taskinstance.py", "lineno": 835, "levelname": "INFO", "message": "Starting attempt 1 of 2", "dag_id": "tutorial", "task_id": "print_date", "execution_date": "2019_08_13T06_06_05_595788", "try_number": "1"}
{"asctime": "2019-08-13 06:17:45,132", "filename": "taskinstance.py", "lineno": 836, "levelname": "INFO", "message": "\n--------------------------------------------------------------------------------", "dag_id": "tutorial", "task_id": "print_date", "execution_date": "2019_08_13T06_06_05_595788", "try_number": "1"}

There are multiple JSON files that have the same format.
This is my config file:
input {
  file {
    codec => multiline {
      pattern => '^{'
      negate => true
      what => previous
    }
    path => ["/Users/raks/Documents/test.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
  }
}

filter {
  mutate {
    replace => ["message", "%{message}}"]
    gsub => ['message', '\n', '']
  }
  if [message] =~ /^{.*}$/ {
    json { source => message }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "rakstest"
  }
}

Any help would be really appreciated. Thanks!!

The output I desire is that the fields present in the JSON get indexed in Elasticsearch so that I can visualise them in Kibana.
So I basically want asctime, filename, lineno, levelname, message, dag_id, task_id, execution_date, and try_number as fields in Kibana!

The input file you show has a complete JSON object on each line, so you can remove the multiline codec and the mutate. All you need is a json filter.
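For reference, here is a minimal sketch of a whole pipeline along those lines, reusing the path and index name from your original config (adjust to taste):

input {
  file {
    path => ["/Users/raks/Documents/test.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  json {
    source => "message"   # each line is one JSON document, so parse it directly
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "rakstest"
  }
}

Each line of your sample then parses into an event like this: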

"execution_date" => "2019_08_13T06_06_05_595788",
       "task_id" => "print_date",
      "filename" => "taskinstance.py",
        "lineno" => 616,
    "try_number" => "1",
        "dag_id" => "tutorial",
     "levelname" => "INFO",
       "asctime" => "2019-08-13 06:17:45,132"

Hey @Badger! Thank you for the response! I have a few doubts regarding this: the JSON file contains multiple documents, and I've provided just a sample. Will all the lines in the JSON file get considered? And I also need the message values from the JSON.

Hey @Badger, is this what needs to be changed in the config file?

Config file:

input {
  file {
    type => "json"
    path => "/Users/rakshith.bhat/Documents/*.json"
    start_position => "beginning"   # read existing files from the top
    sincedb_path => "/dev/null"     # don't persist read position between runs
  }
}
filter {
  json {
    source => "message"   # parse each line's JSON into top-level fields
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "jsontest"
  }
  stdout { codec => rubydebug }   # print each event for debugging
}

That looks right to me.
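
On your earlier question about the message values: the json filter writes the parsed keys onto the event root, so the message value from the JSON document ends up in the event's message field, which is what you asked for. If you ever want to keep the raw line as well, the filter's target option can nest the parsed fields instead; a sketch (the field name "parsed" is just an illustrative choice):

filter {
  json {
    source => "message"
    target => "parsed"   # parsed keys land under [parsed] instead of overwriting the root
  }
}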

Thanks @Badger! The solution works!
