I want to parse the JSON data below with Logstash and load it into Kibana.
Data
{"PhoneNumber":{"0":27817768541},"TimeStamp":{"0":1471433451000},"ncalls":{"0":20}}
can anyone help me with the logstash config file?
What have you tried so far?
input {
  file {
    path => ["C:/Users/akshay.patil/Desktop/test.log"]
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  grok {
    match => ["message", "\[%{WORD}:%{LOGLEVEL}\] %{TIMESTAMP_ISO8601:tstamp} :: %{GREEDYDATA:msg}"]
  }
  json {
    source => "message"
    target => "parsedJson"
  }
  mutate {
    add_field => {
      "PhoneNumber" => "%{[parsedJson][PhoneNumber]}"
      "TimeStamp" => "%{[parsedJson][TimeStamp]}"
      "NumberOfCalls" => "%{[parsedJson][ncalls]}"
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
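One thing to note about the sample data: each value is nested under a "0" key (this shape is typical of pandas `DataFrame.to_json` output), so the `add_field` references above would resolve to the whole sub-hash rather than the value. Assuming the events really arrive in that shape, the references would need one more level, roughly like this:

```
mutate {
  add_field => {
    "PhoneNumber"   => "%{[parsedJson][PhoneNumber][0]}"
    "TimeStamp"     => "%{[parsedJson][TimeStamp][0]}"
    "NumberOfCalls" => "%{[parsedJson][ncalls][0]}"
  }
}
```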
What does a full line in the file you are processing look like?
The recommended way to build up a config is to remove the elasticsearch output and output only to stdout. Start with a minimal config, e.g. a file input with the json codec, and inspect the result. Then add one filter at a time until the config is complete, continuously checking how the shape of the data changes.
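Such a minimal starting point might look like the sketch below. The path is taken from your config; the `sincedb_path => "NUL"` line is an optional Windows-specific trick that makes Logstash re-read the file on every run, which is convenient while testing (on Linux you would use "/dev/null" instead):

```
input {
  file {
    path => ["C:/Users/akshay.patil/Desktop/test.log"]
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
}
```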
How will the index get created?
Once the configuration is complete and the events look the way they should, switch from the stdout output to the elasticsearch plugin.
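As for index creation: the elasticsearch output creates the index automatically the first time an event is written to it, so no manual step is needed. Assuming Elasticsearch is running on localhost:9200 as in your config, the switch would look roughly like this:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```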