Hi,
I have a JSON file as input for Logstash, and it looks as follows:
[
  {
    "_index": "packets-2016-10-06",
    "_type": "pcap_file",
    "_score": null,
    "_source": {
      "layers": {
        "frame": {
          "frame.number": "1",
          "frame.len": "215"
        },
        "eth": {
          "eth.dst": {
            "eth.dst_resolved": "Broadcast",
            "eth.addr": "ff:ff:ff:ff:ff:ff"
          },
          "eth.src": {
            "eth.src_resolved": "value",
            "eth.addr": "ff:ff:ff:ff:ff:ff"
          }
        }
      }
    }
  },
  {
    "_index": "packets-2016-10-06",
    "_type": "pcap_file",
    "_score": null,
    "_source": {
      "layers": {
        "frame": {
          "frame.number": "2",
          "frame.len": "214"
        },
        "eth": {
          "eth.dst": {
            "eth.dst_resolved": "Broadcast",
            "eth.addr": "ff:ff:ff:ff:ff:ff"
          },
          "eth.src": {
            "eth.src_resolved": "value",
            "eth.addr": "ff:ff:ff:ff:ff:ff"
          }
        }
      }
    }
  }
]
However, in Kibana the message field contains the raw content of each individual line of the JSON file. What should I do so that these JSON fields become available as fields in Kibana?
What I really want is, in this example, just two events (one per object, each starting at the "_index" field), with all of that object's fields parsed out (frame.number, frame.len, eth.dst_resolved, and so on). Is that possible?
If needed, here is the pipeline I am using:
input {
  file {
    path => "C:\Users\NOTEBOOKRIC\Desktop\esse.txt"
    type => "json"
    codec => "json"
    start_position => "beginning"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
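In case it helps, here is an alternative input/filter section I have been considering but have not tested. It is a minimal sketch that assumes the whole file can be read as a single event via the multiline codec (a pattern that never matches, plus auto_flush_interval) and then split into one event per array element; the dummy pattern, the "doc" target field, and the sincedb_path value are my own placeholders, and the output section would stay the same as above:

input {
  file {
    # Forward slashes are safer for the file input on Windows.
    path => "C:/Users/NOTEBOOKRIC/Desktop/esse.txt"
    start_position => "beginning"
    # "NUL" on Windows so the file is re-read on every run.
    sincedb_path => "NUL"
    # The pattern never matches, so with negate => true every line is
    # appended to the previous one; auto_flush_interval then flushes
    # the accumulated lines as a single event holding the whole file.
    codec => multiline {
      pattern => "^__never_matches__"
      negate => true
      what => "previous"
      auto_flush_interval => 2
    }
  }
}

filter {
  # Parse the whole JSON array into a temporary field (the json
  # filter cannot place an array at the root of the event)...
  json {
    source => "message"
    target => "doc"
  }
  # ...then emit one event per element of that array.
  split {
    field => "doc"
  }
}

If this direction is right, each resulting event would carry its data under [doc] (for example [doc][_source][layers][frame][frame.number]), so I imagine some mutate/rename steps would still be needed on top of it.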
Thank you.