I'm using Filebeat, Logstash, and Kibana.
In Kibana I see events where the Logstash grok pattern apparently was not applied to some of the log lines. I can't spot any difference in the log lines themselves; the filtered and unfiltered lines look similar. I've added a sample below.
My log file looks like this (the first line was not filtered, the second line was filtered properly):
2017-08-08 23:57:40.625+0000 | INFO | CONVEN_XXXXXX | PROD | ae92e5992cf7a5e7 | bca227c9d4848566 | 9778 | [http-nio-5000-exec-6] | c.v.c.a.product.ProductServiceImpl | Returning Product Reviews Response
2017-08-08 23:57:40.625+0000 | INFO | CONVEN_XXXXXX | PROD | ae92e5992cf7a5e7 | bca227c9d4848566 | 9778 | [http-nio-5000-exec-6] | c.v.c.infrastructure.RestClient | GET XXXXProductReviews Response : {"data":[],"totalCount":0,"dateStamp":"2017-08-08T16:57:40+0000","errorCodes":[],"success":true,"vmid":"331024"}
My Logstash configuration is:
input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "(^%{TIMESTAMP_ISO8601})"
      negate => true
      what => "previous"
    }
  }
}
filter {
  if [type] == "mixlog" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \| %{LOGLEVEL:loglevel} \| %{DATA:module} \| %{DATA:environment} \| %{DATA:traceid} \| %{DATA:spanid} \| %{DATA:processid} \| \[%{DATA:thread}\] \| %{DATA:class} \| %{GREEDYDATA:message}" }
      add_field => {
        "received_at" => "%{@timestamp}"
        "received_from" => "%{host}"
      }
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSZ", "ISO8601" ]
    }
  }
}
output {
  amazon_es {
    hosts => ["xxxxxxxx.xxxxxx.xxxxxxxx.com"]
    region => "us-east-2"
  }
}
# test elasticsearch:
# curl -X GET 'https://xxxxxxxx.xxxxxx.xxxxxxxx.com/logstash-2017.05.25/_search?pretty&q=response:200'
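To rule out the grok pattern itself, I approximated it with a plain regex and ran it against both sample lines outside Logstash. This is only a rough sketch: Python's `re` is not a full grok engine, so `TIMESTAMP_ISO8601` is simplified and `DATA`/`GREEDYDATA` are approximated with lazy/greedy `.*`; the `class` field is renamed `klass` in the group name.

```python
import re

# Rough Python approximation of the grok pattern used in the filter above.
# DATA -> lazy .*?, GREEDYDATA -> greedy .*, TIMESTAMP_ISO8601 simplified.
GROK_APPROX = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+[+-]\d{4}) \| "
    r"(?P<loglevel>[A-Z]+) \| "
    r"(?P<module>.*?) \| "
    r"(?P<environment>.*?) \| "
    r"(?P<traceid>.*?) \| "
    r"(?P<spanid>.*?) \| "
    r"(?P<processid>.*?) \| "
    r"\[(?P<thread>.*?)\] \| "
    r"(?P<klass>.*?) \| "       # 'class' field from the grok pattern
    r"(?P<message>.*)$"
)

# The two sample lines from the log file above.
lines = [
    "2017-08-08 23:57:40.625+0000 | INFO | CONVEN_XXXXXX | PROD | "
    "ae92e5992cf7a5e7 | bca227c9d4848566 | 9778 | [http-nio-5000-exec-6] | "
    "c.v.c.a.product.ProductServiceImpl | Returning Product Reviews Response",
    "2017-08-08 23:57:40.625+0000 | INFO | CONVEN_XXXXXX | PROD | "
    "ae92e5992cf7a5e7 | bca227c9d4848566 | 9778 | [http-nio-5000-exec-6] | "
    "c.v.c.infrastructure.RestClient | GET XXXXProductReviews Response : "
    '{"data":[],"totalCount":0,"dateStamp":"2017-08-08T16:57:40+0000",'
    '"errorCodes":[],"success":true,"vmid":"331024"}',
]

for line in lines:
    m = GROK_APPROX.match(line)
    print(bool(m), m.group("klass") if m else None)
```

Both lines match this approximation, so the grok pattern does not appear to be the problem; that makes me suspect the multiline codec or the event routing instead.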
This is how the first line looks in Kibana. I don't even see the Beat properties (beat.hostname, beat.version, etc.).
This is how the second line looks in Kibana; this is the expected behavior.
Are there any configuration issues? Please help me!