Hi,
I'm struggling to get my head around getting my elasticsearch data into a format that will let me view the individual fields within log messages. (To give a better idea of what I'm after: I'd like to be able to view haproxy and nginx messages in a similar fashion to those shown around minute 4 of the awesome @timroes Kibana5 intro video here: https://www.youtube.com/watch?v=mMhnGjp8oOI )
I have read many articles and bits of documentation that get me part of the way there, but I'm still not sure I'm on the right track.
I am working on a centralised log server that receives logs from approximately 80 remote servers. The logs arrive via rsyslog (and hence each line has the ubiquitous "date" "server" "program" header prepended).
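For example, a forwarded haproxy line ends up looking roughly like this (hostnames, IPs and timings invented for illustration, but the shape is right):

```
Nov 19 14:02:31 web-01 haproxy[2153]: 10.0.0.5:49152 [19/Nov/2019:14:02:31.123] fe_http be_app/app-01 0/0/1/12/13 200 2750 - - ---- 1/1/0/0/0 0/0 "GET /index.html HTTP/1.1"
```

Everything up to and including `haproxy[2153]:` is the rsyslog header; the rest is the original haproxy payload.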
On the logging server itself, I have filebeat scanning these logs and passing the data to logstash, which forwards it on to elasticsearch. My ELK stack is v7.4.2.
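For context, the logstash pipeline is essentially the stock beats-to-elasticsearch setup, something along these lines (ports and hosts from memory, not copied verbatim):

```
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```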
So, after enabling the haproxy and nginx modules (and excluding those log files from the standard input in filebeat.yml), I restarted filebeat, only to find that my Kibana search results still didn't include named fields for the various parts of these messages (e.g. client IP, response code, etc.). After refreshing the fields via the management page, I can see some of the ones I expected as "hidden fields" in the "Available Fields" pane, but none of them actually appear in the documents returned by a search.
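To be clear about what I mean by "enabling the modules and excluding the files": I ran `filebeat modules enable haproxy nginx` and then added an exclusion to the existing log input in filebeat.yml, roughly like this (the path is illustrative, not my real layout):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/remote/*/*.log          # where rsyslog writes the forwarded logs
    exclude_files: ['haproxy\.log$', 'nginx.*\.log$']
```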
I do understand that logstash only works with a few of the filebeat modules out of the box (and haproxy and nginx are not among them), so I am looking into writing a grok filter to do the parsing myself, which leads me to my actual question.
My question(s): Am I right in thinking that the reason the fields are not being "broken out" is that the syslog header is still attached, and therefore logstash can't split the message into its parts before it is added to the elasticsearch index? Do I therefore need to remove the syslog "header" before sending the rest of the message line on to elasticsearch? Apologies for the complete newbie questions.
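To make that second question concrete, this is the direction I was thinking of going in logstash, i.e. peel off the syslog header first and then grok the remainder. Completely untested, the syslog_* field names are my own invention, and I'm assuming haproxy is logging in HTTP mode:

```
filter {
  # first pass: strip the rsyslog header, keeping the original payload in syslog_message
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }

  # second pass: parse the remaining haproxy payload
  if [syslog_program] == "haproxy" {
    grok {
      match => { "syslog_message" => "%{HAPROXYHTTPBASE}" }
    }
  }
}
```

(HAPROXYHTTPBASE looks like one of the patterns that ships with logstash, as far as I can tell; I'd presumably need something similar for the nginx lines.)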