Confused about accessing "fields" in Kibana

Hi,
I'm struggling to get my head around getting my Elasticsearch data into a format that lets me view the fields within log messages. (To get a better idea of what I'm after: I'd like to be able to view haproxy and nginx messages in a similar fashion to those shown at minute 4 of the awesome @timroes Kibana 5 intro video here: https://www.youtube.com/watch?v=mMhnGjp8oOI )
I have read many articles and docs that sort of explain this, but I'm still not sure I'm on the right track.
I am working on a centralised log server that receives logs from approximately 80 remote servers. They get there via rsyslog (and hence each line has the ubiquitous "date" "server" "program" header prepended).
On the logging server itself, I have Filebeat scanning these logs and passing data to Logstash for forwarding to Elasticsearch. My ELK stack is v7.4.2.

So, after enabling the haproxy and nginx modules (and excluding those file types from the standard log input in filebeat.yml), I restarted Filebeat, only to find that my search results still didn't include field names for the various parts of these messages (e.g. IP, response, etc.) in Kibana. After refreshing the fields via the management page, I can see some of the ones I expected listed as "hidden fields" in the "Available Fields" pane, but none of them appear inside the documents returned by a search.
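For context, here's roughly what that part of my setup looks like (paths and exclude patterns below are illustrative, not my exact config):

```yaml
# modules.d/nginx.yml (enabled via `filebeat modules enable nginx`)
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/remote/*/nginx-access.log"]   # illustrative path

# filebeat.yml -- keep the generic log input away from the module files
filebeat.inputs:
  - type: log
    paths:
      - /var/log/remote/*/*.log
    exclude_files: ['nginx', 'haproxy']   # regexes; illustrative
```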

I understand that Logstash can only work with a few of the Filebeat modules (and haproxy and nginx are not among them), so I am looking into writing a grok filter to do this, which leads me to my actual question.

My question(s): Am I right in thinking that the fields are not being "broken out" because the syslog header is still attached, so Logstash can't split the message into its parts before it is indexed into Elasticsearch? Do I therefore need to remove the syslog "header" before sending the rest of the message line on? Apologies for the complete newbie questions.
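For reference, the kind of grok filter I've been sketching to strip the header looks like this (completely untested, and the field names are just my guesses):

```
filter {
  grok {
    # Peel off the rsyslog "date server program" header and keep the
    # remainder of the line for further parsing.
    match => {
      "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:raw_message}"
    }
  }
  mutate {
    # Replace the original message with the header-less payload so any
    # later parsing sees the line in its original application format.
    replace => { "message" => "%{raw_message}" }
  }
}
```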

Hi

So the Filebeat message parsing should work if Filebeat shipped these logs directly from your servers. Since Filebeat is watching the logs that rsyslog produces, these logs differ from the format that the module parsing expects. So removing the headers via rsyslog might work; in any case you have to make sure that the format is identical to the original format being shipped. There is another problem you might encounter: you'd presumably also want the origin of the log messages (the source hosts/IPs) in Kibana, and I'm not sure that's possible with this architecture. You could evaluate some alternative architectures:
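If you want to try the rsyslog route, a minimal (untested) sketch of a template that writes only the raw message body, without the header, could look like this; in practice you'd want dynamic file names per host/program rather than one flat file:

```
# Illustrative rsyslog sketch: write only the original message body,
# dropping the "date server program" header. %msg:2:$% skips the
# leading space rsyslog keeps in the msg property.
template(name="RawMsg" type="string" string="%msg:2:$%\n")
*.* action(type="omfile" file="/var/log/remote/raw.log" template="RawMsg")
```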

Use Logstash to watch and parse the files that rsyslog writes. Then you're very flexible, but of course it's more effort.
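A minimal sketch of that, assuming nginx access logs in the default combined format sitting behind the rsyslog header (paths and hosts are placeholders):

```
input {
  file {
    path => "/var/log/remote/*/nginx-access.log"   # placeholder path
    start_position => "beginning"
  }
}

filter {
  grok {
    # Consume the rsyslog header, then parse the nginx access line with
    # the standard combined log format pattern.
    match => {
      "message" => "%{SYSLOGTIMESTAMP} %{SYSLOGHOST:origin_host} %{DATA:program}: %{COMBINEDAPACHELOG}"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder
  }
}
```

A nice side effect: the origin host from the rsyslog header ends up in its own field (origin_host above), which also covers the "where did this log come from" problem.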

Use Logstash to receive the syslogs directly; there's a syslog input for that, just wanted to point that out.
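Sketch (the port is a placeholder; ports below 1024 would need elevated privileges):

```
input {
  syslog {
    # The syslog input parses the RFC3164 header into fields
    # (timestamp, logsource, program, ...) and leaves the original
    # payload in the message field for your own grok filters.
    port => 5514   # placeholder
  }
}
```

You'd then point the remote rsyslog instances at this port instead of (or in addition to) writing local files.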

Or you might install Filebeat on your servers and forward those logs directly to Elasticsearch, skipping rsyslog (or forward to Logstash, which could also write to your filesystem if that's a requirement).
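A sketch of the Filebeat side on a remote server (hosts are placeholders); you'd also run `filebeat setup` once so the index template and dashboards get loaded:

```yaml
# filebeat.yml on each remote server -- illustrative sketch
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml   # haproxy/nginx enabled here

output.elasticsearch:
  hosts: ["your-es-host:9200"]   # placeholder

# or, to keep Logstash in the picture:
# output.logstash:
#   hosts: ["your-logstash-host:5044"]
```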

There are many possible ways to succeed here.

Happy to help if you've got more questions (no way these are newbie questions, and even if they were: we're all newbies from time to time, and we leave the newbie state by asking questions :slight_smile:)

Best,
Matthias
