Trouble with ESXi filters in Logstash

Hello, I'm having an issue with ESXi filters in Logstash: I'm getting a grok parse error even after following the current documentation. Here are my config and the error:

Two general comments that may or may not be related to your problem but should nevertheless be addressed:

  • Logstash complains about two obsolete configuration settings that you're using. Address them.
  • What's the %{GREEDYDATA:esxi_message}|%{GREEDYDATA} at the end of the expression supposed to mean?

To debug the grok expression, start simple at first:

(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601})
Does that work? Good, add the next token:

(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) (?:%{SYSLOGHOST:logsource})

Continue until you've found the culprit.
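As a sketch of what each debugging step looks like in context, assuming the raw line is in the default message field (the surrounding filter block is an assumption; only the match string changes between steps):

```
filter {
  grok {
    match => {
      # Step 1: match only the leading timestamp.
      # Once this works, append the next token (e.g. %{SYSLOGHOST:logsource})
      # to the same string and rerun.
      "message" => "(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601})"
    }
  }
}
```

The first step that produces a _grokparsefailure tag points at the token that doesn't match your input.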

You haven't told us what your input log lines look like, so it's impossible to give more specific help.

It seems the issue was a "type" line inside the grok filter; removing it appears to have fixed the problem for the moment.

We have another problem. We are currently running some log files through the ELK stack with this configuration in Logstash, and getting this output (in JSON format) in Kibana:

We are wondering why the timestamp and host shown aren't the ones in the syslog message; instead they show the hostname of my VM and the time the event was indexed.

You're not getting the right hostname because you're not extracting the hostname from the message in your grok expression, so Logstash defaults the host field to the current host (debian8x64).

You're extracting the timestamp directly from the syslog message into @timestamp, but I'm not sure that's kosher. Nowadays the @timestamp field is expected to be a LogStash::Timestamp object, not a string. I suggest you extract the timestamp into a separate field and feed that field to the date filter, so that you're guaranteed to get a compatible value in @timestamp.
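A minimal sketch of that approach, assuming the syslog timestamp was captured by grok into a field named timestamp (the format strings here are assumptions; adjust them to match your actual log lines):

```
filter {
  date {
    # Parse the extracted string field and write the result to @timestamp.
    # Syslog timestamps use one or two spaces before single-digit days,
    # hence the two MMM patterns; ISO8601 covers the other grok branch.
    match => ["timestamp", "MMM  d HH:mm:ss", "MMM d HH:mm:ss", "ISO8601"]
  }
}
```

If the date filter can't parse the field it tags the event with _dateparsefailure instead of clobbering @timestamp, which makes bad format strings easy to spot in Kibana.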