System logs with ISO8601 timestamps

When using ISO8601 timestamps with Filebeat, I am getting this error:

Provided Grok expressions do not match field value: [2018-05-08T22:50:15.322108-05:00 webhost sshd[24868]: Failed password for root from XX.7.26.60 port 34452 ssh2]

This shows up in all of the system-related logs (/var/log/messages and /var/log/secure).
I've done some testing, and it appears that if I disable ISO8601 timestamps, everything works without the grok error.
I'm not quite sure how to get Filebeat to read everything with these timestamps. Has anyone had any success doing this? Or are ISO8601 timestamps even supported with Filebeat?

Filebeat doesn't do the grokking. Do you mean this is happening in Logstash?

I have Filebeat set to send directly to Elasticsearch; I'm not using Logstash for this. I assumed the system module's pipeline was where the grokking was done.
From what I see, the Apache logs go in fine; it's just the system logs that do not.

Filtering is only done by Logstash.

I'm not sure I understand, then. How are my Apache logs coming through just fine without Logstash?
Is there a document I can review to understand this better?
I just don't understand how sending Filebeat directly into Elasticsearch works for Apache logs but fails with the grok error on the system logs with ISO8601 timestamps. I don't have Logstash in place for any of these, yet it appears to work, at least for the Apache logs.

Hmm... looking at the documentation, it says you can do what I'm doing:
item #2

So I've done a bit of digging and found this:
It appears that Logstash will update Elasticsearch with the pipeline that is in /usr/share/logstash/modules/xxx/xxx/pipeline.json.
The loaded pipelines can be queried with: curl http://localhost:9200/_ingest/pipeline/
This will list everything.
I found that Filebeat creates pipelines with names in the format filebeat-6.2.4-system-syslog-pipeline.
You can modify one of these by posting to the pipeline endpoint.

What I read is that, using the ingest-agent plugin in Elasticsearch, it is able to index and grok these.

Does having multiple pipelines that appear to fulfill the same purpose, like syslog, create issues? It appears I have two pipelines for syslog: one for version 6.2.3, the other for 6.2.4. Both groks are roughly the same.
An example of one of the syslog ones would be this:

"filebeat-6.2.4-system-syslog-pipeline": {
  "processors": [
    {
      "grok": {
        "ignore_missing": true,
        "field": "message",
        "patterns": [
          "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\\[%{POSINT:system.syslog.pid}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
          "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
        ],
        "pattern_definitions": { "GREEDYMULTILINE": "(.|\n)*" }
      }
    },
    { "remove": { "field": "message" } },
    {
      "date": {
        "field": "system.syslog.timestamp",
        "target_field": "@timestamp",
        "formats": ["ISO8601", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"],
        "ignore_failure": true
      }
    }
  ],
  "on_failure": [
    { "set": { "field": "error.message", "value": "{{ _ingest.on_failure_message }}" } }
  ],
  "description": "Pipeline for parsing Syslog messages."
}
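One way to reproduce the grok failure without touching live data is Elasticsearch's ingest `_simulate` API. This is only a sketch: the `python3 -m json.tool` validation step is my own convenience, the host/port is assumed to be localhost:9200 as elsewhere in this thread, and the curl call is shown as a comment because it needs a live cluster.

```shell
# Build a _simulate request body containing the failing line from /var/log/secure
cat > simulate.json <<'EOF'
{
  "docs": [
    {
      "_source": {
        "message": "2018-05-08T22:50:15.322108-05:00 webhost sshd[24868]: Failed password for root from XX.7.26.60 port 34452 ssh2"
      }
    }
  ]
}
EOF

# Sanity-check the body before sending it
python3 -m json.tool simulate.json > /dev/null && echo "body OK"

# Run the document through the pipeline (assumes Elasticsearch on localhost:9200).
# With the stock SYSLOGTIMESTAMP patterns, the response carries error.message
# instead of parsed system.syslog.* fields:
# curl -X POST "localhost:9200/_ingest/pipeline/filebeat-6.2.4-system-syslog-pipeline/_simulate" \
#      -H 'Content-Type: application/json' --data-binary @simulate.json
```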

OK, I finally figured it out.

I had to remove the syslog pipeline in Elasticsearch and replace it.

curl -X DELETE "http://localhost:9200/_ingest/pipeline/filebeat-6.2.4-system-syslog-pipeline"

Then I took what was there, replaced all of the "SYSLOGTIMESTAMP" patterns with "TIMESTAMP_ISO8601", and made sure "ISO8601" was in the date processor's formats.
Here is what I posted to get it to work:

curl -X PUT "localhost:9200/_ingest/pipeline/filebeat-6.2.4-system-syslog-pipeline" -H 'Content-Type: application/json' -d'
{
  "processors": [
    {
      "grok": {
        "ignore_missing": true,
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\\[%{POSINT:system.syslog.pid}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
          "%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
        ],
        "pattern_definitions": { "GREEDYMULTILINE": "(.|\n)*" }
      }
    },
    { "remove": { "field": "message" } },
    {
      "date": {
        "field": "system.syslog.timestamp",
        "target_field": "@timestamp",
        "formats": ["ISO8601", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"],
        "ignore_failure": true
      }
    }
  ],
  "on_failure": [
    { "set": { "field": "error.message", "value": "{{ _ingest.on_failure_message }}" } }
  ],
  "description": "Pipeline for parsing Syslog messages."
}'
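The pattern swap itself can also be scripted instead of edited by hand. A minimal sketch, with the caveats that GNU sed and python3 are my own additions (not part of the original steps), the pipeline body here is a cut-down stand-in so the sketch is self-contained, and the curl calls are commented out because they need a live cluster:

```shell
# Stand-in for the pipeline body you dumped from Elasticsearch;
# in practice this would be the full JSON object shown above
cat > syslog-pipeline.json <<'EOF'
{
  "description": "stand-in pipeline body for this sketch",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
        ],
        "pattern_definitions": { "GREEDYMULTILINE": "(.|\n)*" }
      }
    }
  ]
}
EOF

# Swap every SYSLOGTIMESTAMP pattern for TIMESTAMP_ISO8601 in place
sed -i 's/SYSLOGTIMESTAMP/TIMESTAMP_ISO8601/g' syslog-pipeline.json

# Sanity-check: still valid JSON, and the new pattern is present
python3 -m json.tool syslog-pipeline.json > /dev/null && echo "pipeline OK"
grep 'TIMESTAMP_ISO8601' syslog-pipeline.json

# Then delete and re-create the pipeline (assumes ES on localhost:9200):
# curl -X DELETE "localhost:9200/_ingest/pipeline/filebeat-6.2.4-system-syslog-pipeline"
# curl -X PUT "localhost:9200/_ingest/pipeline/filebeat-6.2.4-system-syslog-pipeline" \
#      -H 'Content-Type: application/json' --data-binary @syslog-pipeline.json
```

Using --data-binary with a file avoids shell-quoting problems when the body contains backslashes, which grok patterns usually do.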

So... going against what had been stated earlier in this thread by the people who responded: yes, you can go directly from Filebeat into Elasticsearch without Logstash.
I'm only posting this so that anyone else who runs across this and has the same issue is not led in the wrong direction.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.