Ensuring order for syslog events

Hello,

I have a problem with the ordering of syslog events once they enter Elasticsearch.

The problem is that the syslog daemon only timestamps with a resolution of one second, so the file itself is in the correct order, but once several events share the same timestamp there is no way to order them correctly in Elasticsearch.

Is there any way to send some kind of tiebreaker with Filebeat to ensure order, even if it is just the line number?

.thro

This is a good question. Maybe a workaround would be to use the date processor on an ingest node to create a new date based on the extracted date (which only has second resolution), and use the offset, or part of the offset, as the nanosecond part?
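Roughly something like this, as an untested sketch. Instead of folding the offset into the date itself (a `date` field only goes down to milliseconds unless you are on `date_nanos`), this variant keeps the byte offset Filebeat already attaches to each event (`log.offset` in 7.x, plain `offset` in older versions) and uses it as a secondary sort key. The `syslog_timestamp` field name is a placeholder for whatever your grok/dissect step extracts, and I am not sure off-hand how the date processor handles the missing year, so that part may need extra care:

```
PUT _ingest/pipeline/syslog-order
{
  "description": "Parse the second-resolution syslog timestamp; the Filebeat offset stays available as a tiebreaker",
  "processors": [
    {
      "date": {
        "field": "syslog_timestamp",
        "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
      }
    }
  ]
}

GET filebeat-*/_search
{
  "sort": [
    { "@timestamp": { "order": "asc" } },
    { "log.offset": { "order": "asc" } }
  ]
}
```

The offset only orders lines within a single file, but for one syslog file per host that is exactly the tiebreaker you are after.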

Yes, I had that idea. But I read somewhere that you have to make sure Filebeat runs with only one thread to guarantee that events are sent in the right order, and I can't guarantee that nobody will change the clients, or any other relevant settings, somewhere down the line.

If Filebeat could parse a custom key (a date with one-second resolution and no year, in my case) and add a tiebreaker number reflecting the order of appearance, then I'd say we had a lossless solution on our hands. At least content-wise.

Which raises the question: is it possible to write a custom parser for a line before it is sent?
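If Filebeat's JavaScript script processor (which I believe newer versions ship) can do it, something like this sketch is what I have in mind, pulling the second-resolution prefix into its own field before the event is sent; the byte offset Filebeat adds on its own would then be the order-of-appearance tiebreaker. Paths and field names are placeholders:

```
# filebeat.yml -- sketch, untested
filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog

processors:
  - script:
      lang: javascript
      source: |
        function process(event) {
          var msg = event.Get("message");
          if (msg == null) {
            return;
          }
          // classic syslog prefix: second resolution, no year, e.g. "Mar  7 10:15:02"
          var m = msg.match(/^([A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})\s/);
          if (m) {
            event.Put("syslog_timestamp", m[1]);
          }
        }
```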

Concerning the single thread: even with that, the network or multiple workers could affect the ordering of events. I wonder if using Logstash directly might be the solution here.
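For reference, the knobs people usually mean by "one thread" are the output worker settings, along the lines of the sketch below; but even with a single worker, retries and the network can still reorder individual events, which is why I would not rely on arrival order alone:

```
# filebeat.yml -- sketch, host is a placeholder
output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 1          # a single publishing worker per host
  bulk_max_size: 50  # smaller batches, still no strict ordering guarantee
```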

Well, Logstash is handling the timestamp parsing correctly for now, but I think the safest choice is to do it on the device itself.
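For context, what we have is roughly the standard syslog example from the Logstash docs, and like ours it has no year anywhere in the pattern:

```
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  date {
    # second resolution only, and no year in the source timestamp
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```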

Does Logstash have any notion of past events?

I gave it a bit more thought and searched for what others have done, and I've seen this answer from a colleague; it might be the solution here.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.