ELK without Logstash

Hi,

My team and I are currently working on a project to let our client view some dashboards in Kibana, and we would like some feedback to validate the architecture we have defined.

We would like to know at which level the categorization of events should be performed. We already have an Rsyslog server, and we don't know whether it is mandatory to put Logstash between our centralization layer and the Elasticsearch storage.

Thanks in advance for your response.

Hello Yanis,

Although we are not using Rsyslog, we are sending data from syslog to Elastic. For this, we have configured Logstash to work as a syslog server and configured syslog to send to this Logstash server. In Logstash, the data is parsed using grok patterns and then sent to Elasticsearch.
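For illustration, a minimal Logstash pipeline along those lines might look like this (the port, pattern, and hosts are assumptions, adjust them to your setup):

```
# Hypothetical pipeline: receive syslog, parse with grok, index into Elasticsearch.
input {
  syslog {
    port => 5514    # assumed unprivileged port that the syslog senders forward to
  }
}

filter {
  grok {
    # The syslog input already parses the RFC3164 header; this illustrative
    # pattern only captures the remaining message body.
    match => { "message" => "%{GREEDYDATA:event_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # adjust to your cluster
    index => "syslog-%{+YYYY.MM.dd}"     # daily index, as an example
  }
}
```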
An alternative would be to configure syslog to write to a file, configure Filebeat to read this file and send the data directly to Elasticsearch. Elasticsearch could then parse the data using an ingest pipeline.
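A minimal filebeat.yml sketch for that alternative could look like this (the file path and pipeline name are hypothetical):

```yaml
# Hypothetical Filebeat config: tail the syslog-written file and ship the lines
# straight to Elasticsearch, where an ingest pipeline does the parsing.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog-central.log   # assumed file written by syslog

output.elasticsearch:
  hosts: ["http://localhost:9200"]    # adjust to your cluster
  pipeline: "syslog-parse"            # assumed name of the parsing ingest pipeline
```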

So, depending on the use case, you do not need the Logstash server.

Best regards
Wolfram

Hello Wolfram,

Thanks a lot for your response, it's pretty clear.

I understand that Filebeat could be an alternative to Logstash, and that writing the events to a file and reading them back would do the job of Logstash's grok.

Could you please give me more details about this file? Are you talking about the log file?

In our case, we have implemented an Rsyslog server in order to stay independent of the analysis solutions we use downstream (ELK functionality, or maybe a SIEM solution) and to make sure that log collection from the devices is not impacted by the configuration of these technologies.

Thank you,
Yanis

I would say use Logstash; it will give you more flexibility to digest different logs, and you can use more pattern matching.

Hello Yanis,

Yes, I mean a log file. Some applications already write log files while also writing to syslog, so you may use that log file directly; otherwise Rsyslog would have to write all the incoming syslog messages to a new log file.
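For example, a single hypothetical Rsyslog rule could append every received message to one file for Filebeat to tail (the path is an assumption):

```
# Hypothetical rsyslog rule: write all incoming messages to a single file.
*.* action(type="omfile" file="/var/log/syslog-central.log")
```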
This file would then be read by Filebeat, which sends the data to Elasticsearch.
In Elasticsearch, ingest pipelines offer features similar to Logstash, such as grok, csv, and so on.
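As a sketch, such a pipeline could be created from the Kibana Dev Tools console like this (the pipeline name matches the hypothetical Filebeat example above, and the grok pattern is only an example):

```
PUT _ingest/pipeline/syslog-parse
{
  "description": "Hypothetical pipeline: grok the raw syslog line",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{GREEDYDATA:event_message}"
        ]
      }
    }
  ]
}
```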

Regarding the discussion of Logstash vs. ingest pipelines:
I agree that Logstash seems to have more features than an ingest pipeline (we do not use ingest pipelines at all), but choosing it is a fundamental decision:
If you use Logstash as a syslog server which receives the data directly from syslog and sends it to Elasticsearch, what happens if Logstash is down? Most syslog senders do not support retries (the Log4j syslog sender, for example).
If you use Filebeat to read log files and send them to Logstash for parsing before they reach Elasticsearch, you introduce one more server application which must be hosted and supported, and which is one more possible point of failure.

If you already use Logstash? Fine, then you can keep using it.
If you do not already have Logstash and ingest pipelines offer everything you need? Why introduce more complexity?

Best regards
Wolfram

Hello Wolfram,

Thanks, that gives an interesting overview of the logic. However, after some research on the subject, it seems easier to configure groks in Logstash than the .yaml files of the Filebeat agent.

I am aware that it brings more complexity into our infrastructure, but it seems that we would have more open-source material to help with the configuration, and more flexibility, thanks to Logstash.

During the project we are going to deal with several kinds of events: network switches, database servers, firewalls, etc. I am not sure that Logstash adds much complexity in that typical case?

With kind regards,
Yanis ALBIK

Logstash will definitely give you more options. I have different logs and it works fine. Nowadays there are many open-source tools that we should try and test.
