Syslog input plugin from Logstash: how to configure it in Elastic Agent?

I have about 2000 Elastic agents (version 8.9.0) connected to a system with 3 Fleet servers (version 8.9.0).

We have about 20 different agent policies, because the various Elastic agents send
slightly different logs, and in certain cases we need specific pipelines to process them.

Is it possible to configure the syslog input plugin
using an Elastic Agent policy?

Or do I need to configure the syslog input plugin in a different way?

I have only configured Elastic Agent, and have never configured Logstash directly.

Hi Craig,

Logstash input plugins can't be configured in an Elastic Agent policy.
You need to configure them in the usual ways: by editing Logstash's pipeline files (.conf), or, if you have X-Pack, by using Centralized Pipeline Management in Kibana.
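For reference, a minimal Logstash pipeline along those lines might look like the sketch below (the file path, port, and output host are placeholders, not settings from this thread):

```conf
# Hypothetical pipeline file, e.g. /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    port => 5514   # non-privileged port; 514 would require elevated privileges
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
  }
}
```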

I noticed this page was recently added:

Does this mean that:

  - syslog:
      field: message

can be added to the processor section
of an Elastic Agent policy to send data via syslog?

Yes, it looks like a new processor that can be added to an agent policy.
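For context, a sketch of how that snippet might sit in the processors section of a standalone agent/Beats configuration (the surrounding keys are assumptions; in Fleet-managed setups the processor is typically added through an integration's settings instead):

```yaml
# Hypothetical standalone Elastic Agent / Beats processors section
processors:
  - syslog:
      field: message   # field holding the raw syslog line to parse
```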

It looks like this was added to Beats last year in

by @taylor-swanson .

What advantage does using this processor in Elastic Agent/Beats offer?

I am investigating whether it is possible to send syslog data from a host directly to Elasticsearch without running Elastic Agent/Beats on the end host.

In earlier versions of Elastic Agent (8.4.2), under very high load, the agent would become unresponsive and leave zombie processes. Maybe this has improved in Elastic Agent 8.9.1?

For my use case, I don't see a lot of value in the syslog processor, because I would still need to run Elastic Agent on the end host.

Am I understanding this correctly, or is there something I am not seeing?


Hi Craig,

The syslog processor detaches the syslog parsing functionality from whatever Filebeat input is being used. Prior to this, only the syslog input was available, which meant you were forced to use TCP, UDP, or a Unix socket as the means of consuming syslog messages. Now that the syslog processor is available, any input can be used; it is only a matter of passing the syslog message data to the processor.

This does mean that you must still use either Elastic Agent or Filebeat if you want to use this particular syslog processor.
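As a concrete illustration of "any input can be used", a Filebeat fragment pairing a plain TCP input with the syslog processor might look like this (the host/port values are placeholders):

```yaml
# Hypothetical filebeat.yml fragment: a generic input feeding the processor
filebeat.inputs:
  - type: tcp
    host: "0.0.0.0:5514"       # listen for raw syslog lines over TCP
    processors:
      - syslog:
          field: message       # parse the received line as syslog
```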


Does the syslog processor accept syslog input over TCP, UDP, or a Unix domain socket?
Would this be done through Elastic Agent?

I would like to send messages from syslogd and then use this processor to ship them to Elastic. It's not clear to me from this processor's documentation where the input comes from.

The documentation has this information:

Elastic Agent processors are lightweight processing components that you can use to parse, filter, transform, and enrich data at the source.

And also this, which tells you where the processor is configured:

The processor does not exist alone; it is part of some integration and is executed on the Elastic Agent that is running that integration.

For example, if you want to receive syslog data over TCP, you will need to create a Custom TCP Logs integration and configure the syslog processor in that integration.

The syslog processor itself doesn't handle external input. A Filebeat input sits in front of it in the chain, reads syslog messages from some source (TCP, UDP, file, etc.), and passes them to the syslog processor. As Leandro mentioned, in the context of Elastic Agent, the input/processor chain will be part of an existing integration such as Custom TCP Logs or Custom UDP Logs; syslog parsing just needs to be enabled when configuring the integration.
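Putting the two replies together: in a Custom TCP Logs integration, the listen host/port is set in the integration's own fields, and a processor definition can then be supplied as YAML in the integration's processors setting, roughly like this sketch (anything beyond `field` is an assumption, not confirmed in this thread):

```yaml
# Hypothetical YAML for the integration's processors setting
- syslog:
    field: message   # parse the raw payload received by the TCP input
```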

Leandro and Taylor,

Thanks for your replies. The information you have provided is accurate and very useful!

For the syslog processor, would it be OK to add some explanation in:

with what you have mentioned, and cross-reference things like the Filebeat input and the Custom TCP or Custom UDP Logs integrations?

I can even take a whack at submitting a docs PR to that page with what I think clarifies the concepts for me, and that PR can be evaluated/refined with feedback.

We can certainly add more context to the documentation. I'll write up an issue to track that work (we are also going to remove the "experimental" flag from the processor as part of that change). You can track the work being done here: Remove experimental tag from syslog processor and improve documentation · Issue #36416 · elastic/beats · GitHub

I'll try to get a PR up for that soon (link for the PR will appear in the issue linked above).

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.