Syslog-ng + Logstash + Splunk

Hi,

I am writing a log collector using "syslog-ng + Logstash + Splunk". Along with these I want to use a message broker or queue, so my system would become "syslog-ng + Redis or RabbitMQ + Logstash + Splunk". I have gone through web searches and the Logstash documentation to decide which one to use with Logstash, Redis or RabbitMQ, but I am still unsure.

My requirement is as below:
Step 1: syslog-ng is installed on multiple servers (1 to n-1).
Step 2: Logstash and the Splunk forwarder are installed on another server.
Step 3: Logstash reads the log messages sent by syslog-ng.
Step 4: The Splunk forwarder forwards the logs to the Splunk server.

Note: Between step 1 and step 2 I want to use a message broker or queue, and I am confused about whether to use Redis or RabbitMQ.
Q) Which one (Redis or RabbitMQ) works best with Logstash?
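
For reference, the syslog-ng side of this is expected to forward roughly as in the sketch below; the host name, port, and source names are only placeholders.

```
# syslog-ng forwarding sketch (host, port and names are placeholders)
source s_local {
    system();      # local system logs
    internal();    # syslog-ng's own messages
};

destination d_logstash {
    # send over UDP to the server running Logstash
    network("logstash-host" transport("udp") port(514));
};

log {
    source(s_local);
    destination(d_logstash);
};
```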

You can use either; which one you choose depends on the requirements of your solution. Given that Logstash now supports persistent queues, you may be able to avoid a message queue altogether.
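
If you decide to rely on persistent queues instead, enabling them is a couple of settings in logstash.yml; the path and size below are only examples and should be tuned for your volume:

```
# logstash.yml (illustrative values)
queue.type: persisted                  # default is the in-memory queue
path.queue: /var/lib/logstash/queue    # directory for the queue's data files
queue.max_bytes: 4gb                   # cap on disk space the queue may use
```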

Thanks for your help.

Thanks for the reply.

What I understand is: "I can use Redis, because Logstash now supports persistent queues."

So now I am planning to use Redis with Logstash instead of RabbitMQ.

You may not need a separate message queue at all, since Logstash can now buffer events internally, and buffering is often the only reason Redis is introduced.

If you still want to use a message queue, the choice depends on what you expect it to add to the architecture. Redis is very fast but is primarily an in-memory queue, which means messages can be lost on failure. RabbitMQ is slower but can, as far as I recall, persist data to disk and provide stronger delivery guarantees. Be sure to investigate the differences and configuration options before making a decision.
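
If you do go the Redis route, the Logstash side would just use the redis input plugin to pop events from a list; a minimal sketch, where the host and key name are examples:

```
# Logstash pipeline sketch: read events pushed onto a Redis list
input {
  redis {
    host      => "redis.example.com"   # example broker host
    data_type => "list"                # consume from a Redis list
    key       => "syslog"              # example list name the shippers write to
  }
}
```

The shipping side would then need to push onto that same list.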

Thanks for the help.

Newer syslog-ng versions (3.8+) can also buffer messages.
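
For example, from 3.8 on a disk-buffer() can be attached to the destination that sends to Logstash; a minimal sketch, with a purely illustrative host and sizes:

```
destination d_logstash {
    network("logstash-host" transport("tcp") port(514)
        disk-buffer(
            mem-buf-size(10485760)     # ~10 MiB held in memory
            disk-buf-size(1073741824)  # up to ~1 GiB spooled to disk
            reliable(yes)              # write to disk so messages survive a crash
        )
    );
};
```

Note that reliable(yes) trades some throughput for the stronger guarantee.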

Thanks for the help. I will check the implementation.

Hi,

I found the below limitation in the Elastic documentation (https://www.elastic.co/guide/en/logstash/current/persistent-queues.html#persistent-queues-limitations):
Limitations of Persistent Queues

The following are problems not solved by the persistent queue feature:

**Input plugins that do not use a request-response protocol cannot be protected from data loss**. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.

In our project we are using the following input configuration:

```
input {
  udp {
    # listen for syslog messages; the port comes from the UDP_PORT
    # environment variable and defaults to 514
    port => "${UDP_PORT:514}"
    type => "syslog"
  }
}
```

So in this case, can we use the persistent queue?

My understanding is that we should not use the persistent queue. If that is wrong, please correct me.
