Logstash single point of failure

Hi everyone,
This is my ELK structure:
image
I'm afraid that if the Logstash host crashes, the whole system can't run. Is there a way to prevent that situation, such as adding a standby Logstash instance?

Thank you in advance!

Have you read https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html? What kind of inputs and filters do you have?

This is my Logstash config:

input {
  tcp {
    host  => "192.168.0.159"
    port  => 5510
    codec => json
    type  => "ntopng-*"
  }
}

filter {
  if [type] == "ntopng-*" {
    if "" not in [IPV4_SRC_ADDR] and "" not in [IPV6_SRC_ADDR] {
      drop {}
    }
  }
}

output {
  elasticsearch {
    codec => "json"
    hosts => ["192.168.0.159:9200", "192.168.0.58:9200", "192.168.0.154:9200"]
  }
  if [type] == "ntopng-*" {
    stdout { codec => rubydebug }
  }
}

It seems Beats can load balance across a group of Logstash nodes?

thank you :slight_smile:

It seems Beats can load balance across a group of Logstash nodes?

Yes. If your TCP clients can do that, or if you can stick two or more Logstash instances behind a load balancer, that's another simple option.
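For example, a minimal Filebeat output sketch with load balancing enabled. The hosts and port are placeholders, and this assumes the Logstash instances run a beats input on 5044 (the thread's config uses a plain tcp input on 5510 instead):

output.logstash:
  # List every Logstash node; with loadbalance enabled,
  # Filebeat distributes batches across them and skips nodes that are down.
  hosts: ["192.168.0.159:5044", "192.168.0.58:5044"]
  loadbalance: true

If one Logstash node goes down, Filebeat keeps shipping to the remaining ones, which removes the single point of failure.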

Which load balancer do you suggest? :slight_smile:
I have also looked into Kafka, but I'm having some problems sending data to Logstash.
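For reference, this is roughly the shape I'm trying. The broker address, topic, and group_id are placeholders:

input {
  kafka {
    bootstrap_servers => "192.168.0.200:9092"   # placeholder broker address
    topics            => ["ntopng"]             # placeholder topic name
    group_id          => "logstash"             # instances sharing a group_id split the partitions
    codec             => json
  }
}

My understanding is that two Logstash instances consuming with the same group_id would share the topic's partitions between them, so Kafka itself would handle the load balancing and failover.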

Which load balancer do you suggest?

I don't have sufficient experience with load balancers to give a specific recommendation.
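That said, purely as a generic illustration and not an endorsement, a TCP round-robin setup in any proxy would look something like this HAProxy fragment (IPs and port copied from the config above; a full haproxy.cfg also needs global and defaults sections):

frontend logstash_tcp
    bind *:5510
    mode tcp
    default_backend logstash_nodes

backend logstash_nodes
    mode tcp
    balance roundrobin
    # "check" health-checks each node so traffic is routed around a dead instance
    server ls1 192.168.0.159:5510 check
    server ls2 192.168.0.58:5510 check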

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.