How does logstash route the data to a NEW primary ES node, not the old one?


(Jacob Amith) #1

I'm trying to wrap my head around something. I have 3 ES nodes, with just 1 shard for simplicity. The first node is just the master node. The second node holds the primary shard and the third one holds the replica shard.

Master Node: 10.42.0.100:9200

Data node1 (Primary): 10.42.0.101:9200

Data node2 (Replica): 10.42.0.102:9200

This is my config from logstash, where I write the data:

output {
        elasticsearch {
                hosts  =>  ["10.42.0.101:9200"]
                index => "twitter"
                document_type => "tweet"
                template => "/etc/logstash/template/twitter_template.json"
                template_name => "twitter"
        }
} 

Everything looks good and logstash will write the data to my primary ES node. However - what if that node completely dies? How do I make it fail over and write to the replica node?

According to the elastic.co documentation, the master node keeps track of all this and will assign a new primary if something goes wrong. However, my logstash config doesn't know this, since it's hardcoded to the first node. How can I notify logstash that the primary is down and a new one has been assigned?

First, I was thinking of this kind of configuration.

output {
        elasticsearch {
                hosts  =>  ["10.42.0.101:9200", "10.42.0.102:9200"]
                index => "twitter"
                document_type => "tweet"
                template => "/etc/logstash/template/twitter_template.json"
                template_name => "twitter"
        }
} 

Writing data to both the replica and primary - but this is just wrong, right? The primary already replicates data to the second node, so it doesn't make sense to write to both.


#2

It does not write to both -- "If given an array it will load balance requests across the hosts specified in the hosts parameter."


(Jacob Amith) #3

Ok, but either way - is writing data to both the replica and primary the way to go here - in order to be fault tolerant?


(Magnus Bäck) #4

Ok, but either way - is writing data to both the replica and primary the way to go here - in order to be fault tolerant?

Again, saying "writing to both" is a misnomer since that's not what happens. But yes, list all known ES nodes in the elasticsearch output (and consider enabling the sniffing option) so that Logstash sends requests to any available node and lets the ES cluster figure out which node has the primary shard for each document that's to be stored. (For clusters sufficiently big to have master-only nodes it's a good idea to avoid those nodes.)
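For example, the advice above might look like the following config (host addresses taken from the original post; `sniffing => true` is the option the elasticsearch output plugin uses to discover the other cluster nodes from the ones listed - a sketch, not a tested production config):

```
output {
        elasticsearch {
                # List all known data nodes; requests are load balanced across them.
                hosts => ["10.42.0.101:9200", "10.42.0.102:9200"]
                # Periodically ask the cluster for its node list and
                # update the connection pool accordingly.
                sniffing => true
                index => "twitter"
                document_type => "tweet"
                template => "/etc/logstash/template/twitter_template.json"
                template_name => "twitter"
        }
}
```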


(Jacob Amith) #5

Ok, thanks. So I should list all data nodes - including the replicas? Is it possible to solely enable the sniffing option and leave the hosts parameter blank, since the sniffing adds them to the hosts list anyway?


#6

Yes. Think about what happens if 10.42.0.101 crashes. The replica gets promoted to primary and everything should keep on running. If you do not include 10.42.0.102, you will not be able to fail over.


(Magnus Bäck) #7

So I should list all data nodes - including the replicas?

Replica nodes do not exist. Shards have primaries and (possibly) replicas. The shard a particular document ends up in is entirely determined by ES and is not observable from Logstash.

Is it possible to solely enable the sniffing option and leave the hosts parameter blank, since the sniffing adds them to the hosts list anyway?

How would the sniffing code know which ES host to contact in the first place?
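In other words, sniffing needs at least one seed host to bootstrap from; the `hosts` parameter cannot be left blank. A minimal sketch:

```
output {
        elasticsearch {
                # At least one reachable seed host is required;
                # sniffing then discovers the rest of the cluster from it.
                hosts => ["10.42.0.101:9200"]
                sniffing => true
                index => "twitter"
        }
}
```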


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.