Filebeat sends logs into cluster


(sahere rahimi) #1

Hi all,

I installed Filebeat on VM1 and two Elasticsearch nodes on VM2 and VM3, which together form a cluster. Now I want to ship logs into the cluster using Filebeat. In the "output.elasticsearch" section of filebeat.yml, should I list the addresses of both nodes or just one of them? Any comments would be appreciated. Many thanks.


(Anthony Lazam) #2

Hi Sahere,

You can use both nodes for the output in your filebeat.yml file. Since your Elasticsearch nodes are on VM2 and VM3, it will look something like this:

output.elasticsearch:
  hosts: ["http://vm2:9200","http://vm3:9200"]

Events are distributed across the listed hosts in a round-robin fashion, so if one node is down, Filebeat will automatically send to the other node.

One thing to be aware of with a two-node cluster is the master quorum: the quorum should be (n/2) + 1 master-eligible nodes, and for two nodes that works out to two. This means that if either node goes down, the cluster can no longer elect a master.
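For reference, a minimal sketch of the discovery settings for this two-node setup, assuming Elasticsearch 7.x (the cluster name and node names here are illustrative assumptions; the host names vm2/vm3 come from the setup described above):

# elasticsearch.yml on the VM2 node (assumed 7.x settings);
# on VM3, use node.name: node-2
cluster.name: my-cluster                            # hypothetical name
node.name: node-1
network.host: 0.0.0.0
discovery.seed_hosts: ["vm2", "vm3"]
cluster.initial_master_nodes: ["node-1", "node-2"]  # only for first bootstrap

On releases before 7.0, the equivalent quorum setting is discovery.zen.minimum_master_nodes, which should be set to (n/2) + 1 master-eligible nodes.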


(sahere rahimi) #3

Many thanks.
How should I set the number of primary and replica shards, and what values should they have? Should both nodes be master-eligible?
Also, do I need to set node.master, node.data, and node.ingest on these two nodes?


(Sai Krishna) #4

Hi sahere rahimi,

Though I am also new to Elasticsearch, I can tell you that the number of primary and replica shards depends entirely on how much data you have to manage. I suggest you go through this Medium post, which has great insight into cluster design.

Medium Post for cluster designing
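To make that concrete, shard counts can be set per index at creation time. A hedged sketch (the index name and the values 1 primary / 1 replica are illustrative assumptions, not recommendations; with two data nodes, one replica means each shard has a copy on the other node):

PUT my-example-index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}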


(sahere rahimi) #5

Many thanks.


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.