What is the best way to implement the ELK stack for a log monitoring system on Windows? I have 2 servers with Windows Server 2019 installed.
Is there any way you can increase this to 3 nodes? You will likely run into problems with master nodes and quorum with 2 nodes.
Not knowing anything about your intended use, I would recommend the following basic rules to get you started:
*Make sure your data is stored on a separate disk from the operating system to reduce latency
*Try to make sure you have replica shards for all data in your index definitions to allow for hardware failure
*If you are unable to use a load balancer for the 2 nodes, split the client and master roles (point clients via DNS at node-1, establish node-2 as master)
*Make sure you have a good backup routine in place
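For the replica-shard point, one way to apply it cluster-wide is an index template (a sketch using the legacy template API available in 7.x; the template name and index pattern are placeholders):

```
PUT _template/logs_defaults
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_replicas": 1
  }
}
```

Any new index matching the pattern then gets one replica of each primary shard, so a copy survives the loss of either node.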
Sorry, I know a lot of this is just good general advice for all servers. But if you can be more specific about what you want to accomplish with Elastic (throughput, expected data volumes, features you will/won't use, what type of data you want to ingest, etc.), I can try to give more specific advice.
Welcome to elasticsearch @nityaraj06!
It's ideal to run with more than 2 nodes, but it is possible to run a healthy cluster with 2 nodes. You'll need to make 1 of the nodes master-eligible and leave the other node not master-eligible. This way you avoid "split brain". "Split brain" is a situation where some unplanned failure event happens and both nodes attempt to become master at the same time. Neither node can agree on or elect a master node because it's a 1-vs-1 vote, and a 3rd vote is needed to decide one over the other. Since there is no 3rd node to vote, that decision is never made.
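The voting arithmetic behind this can be sketched in a few lines (an illustration of majority-based quorum, not Elasticsearch's actual election code):

```python
def quorum(master_eligible: int) -> int:
    """Strict majority of master-eligible nodes needed to elect a master."""
    return master_eligible // 2 + 1

# With 2 master-eligible nodes, quorum is 2: lose either node and no
# majority exists, so no master can be elected.
print(quorum(2))  # 2
# With 3 master-eligible nodes, quorum is still 2, so the cluster
# tolerates the loss of any one node.
print(quorum(3))  # 2
```

This is why 3 is the usual minimum recommendation: it is the smallest count where a majority can still form after a single failure.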
Here is the documentation for configuring a master-eligible node: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#master-node
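On 7.x this comes down to the node.master / node.data flags in elasticsearch.yml. A minimal sketch (the role split shown here is one possible layout, not the only one):

```yaml
# elasticsearch.yml on the master-eligible node
node.master: true
node.data: true    # it can still hold data

# elasticsearch.yml on the second node
node.master: false
node.data: true
```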
Next, install it and give it a try!
Thank you for your replies.
I am planning to configure 2 nodes in the following way (configs from elasticsearch.yml):
Node 1:
- installed Elasticsearch
- discovery.seed_hosts: ["", ""]
Node 2:
- installed Elasticsearch
- discovery.seed_hosts: ["", ""]
I will install Kibana on the master node.
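Spelled out, each node's elasticsearch.yml might look something like this (cluster name, node names, and hostnames below are placeholders; your actual addresses go in discovery.seed_hosts):

```yaml
# node-1 (the master-eligible node)
cluster.name: log-monitoring          # placeholder name
node.name: node-1
node.master: true
node.data: true
discovery.seed_hosts: ["node-1.example.local", "node-2.example.local"]
cluster.initial_master_nodes: ["node-1"]

# node-2 (data only)
cluster.name: log-monitoring
node.name: node-2
node.master: false
node.data: true
discovery.seed_hosts: ["node-1.example.local", "node-2.example.local"]
```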
Kindly let me know how this sounds to you.
Why do you want a master only node?
That does not sound great, as any of the nodes going down will cause problems for your cluster.
I set one node to master and the other to data; isn't it necessary to have at least one master node? Could you please suggest a basic setup with 2 nodes?
The log files I plan to analyse are mostly under 1 GB per month, ~12 GB per year.
I am using ELK 7.6 and setting this up on Windows Server 2019.
If you want to be production ready, you need 3 nodes.
If you don't care about the whole cluster freezing when one node is down, you might use 2 nodes, but I'd not recommend that.
If you don't care at all about your data, one node is enough, like on a dev platform.
If you go for 2 or 3 nodes, then leave all nodes with the default node.data and node.master settings. They will all hold your data and they will all be master eligible.
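With those defaults left alone, the only cluster-forming settings you typically need per node on 7.x are the seed hosts and the initial master list. A sketch with placeholder names:

```yaml
# identical on all 3 nodes, apart from node.name
cluster.name: log-monitoring          # placeholder name
node.name: node-1                     # node-2 / node-3 on the others
discovery.seed_hosts: ["node-1.example.local", "node-2.example.local", "node-3.example.local"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```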
Thank you @dadoonet - I am fine with having 2 nodes for now since this is not a production setup.
I am keeping both nodes as master eligible to handle the master node going down.
Please suggest if the below design sounds good:
Kibana on node 2
This isn't true @mattsdevop: Elasticsearch will elect a master from 2 nodes just fine. The only drawback with having 2 master-eligible nodes is that it's not resilient and requires both nodes to be available at all times.
ip   heap.percent  ram.percent  cpu  node.role  master  name
ip1  13            19           0    dilm       -       node1
ip2  25            13           0    dilm       *       node2
Also, could you please let me know how I can test failover conditions with a 2-node cluster?
The m in dilm under node.role indicates that both nodes are master-eligible. The * indicates which one is the currently elected master, of which there will only ever be at most one in a cluster.
With a 2-node cluster you should be able to continue reading data (assuming indices have 1 replica shard configured) if one of the nodes fails. With only one node available the cluster will not be able to elect a new master node, which means that writes will fail. In order to have a fully operational cluster in case 1 node fails, you need at least 3 master-eligible nodes in the cluster.
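You can observe this yourself by checking the cluster before and after stopping the Elasticsearch service on one node (standard cluster APIs, run against whichever node is still up):

```
GET _cluster/health
GET _cat/nodes?v
GET _cat/shards?v
```

With both nodes up, expect green/yellow status and two entries from _cat/nodes. After stopping the elected master with only one master-eligible node left, expect the health call on the survivor to report no master (or fail entirely) and index writes to be rejected, while searches against replica shards may still answer.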
As Christian says, a 2 node cluster does not support failover, so there are no failover conditions to test.
Yes @DavidTurner. I poorly explained that. Thank you for pointing that out!
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.