Best practice for a logging cluster

I'm setting up a new logging cluster consisting of 3x dedicated master nodes, 6x dedicated data nodes and 1-2x client nodes to handle Kibana requests.
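For context, those node roles map to a couple of flags in each node's config. A minimal elasticsearch.yml sketch, assuming Elasticsearch 2.x-style role settings (one block per node type):

```
# Dedicated master node (3x): master-eligible, holds no data
node.master: true
node.data: false

# Dedicated data node (6x): holds data, never becomes master
node.master: false
node.data: true

# Client node (1-2x): neither master nor data, just routes requests
node.master: false
node.data: false
```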

Now I'm wondering what the best practice is for ingesting data from Logstash. Until now I have had a client instance of Elasticsearch on each Logstash indexing node, and that has been going OK, but not great when the cluster is under load.

The methods I have come up with are:

  • Continue with one Elasticsearch client node instance per indexing machine
  • Have dedicated client nodes that Logstash sends to (see the output sketch after this list)
  • Share the client node that will be used for Kibana with Logstash (totally different functions, so it might affect memory usage)
  • Have Logstash send to all 6x data nodes directly, no client nodes for Logstash
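On the Logstash side, the difference between these options is mostly just what goes in the hosts list of the elasticsearch output. A rough sketch, assuming the Logstash 2.x elasticsearch output plugin; hostnames are made up:

```
output {
  elasticsearch {
    # Option 2: send through dedicated client nodes (the plugin
    # round-robins requests across the listed hosts)
    hosts => ["logstash-client-1:9200", "logstash-client-2:9200"]

    # Option 4: skip client nodes and list all 6x data nodes instead, e.g.
    # hosts => ["data-1:9200", "data-2:9200", "data-3:9200",
    #           "data-4:9200", "data-5:9200", "data-6:9200"]

    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```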

In this case there is no real best practice; it's whatever works for you.

Trial and error here I come :slight_smile:

Going to start with sharing the client nodes with Kibana, since that's the easiest to set up and the lowest risk.
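For what it's worth, "sharing" here just means pointing Kibana and the Logstash output at the same client-node hosts. A minimal kibana.yml sketch, assuming a Kibana 4.x-style config (older 4.x releases call the setting elasticsearch_url); the hostname is made up:

```
# kibana.yml: Kibana queries go through the shared client node
elasticsearch.url: "http://logstash-client-1:9200"
```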

Yep, that's definitely the best place to start!