I have Graylog and Elasticsearch deployed in Kubernetes: a three-node Graylog cluster and three Elasticsearch nodes (one master and two master-eligible). Each worker node has 64 cores, 128 GB RAM, and 20 TB of disk. We ingest about 2 TB of logs per day, coming from the Kubernetes cluster and other applications, each with different message sizes. Right now each node outputs only about 7,000 messages per second, so in total around 20-25k messages per second get processed, but the input rate is 20k to 100k per second. Most messages pile up in the journal (millions of unprocessed messages), and I'm unable to view logs from the dashboard most of the time.
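Some back-of-envelope numbers for the ingest load (my own arithmetic from the 2 TB/day figure; the 50k msg/s average is an assumed midpoint of the 20k-100k range, not a measured value):

```python
# Rough sizing of the ingest load, assuming 2 TB/day as stated above.
BYTES_PER_DAY = 2e12          # 2 TB/day
SECONDS_PER_DAY = 86_400

# Sustained write throughput the cluster must absorb, in MB/s.
avg_throughput_mb_s = BYTES_PER_DAY / SECONDS_PER_DAY / 1e6

# Assumed average input rate (midpoint of the observed 20k-100k msg/s range).
ASSUMED_AVG_MSG_RATE = 50_000

# Implied average message size in bytes under that assumption.
avg_msg_bytes = BYTES_PER_DAY / (ASSUMED_AVG_MSG_RATE * SECONDS_PER_DAY)

print(f"sustained throughput: {avg_throughput_mb_s:.1f} MB/s")
print(f"implied avg message size: {avg_msg_bytes:.0f} bytes")
```

So the sustained byte rate is only on the order of 23 MB/s, which is why I don't think raw disk or network bandwidth is the bottleneck on this hardware.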
CPU usage on both Graylog and Elasticsearch is low, so I'm unable to figure out where the bottleneck is.
Graylog 5.0, 32 GB heap
Elasticsearch 7.10, 32 GB heap
MongoDB 5.0
The Graylog leader and the Elasticsearch master are on dedicated worker nodes; the remaining nodes are on shared worker nodes.
Please help. I want to use all my CPU resources and increase processing to 100k messages per second. Should I add more data nodes even though the current servers are underutilized? What am I missing here?
My main question: why are the servers underutilized when resources are clearly available? Is it because I'm not setting the heap above 32 GB? And can I run multiple instances on the same node for better utilization, with dedicated master nodes and dedicated data nodes?
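For reference, these are the throughput-related knobs in Graylog's server.conf that I understand control processing parallelism. The values below are the shipped defaults as I understand them, not my actual settings; on 64-core nodes I suspect the buffer processor counts need to be raised well above these:

```
# Processor thread counts per buffer stage (Graylog defaults shown;
# these are what I believe cap per-node processing parallelism).
inputbuffer_processors = 2
processbuffer_processors = 5
outputbuffer_processors = 3

# Batching of writes to Elasticsearch: messages per bulk request
# and flush interval in seconds.
output_batch_size = 500
output_flush_interval = 1
```

If the defaults are in effect, only a handful of the 64 cores would ever be used for message processing, which would match the low CPU usage I'm seeing.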