Hi!
I am new to the Elastic Stack. I have installed Filebeat on around 15 servers (15 different machines, some Windows and some Linux) to collect Tomcat logs and JMX data.
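For context, the Filebeat configuration on each of those servers is roughly like the sketch below; the Tomcat log paths and the Logstash host are illustrative, and the real ones differ per machine (on Windows the paths are different, of course):

```
# filebeat.yml (simplified sketch; real paths and hosts differ per server)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /opt/tomcat/logs/catalina.out   # illustrative Tomcat log location
      - /opt/tomcat/logs/*.log

output.logstash:
  hosts: ["my-logstash-host:5044"]      # the single Logstash/Elasticsearch/Kibana machine
```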
On another machine (Linux) I have installed Logstash (to create the Elasticsearch indices for those logs and the JMX data), Elasticsearch (to store and query that data), and Kibana.
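The Logstash pipeline on that machine has roughly this shape (a simplified sketch, not my exact configuration; my real filters and index naming are more involved):

```
# Simplified pipeline sketch
input {
  beats {
    port => 5044                                      # the Filebeat instances ship to this port
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"    # one index per beat and day
  }
}
```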
The hardware specifications of this Logstash/Elasticsearch/Kibana machine are as follows:
Number of processing units: 2
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Model name: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
CPU MHz: 2199.998
Cache size: 56320 KB
I have 1 GB of RAM assigned to Elasticsearch.
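As far as I know, that is set via the heap options in Elasticsearch's config/jvm.options; mine should look roughly like this (although I may have this wrong, given the heap size Kibana reports below):

```
# config/jvm.options (heap-related lines only)
-Xms1g
-Xmx1g
```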
This is what Kibana monitoring shows for Elasticsearch:
Overview
Version: 6.3.0
Uptime: 4 hours
Nodes: 1
Disk Available: 79.25% (118.8 GB / 149.9 GB)
JVM Heap: 66.47% (1.3 GB / 2.0 GB)
Indices: 247
Documents: 64,935,866
Disk Usage: 27.2 GB
Primary Shards: 1,067
Replica Shards: 0
This is what Kibana monitoring shows for Logstash:
Overview
Events Received: 4.9m
Events Emitted: 4.9m
Nodes: 1
Uptime: 3 days
JVM Heap: 54.57% (263.8 MB / 483.4 MB)
Pipelines: 1
With Memory Queues: 1
With Persistent Queues: 0
This is what Kibana monitoring shows for Beats:
Beats: 23
Filebeat: 23
When I run the "top" command on the Logstash/Elasticsearch/Kibana machine, I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
XXXXXX beat 20 0 30.8G 1.9g 16372 S 98.2 50.5 197:25.77 java
If I stop the Elasticsearch service, that process stops, so it is Elasticsearch that is consuming so much virtual memory and CPU. %CPU varies between 20% and 170%, and it is below 100% most of the time.
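If it matters, the process can also be tied back to the service without stopping it, for example (the PID is the one masked in the top output above, and I am assuming the standard systemd service name here):

```
ps -fp <PID>                      # full java command line shows it is Elasticsearch, plus its heap flags
systemctl status elasticsearch    # shows the main PID of the Elasticsearch service
```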
Now the question: do I need a more powerful machine, just more memory, or just a change in my configuration files?
Thanks a lot in advance!!