We are evaluating Elastic with the idea to use X-Pack on a self-hosted environment.
The installation of the ELK stack (Elasticsearch, Logstash and Kibana) was successful, and we have installed three log gatherers as well:
logstash-* : for local system logs on the ELK server
metricbeat-* : to receive metrics from one of the dev servers
winlogbeat-* : to receive Windows events from a Windows server.
The activity on the three servers is very limited; I would say a user is active on each server for about 30 minutes a day.
The ELK server:
Ubuntu Trusty 14.04
VPS
200 GB of SSD
6 cores, 64bit
8GB of RAM
ELK Stack 5.3
From what I have read, this should be more than enough for the activity described above, yet I keep getting two errors:
On "winlogbeat-*" > Discover (and other indices), it takes > 30sec and times out.
In /opt/elk/elasticsearch/elasticsearch.log I have the following messages non-stop:
[2017-04-16T16:21:07,794][WARN ][o.e.m.j.JvmGcMonitorService] [SFm5d9i] [gc][135708] overhead, spent [4.6s] collecting in the last [4.9s]
[2017-04-16T16:21:11,796][INFO ][o.e.m.j.JvmGcMonitorService] [SFm5d9i] [gc][135712] overhead, spent [352ms] collecting in the last [1s]
[2017-04-16T16:21:18,310][WARN ][o.e.m.j.JvmGcMonitorService] [SFm5d9i] [gc][135714] overhead, spent [4.9s] collecting in the last [5.5s]
[2017-04-16T16:21:22,311][INFO ][o.e.m.j.JvmGcMonitorService] [SFm5d9i] [gc][135718] overhead, spent [427ms] collecting in the last [1s]
[2017-04-16T16:21:28,499][WARN ][o.e.m.j.JvmGcMonitorService] [SFm5d9i] [gc][135720] overhead, spent [4.9s] collecting in the last [5.1s]
What could be the problem? Initially we were only looking for a simple remote viewer of Windows event logs, but we really liked the other features an ELK stack could offer us in the long term.
Your node is in a constant state of garbage collection. I see that you're only running with a 1g heap, which is possibly too small for your data, but can you verify something for me? Can you run the command jps -l -m -v and share the output here? I would like to see all the options the Elasticsearch JVM was started with; the htop output only shows a truncated version of the command line (also, please share text rather than screenshots). Are you running Elasticsearch, Logstash, and Kibana all on the same server?
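In case jps is not available (it ships with the JDK, not the JRE), the same information can usually be pulled from the node itself with the nodes info API. A rough sketch, assuming Elasticsearch is listening on the default localhost:9200:

# list full command lines and JVM flags of running Java processes (requires the JDK)
jps -l -m -v | grep -i elasticsearch
# or ask the node directly; the jvm section should include input_arguments with the startup flags
curl -s 'http://localhost:9200/_nodes/jvm?pretty'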
Thanks, but this is not what I'm looking for (I can already see in your htop output that the heap size is 1g). I need to see the rest of the JVM args on the running Elasticsearch process. Can you install the JDK?
I ended up re-installing everything using a self-made Ansible playbook that followed the docs. As you provide an Ansible script for Elasticsearch, I just had to add Kibana and Logstash, and that was easier than trying to understand what Bitnami packaged in their installer. I have set the JVM heap size to 4g and it works well so far.
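For reference, with a 5.x package install the heap is configured in jvm.options rather than on the command line. A minimal sketch of the change, assuming the standard /etc/elasticsearch layout (Elastic recommends keeping the minimum and maximum heap equal):

# /etc/elasticsearch/jvm.options
# initial and maximum heap size (keep them equal)
-Xms4g
-Xmx4g

After editing, restart the Elasticsearch service for the new heap settings to take effect.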