Hi everyone, there is a problem in my cluster: as the data gets bigger, the problem occurs more often. My dashboard doesn't load since the data increased from 800 GB to 1.5 TB. The cluster is fine for every transaction, but when I open this dashboard an error is raised in the Elasticsearch log, something like this:
> [2019-07-16T11:04:10,765][INFO ][o.e.m.j.JvmGcMonitorService] [DataNode-02] [gc][72362] overhead, spent [362ms] collecting in the last [1.1s]
> [2019-07-16T11:04:36,068][INFO ][o.e.m.j.JvmGcMonitorService] [DataNode-02] [gc][72387] overhead, spent [283ms] collecting in the last [1.1s]
> [2019-07-16T11:04:47,070][INFO ][o.e.m.j.JvmGcMonitorService] [DataNode-02] [gc][72398] overhead, spent [353ms] collecting in the last [1s]
> [2019-07-16T11:04:50,071][INFO ][o.e.m.j.JvmGcMonitorService] [DataNode-02] [gc][72401] overhead, spent [332ms] collecting in the last [1s]
What is the specification of your cluster in terms of version, node count, hardware and heap sizes? How many indices and shards do you have in the cluster?
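For example, a quick way to gather those numbers is with the cat APIs, assuming curl access to one of the nodes (localhost:9200 below is just a placeholder address):

```sh
# Per-node heap usage, heap limit, RAM usage, and roles
curl -s "localhost:9200/_cat/nodes?v&h=name,node.role,heap.percent,heap.max,ram.percent"

# One line per index, including primary/replica shard counts and store size
curl -s "localhost:9200/_cat/indices?v&h=index,pri,rep,store.size"
```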
About 97% on 3 servers, and the others are as usual. Another question I want to ask: is my heap size OK? The server has 64 GB of RAM and I set the heap size to just 32 GB. Can I set a bigger heap on this server?
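For context on the heap question, a minimal sketch based on the general guidance rather than anything specific to this cluster: Elastic recommends giving the heap at most 50% of RAM and keeping it below the compressed ordinary object pointers cutoff (just under 32 GB), since a heap set at exactly 32 GB usually loses that pointer compression and can effectively hold less than a slightly smaller heap. Whether a node is still using compressed oops can be checked with the nodes info API:

```sh
# Reports "using_compressed_ordinary_object_pointers" per node;
# localhost:9200 is a placeholder for your own cluster address.
curl -s "localhost:9200/_nodes/jvm?filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers&pretty"
```

The heap itself is set via matching -Xms/-Xmx lines in config/jvm.options (e.g. -Xms30g / -Xmx30g); raising it beyond half of the 64 GB is generally counterproductive, since Lucene relies on the remaining RAM for the filesystem cache.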