Data ingestion is about 800 million records a day of system metrics via Logstash.
Nothing else is running on these systems except Elasticsearch.
What other parameters need to be set to make sure memory gets used properly and efficiently? I can test this out once I know something more.
I am very naive on the Java part.
By default Elasticsearch configures itself according to the machine on which it's running so it's best not to override any of these settings yourself. 800M documents per day is ~10k per second which is nowhere near the throughput that some benchmarks can achieve using the default settings, so you should be fine.
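To make the "nowhere near the limit" claim concrete, here is the arithmetic behind the ~10k/second figure (a quick sanity check, not an Elasticsearch-specific calculation):

```python
# Convert a daily ingest volume into a sustained per-second rate.
records_per_day = 800_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

records_per_second = records_per_day / seconds_per_day
print(f"{records_per_second:,.0f} records/second")  # ~9,259 records/second
```

So 800M/day averages out to roughly 9.3k documents per second, well within what a modest cluster handles with default settings (assuming ingest is spread evenly; bursty traffic peaks higher).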
No — I think it does matter if you override any of these settings; getting it wrong can make performance a lot worse. So don't do that. Let Elasticsearch set these things for you.
David, you're right.
So far I have never had to touch that, and it was working beautifully.
But I have gone the container/VM route, and since then Kibana dashboard loading is slow, very slow,
even though I have gone from SSD to NVMe.
At the host/VM level I can see the IO speed is more than four times faster, the new system has a faster CPU, and network speed is also doubled.
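One way to narrow this down is to measure I/O from *inside* the guest rather than trusting the host-level numbers, since container/VM storage layers (overlay filesystems, virtual disks) can erase the NVMe advantage. Below is a rough sketch of a sequential-write probe; a dedicated tool like fio or iostat gives far more realistic results, and the `write_throughput_mb_s` helper here is just an illustrative name:

```python
import os
import tempfile
import time

def write_throughput_mb_s(size_mb: int = 256, chunk_mb: int = 4) -> float:
    """Write size_mb of data with an fsync at the end and return MB/s.

    This is only a crude probe: it measures sequential buffered writes
    plus one fsync, not the random-I/O pattern Elasticsearch generates.
    """
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the (virtual) disk
        elapsed = time.perf_counter() - start
    os.unlink(path)
    return size_mb / elapsed

print(f"sequential write: {write_throughput_mb_s():.0f} MB/s")
```

Run this inside the container/VM and on the host against the same underlying disk; a large gap between the two numbers points at the virtualization/storage layer rather than at Elasticsearch or Kibana.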