Disk write and eviction


(Binuraj P Sasidharan) #1

Hi Team,

Is Elasticsearch able to write data to disk?

I think the maximum heap size of Elasticsearch is 32GB. Once it reaches the maximum, we will get an OutOfMemory exception. Is there any way to evict old data from Elasticsearch?


(Mark Walkom) #2

All data is stored on disk anyway, what makes you think it's stored in memory?


(Binuraj P Sasidharan) #3

Thanks for your reply Mark.

Do you mean that while configuring ES we need to consider only disk space, or should we consider memory size as well? What is the purpose of configuring ES_HEAP_SIZE? Please find the attached image and let me know about the size marked in red.


Thanks & Regards,
Binuraj P S


(Mark Walkom) #4

Heap size relates to what the JVM can use for things like querying and aggregating.

You need to take both into account, but there is no set algorithm to provide what the ratio is as each use case is different.


(Binuraj P Sasidharan) #5

Is data always read from disk when querying from Kibana, or is it available in heap memory only?

Thanks & Regards,
Binuraj P S


(Magnus Bäck) #6

If the needed data is available from the heap it will be used. Otherwise the disk will be consulted.


(Binuraj P Sasidharan) #7

Thank you Magnus for your support.

Please let us know the best ES_HEAP_SIZE configuration. Is it also required to configure indices.fielddata.cache.size? If yes, please let us know the best configuration.

Thanks & Regards,
Binuraj P S


(Magnus Bäck) #8

The typical advice is to set the heap size to 50% of RAM but no greater than 30 GB. Ideally you shouldn't have to set indices.fielddata.cache.size, but if you don't have sufficient RAM it might be necessary. See https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html#fielddata-size.
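To make that rule of thumb concrete, here is a small sketch of how you might derive the value to put in ES_HEAP_SIZE. The machine size (64 GB) is an assumption for illustration; the 30 GB cap follows the advice above (keeping the heap under ~32 GB also keeps compressed object pointers enabled).

```shell
# Hypothetical sizing helper: heap = 50% of RAM, capped at 30 GB.
total_gb=64                       # assumed total RAM of the machine, in GB
heap_gb=$((total_gb / 2))         # 50% of RAM
[ "$heap_gb" -gt 30 ] && heap_gb=30   # never exceed the 30 GB cap

echo "ES_HEAP_SIZE=${heap_gb}g"   # export this before starting Elasticsearch
```

If you do end up needing to bound fielddata, a line such as `indices.fielddata.cache.size: 40%` in elasticsearch.yml is the usual shape of that setting (the 40% figure is an example, not a recommendation; see the linked guide).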


(Binuraj P Sasidharan) #9

Thank you for your prompt reply.

Thanks & Regards,
Binuraj P S


(system) #10