When Elasticsearch gets full, what do I do?

Hello!

I have an Elasticsearch + Kibana installation and soon I'll start handling lots of data. The server I'm using has 8GB of RAM, so after a while I will have used up all the server's RAM. When this happens I want to be able to export the data into either JSON or CSV format and then flush the index automatically, without any data loss if possible.

Is there a feature that does this or will I have to write it myself using something like elasticsearch-dump? What would be the best way to solve this?

Thanks!

Elasticsearch persists data to disk and uses RAM as working memory for the process. This means the amount of data a cluster can hold is limited by the disks attached to the cluster (their data paths are configured via the path.data setting in elasticsearch.yml), not by the size of RAM. Where RAM may become a limiting factor is in handling the operational load placed on the cluster.
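For illustration, a minimal elasticsearch.yml sketch showing the path.data setting; the path below is a placeholder, so adjust it for your server:

```yaml
# elasticsearch.yml -- disk capacity is determined by where the data
# lives, not by RAM. Point path.data at a disk with enough free space.
# (Placeholder path; adjust for your environment.)
path.data: /var/data/elasticsearch
```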

Typically (although, it depends 🙂) you should assign half of the available RAM to the Elasticsearch heap (up to, but no more than, ~32GB), leaving the rest for the OS and the filesystem cache. For a production environment, consider using bootstrap.memory_lock to lock the process address space in RAM and disabling swapping.
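A sketch of how those settings might look on the 8GB server from the question (so roughly a 4GB heap, as an assumption based on the half-of-RAM guideline). The memory lock goes in elasticsearch.yml, while the heap size is set in jvm.options:

```yaml
# elasticsearch.yml -- lock the process address space in RAM so the
# heap is never swapped out (also requires memlock limits at the OS
# or systemd level to take effect).
bootstrap.memory_lock: true
```

```
# jvm.options -- set min and max heap to the same value, here roughly
# half of the 8GB of RAM in this example.
-Xms4g
-Xmx4g
```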
