I have an elasticsearch + kibana installation and soon I'll start handling lots of data. The server I'm using has 8GB of RAM, so after a while I will have used up all the server's RAM. When this happens I want to be able to export the data into either JSON or CSV format and then flush the index automatically, without any data loss if possible.
Is there a feature that does this, or will I have to write it myself using something like elasticsearch-dump? What would be the best way to solve this?
Elasticsearch persists data to disk and uses RAM as working memory for the process. This means the amount of data a cluster can hold is limited by the disks attached to the cluster (whose locations are configured in the elasticsearch.yml configuration file using the path.data setting), not by the size of RAM. Where RAM may become a limiting factor is in handling the amount of operational load placed on the cluster.
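For example, a minimal elasticsearch.yml entry might look like this (the path shown is illustrative; point it at whatever disk or volume you want the indices stored on):

    # elasticsearch.yml
    # Directory where Elasticsearch stores its index data (example path, adjust to your setup)
    path.data: /var/lib/elasticsearch

Note that path.data is a static setting read at startup, so the node needs a restart after changing it.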