Write Elasticsearch's data directly to HDFS rather than to the local filesystem

Hi,

I have an Elasticsearch cluster running on Kubernetes. Currently, Elasticsearch writes its index data to the local filesystem (path: /usr/share/elasticsearch/data).

Since Elasticsearch is producing a very large amount of data, I want to write this data directly to the Hadoop filesystem (HDFS) rather than to the local filesystem.

Is there a simple way to do this?

Note: I am not talking about Snapshot/Restore, which is described here: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/snapshot-restore.html
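
For reference, a minimal sketch of checking where each node currently stores its data via the nodes stats API (assumption: the cluster is reachable on localhost:9200, for example through a kubectl port-forward):

```python
import requests

# Assumption: the cluster is reachable on localhost:9200, e.g. via
# `kubectl port-forward svc/elasticsearch 9200:9200`.
ES_URL = "http://localhost:9200"

# Per-node filesystem stats report the actual data paths and the mounts
# (volumes) backing them.
resp = requests.get(f"{ES_URL}/_nodes/stats/fs")
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    for data_path in node["fs"]["data"]:
        print(node["name"], data_path["path"], "on mount", data_path["mount"])
```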

Data should always be on local filesystems/volumes so that it can be accessed as quickly and efficiently as possible.

You could use HDFS for something like snapshots, but not the data currently in use.
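
If it helps, here is a minimal sketch of registering an HDFS snapshot repository and taking a snapshot. It assumes the repository-hdfs plugin is installed on every node and the cluster is reachable on localhost:9200; the namenode URI, repository path, and names are placeholders, not values from your setup:

```python
import requests

# Assumption: cluster reachable here (e.g. via kubectl port-forward).
ES_URL = "http://localhost:9200"

# Register an HDFS snapshot repository. Requires the repository-hdfs
# plugin on every Elasticsearch node; uri/path below are placeholders.
repo_settings = {
    "type": "hdfs",
    "settings": {
        "uri": "hdfs://namenode:8020/",
        "path": "elasticsearch/repositories/my_hdfs_repository",
    },
}

resp = requests.put(f"{ES_URL}/_snapshot/my_hdfs_repository", json=repo_settings)
resp.raise_for_status()
print(resp.json())

# Take a snapshot of all indices into the new repository.
resp = requests.put(
    f"{ES_URL}/_snapshot/my_hdfs_repository/snapshot_1",
    params={"wait_for_completion": "true"},
)
resp.raise_for_status()
print(resp.json())
```

Restores go through the same _snapshot API, but the live indices themselves still need to sit on local storage.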

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.