I have an Elasticsearch cluster running on Kubernetes. Currently, Elasticsearch writes its index data to the local filesystem (path: /usr/share/elasticsearch/data), as shown below.
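The original manifest was not included in the question, so here is an illustrative sketch of what such a setup typically looks like: a StatefulSet mounting a persistent volume at the default Elasticsearch data path. All names, the image tag, and the storage size are assumptions, not the actual configuration.

```yaml
# Illustrative only -- the actual manifest is not shown in the question.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch          # assumed name
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2  # assumed version
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data  # default path.data location
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi    # assumed size
```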
Since Elasticsearch is producing a very large amount of data, I want to write this data directly to the Hadoop filesystem (HDFS) rather than to the local filesystem.
Is there a simple way to do this?
Note: I am not asking about Snapshot/Restore, which is described here: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/snapshot-restore.html