I understand that NFS storage should be avoided. However, my task was to write fresh data to a HOT SSD tier, potentially spilling over to a WARM spinning-disk tier. The NFS storage will be used as the COLD tier, with very minimal writes.
Does anyone have such a setup, and did you face any problems?
My initial test was to stress test the NFS cold tier with writes, and I noticed "nfs: server [...] not responding, still trying" errors; nodes started to leave and rejoin the cluster repeatedly. I had to reboot the nodes to restore cluster stability. Is it advisable to increase the following fault detection timeouts to prevent nodes from leaving, e.g. to 60s, in line with the NFS mount option `timeo=600` (600 deciseconds = 60 seconds)?

`cluster.fault_detection.follower_check.timeout`
`cluster.fault_detection.leader_check.timeout`
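For reference, the change I'm considering would look like this in `elasticsearch.yml` (the 60s value is my own assumption chosen to match the NFS mount timeout; the default for both settings is 10s):

```yaml
# elasticsearch.yml -- proposed overrides, NOT yet applied.
# 60s is a guess aligned with the NFS mount option timeo=600 (60 seconds);
# the Elasticsearch default for both checks is 10s.
cluster.fault_detection.follower_check.timeout: 60s
cluster.fault_detection.leader_check.timeout: 60s
```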
If you plan on using NFS for cold tier where there should be no writes, why stress test it with a write heavy workload? I would recommend testing it with respect to search performance only as that is more realistic.
I would recommend not changing the settings you mentioned.
My worry is that the incoming data volume can be uncertain (backlogs can happen).
Also, I believe that data movement during node failure, and the data transition from WARM to COLD, count as writes to the cold tier as well.
I want to know the maximum write throughput I can run smoothly, and whether there is any configuration that can push that limit.
If you use rollover, e.g. through data streams, instead of indices with the date in the name, data will always go to the latest underlying index, which should be on the hot nodes. Indexing is much more I/O intensive than relocating shards, as relocation primarily copies reasonably large files, resulting in large sequential writes without much of the fsyncing that indexing causes.
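As a sketch of what that could look like (policy name, index pattern, ages, and sizes below are illustrative, not from this thread): an ILM policy that rolls over on the hot tier and lets the default migrate action move older data to warm and cold nodes, plus an index template that backs a data stream with it:

```json
PUT _ilm/policy/my-tiered-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "warm": { "min_age": "2d", "actions": {} },
      "cold": { "min_age": "30d", "actions": {} }
    }
  }
}

PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "data_stream": {},
  "template": {
    "settings": { "index.lifecycle.name": "my-tiered-policy" }
  }
}
```

With this, all indexing lands on the hot (SSD) nodes, and the NFS-backed cold nodes only receive the large sequential writes of shard relocation, never live indexing.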