Usage of GPFS file system on 10 node cluster

We are setting up a 10 node ES cluster for one of our clients and will be loading/querying ES data in the terabytes. The current system was configured with a GPFS file system. I have some concerns with this related to network traffic. Does anyone have any experience supporting very large clusters running on GPFS or shared file systems? Is this a good idea? We do not have local storage available. Are there any other options? NFS, etc.? Sorry, I'm not a storage guy, but I would appreciate any help!


GPFS - Wikipedia?

If so, I wouldn't; running a distributed system on top of a distributed FS is asking for slowness.

I'm not sure we're going to have a choice about the file system, although I'd like to be able to tell my customer why it's not a good one.

If we go this route, are there any configurations or anything else that would help with performance?

You have to wait for a query to go from node 1 to node N in your cluster. Node N then needs to go to location A on the clustered filesystem to collect the data before it can return results for whatever work you need to do.

Or if you lose a node, ES will try to reallocate shards, which will also impact the FS as it a) realises it has lost some part of the overall store, b) deals with ES reallocating (i.e. lots of IO), and c) rebalances itself.
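One knob that can soften that reallocation storm is delaying shard allocation when a node drops out, so a brief node outage doesn't trigger a full re-replication over the shared FS. A rough sketch (the index name is made up, and check that your ES version supports this setting):

```json
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
```

That tells ES to wait five minutes before reallocating shards from a departed node, instead of starting recovery IO immediately.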

You may want to consider or test shadow replica indices if you run into problems with your storage. It is worth noting, however, that this feature is marked as experimental.