Failing shards during indexing. Storage issue or misconfiguration? Using Kubernetes


I've set up Kubernetes on bare metal and am trying to run a three-node cluster there. In my dev environment I am currently using GlusterFS, which also runs in Kubernetes, for storage.

I am getting these kinds of errors:

    "message":"failing shard [failed shard, shard [plx_session-2019.w28][0], node[rQnFrxrFSsiXgGmshVtGGg], [R], s[STARTED], a[id=GbmKjl78SnS7IUiM-G23-Q], message [failed to perform indices:data/write/bulk[s] on replica [plx_session-2019.w28][0], node[rQnFrxrFSsiXgGmshVtGGg], [R], s[STARTED], a[id=GbmKjl78SnS7IUiM-G23-Q]], failure [RemoteTransportException[[poc-es-master-1][][indices:data/write/bulk[s][r]]]; nested: AlreadyClosedException[translog is already closed]; ], markAsStale [true]]"
    "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [poc-es-master-1][][indices:data/write/bulk[s][r]]",
    "Caused by: translog is already closed",
    "at org.elasticsearch.index.translog.Translog.ensureOpen( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.index.translog.Translog.add( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.index.engine.InternalEngine.index( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.index.shard.IndexShard.index( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.index.shard.IndexShard.applyIndexOperation( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnReplica( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.action.bulk.TransportShardBulkAction.performOpOnReplica( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnReplica( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica( ~[elasticsearch-7.1.1.jar:7.1.1]",
    "at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica( ~[elasticsearch-7.1.1.jar:7.1.1]",

I've placed the log file and the Kubernetes StatefulSet configuration I am using here.

Is this issue triggered by the underlying storage provider, GlusterFS, or is something misconfigured in my ES cluster that has nothing to do with the underlying storage?


Hi @asp,

On the surface this looks like a storage problem. It seems the nodes think some of their files got truncated or otherwise corrupted behind their backs.

As far as I can see, the configuration starts 3 nodes (k8s is not my strong suit). Do they all have similar issues in their logs?

I wonder if you could be using the same path/mount for all nodes? If GlusterFS does not support, or is not configured for, proper file locking, weird things could certainly happen. It is recommended to have a separate data path per node.
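One way to guarantee separate data paths is to let the StatefulSet create a dedicated PersistentVolumeClaim per pod via `volumeClaimTemplates`. A minimal sketch follows; the StatefulSet name is guessed from the `poc-es-master-1` pod seen in the log, and the storage class, image, and size are illustrative assumptions, not taken from the poster's actual configuration:

```yaml
# Hypothetical excerpt; names/sizes are assumptions, not the poster's config.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: poc-es-master
spec:
  serviceName: poc-es-master
  replicas: 3
  selector:
    matchLabels:
      app: poc-es-master
  template:
    metadata:
      labels:
        app: poc-es-master
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  # Each replica (poc-es-master-0, -1, -2) gets its own PVC
  # (data-poc-es-master-0, ...), so data paths cannot be shared.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: glusterfs
        resources:
          requests:
            storage: 10Gi
```

With this layout, two Elasticsearch processes can never write to the same directory even if the underlying GlusterFS locking is unreliable, since each pod mounts a distinct claim.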

I will check if all nodes have similar errors.
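A quick way to compare the nodes, sketched here under assumptions: the pod names are inferred from `poc-es-master-1` in the log above, and the Elasticsearch endpoint depends on how the service is exposed, so adjust both to your setup. These commands need access to the live cluster:

```shell
# Grep each pod's log for the translog/shard failures
# (pod names assumed from "poc-es-master-1" in the log excerpt).
for i in 0 1 2; do
  echo "--- poc-es-master-$i ---"
  kubectl logs "poc-es-master-$i" | grep -E "AlreadyClosedException|failing shard"
done

# List shard states and why any are unassigned
# (replace localhost:9200 with your Elasticsearch service endpoint).
curl -s "http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node,unassigned.reason"
```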

Each node has its own volume, so the data is in fact logically separated per node.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.