Primary Index Randomly Getting Deleted.. possibly by Client Node

I have a 5 node cluster, with two data-only nodes, two master-eligible nodes and one client node. The client and master-eligible nodes are also my Logstash servers, as they are gargantuan servers that are underutilized when just running one instance of ES.

I'm testing this right now, but it appears that whenever my client node has ES turned on, the primary index gets randomly deleted between 5 and 120 minutes after its creation, and the master node logs:

[2016-09-27 08:48:43,542][WARN ][transport ] [phys-node2] Transport response handler not found of id [38656492]
[2016-09-27 08:48:43,543][WARN ][transport ] [phys-node2] Transport response handler not found of id [38656480]
[2016-09-27 08:48:43,543][WARN ][transport ] [phys-node2] Transport response handler not found of id [38656489]
[2016-09-27 08:48:43,688][WARN ][transport ] [phys-node2] Transport response handler not found of id [38656510]
[2016-09-27 08:48:43,698][WARN ][action.bulk ] [phys-node2] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest to [filebeat-2016.09.27] containing [1] requests]
[filebeat-2016.09.27] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:108)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:461)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:82)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:309)

And the data nodes log a similar warning:

[2016-09-27 08:48:38,820][WARN ][action.bulk ] [hyd-mon-storage01] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest to [filebeat-2016.09.27] containing [6] requests]
[filebeat-2016.09.27] IndexNotFoundException[no such index]

The client node, meanwhile, did not produce any relevant logs in /var/log/elasticsearch/*.log.

My ES configs:

Master-eligible nodes:

node.name: phys-node2
node.master: true
node.data: true

Data-Only Nodes:

node.name: hyd-mon-storage01
node.master: false
node.data: true

Client Node:

node.name: load-balance-node
node.master: false
node.data: false

While reviewing the elasticsearch.yml for the client node, I did notice I had the setting:

node.max_local_storage_nodes: 1

I set node.max_local_storage_nodes to zero on the client node, since it shouldn't be storing anything locally. I fired the problematic client node back up and will report back on whether max_local_storage_nodes was the culprit.
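
For anyone following along, here's a rough sketch of how to double-check whether the client node is actually holding any shard data on disk. It assumes the default package data path (/var/lib/elasticsearch) and the default HTTP port, so adjust for your install:

# A coordinating-only (client) node should not hold any shard data locally.
# The 2.x layout nests indices under the cluster name, e.g.
# /var/lib/elasticsearch/<cluster_name>/nodes/0/indices/
ls -R /var/lib/elasticsearch/*/nodes/0/indices/ 2>/dev/null

# The node's effective settings, as the cluster sees them
# (node name taken from the config above):
curl -s 'http://localhost:9200/_nodes/load-balance-node/settings?pretty'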

My Versions:

Elasticsearch: 2.4.0
Logstash: 2.2.4
Kibana: 4.6.1
Filebeat: 1.3.0

EDIT: After modifying node.max_local_storage_nodes, I'm still seeing my filebeat index randomly get deleted. Does anyone have any idea how to troubleshoot this? I'm not positive that my client node is the issue, but I believe it is, so I'm turning that node off for the time being to narrow things down further.

So my filebeat index was deleted again at almost exactly noon.

The relevant logs from all servers are very similar:

Master Node:

[2016-09-27 11:59:47,013][INFO ][cluster.metadata ] [phys-node2] [filebeat-2016.09.27] creating index, cause [auto(bulk api)], templates [filebeat], shards [5]/[1], mappings [default]

[2016-09-27 11:59:48,429][INFO ][cluster.routing.allocation] [phys-node2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2016.09.27][2], [filebeat-2016.09.27][0], [filebeat-2016.09.27][2], [filebeat-2016.09.27][3], [filebeat-2016.09.27][4]] ...]).

Standby Master Node:

[2016-09-27 11:59:44,558][WARN ][transport ] [phys-node2] Transport response handler not found of id [43221768]

[2016-09-27 11:59:47,013][INFO ][cluster.metadata ] [phys-node2] [filebeat-2016.09.27] creating index, cause [auto(bulk api)], templates [filebeat], shards [5]/[1], mappings [default]

[2016-09-27 11:59:48,429][INFO ][cluster.routing.allocation] [phys-node2] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2016.09.27][2], [filebeat-2016.09.27][0], [filebeat-2016.09.27][2], [filebeat-2016.09.27][3], [filebeat-2016.09.27][4]] ...]).

Data Nodes 1 & 2 (truncated due to this page's 5,000-character limit):

[2016-09-27 12:01:38,811][WARN ][action.bulk ] [hyd-mon-storage01] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest to [filebeat-2016.09.27] containing [30] requests]
[filebeat-2016.09.27] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:108)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:461)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:82)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:244)
at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:241)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:227)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:116)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
.....

And the client node has no record in the logs regarding the index.

Does anyone have any ideas?

I've still found no solution for this issue. The primary index was definitely getting deleted by the client node: I've disabled ES on that node, and the primary index hasn't been affected in the days since.

It's interesting that only the primary index was affected, as I have nearly a dozen other indices that were not deleted and showed no other uncharacteristic behavior.

As Elasticsearch is a consensus-based system, you should always aim to have three master-eligible nodes, not two as in your current cluster. You also need to set minimum_master_nodes to 2, assuming you have 2 or 3 master-eligible nodes.

I am not sure if this is related to the issues you are seeing, but it would be wise to eliminate this misconfiguration as a potential cause.
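
For reference, a minimal sketch of that change (assuming the default HTTP port; discovery.zen.minimum_master_nodes is also a dynamically updatable cluster setting in 2.x, so it can be applied without a restart):

# In elasticsearch.yml on each master-eligible node (takes effect on restart):
#   discovery.zen.minimum_master_nodes: 2

# Or applied dynamically via the cluster settings API:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": { "discovery.zen.minimum_master_nodes": 2 }
}'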

Good call, thanks Christian. I'd set minimum_master_nodes to 1 initially for failover, but I understand the potential split-brain issue.

As a follow-up, do you think it'd be wise to turn what was a client node into a master-eligible node so that I have redundancy in case one node goes down? I was trying to implement the client node for query performance tuning, but obviously if it's deleting my data, that's a no go.

Another way to alter the cluster could be to make the data nodes master eligible, keep one of the dedicated master nodes and convert the other one to a client node if the host specification allows. That would leave you with 2 client nodes, which removes the original one as a single point of failure. With this setup, it is possible that one of the data nodes could be elected master, but whether this is an issue or not depends on the expected load on the cluster.

I was okay with the client node being the single point of failure: everything is still being logged on the back end, you just wouldn't be able to access the front end until the client node was restored. That's fine with me. The issue is that when I attempt this, it deletes my data, possibly due to the split-brain issue you mentioned earlier. I may back up this index and try bringing up the client node again this evening; having raised minimum_master_nodes to two should prevent the client node from deleting my data (fingers crossed).

I'll allow another node to be master-eligible for the sake of redundancy, but I have to admit I'm still concerned my data will get lost. Additionally, these are all rather beefy physical machines; it'd be a shame to make another one a client node, as the node I'm looking to use as the client node is a VM.

UPDATE: I modified minimum_master_nodes to two and my data still got deleted when I added the client node.

In the meantime, I made that server a master-only node in hopes that will solve the issue. It's getting pretty frustrating having my data deleted every few hours.

Additionally, the cluster status is stuck at 'red' even though 96% of the shards are allocated; 14 are left just hanging, and reallocation is enabled. I have no idea what's going on with my stack anymore.
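
For reference, a couple of quick checks that show exactly which shards are unassigned (a sketch, assuming the default HTTP port on one of the nodes):

# Cluster-level summary, including the count of unassigned shards:
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-shard view; the UNASSIGNED rows are the shards left hanging:
curl -s 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED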

Keeping that server as 'master-only' didn't stop my data from getting deleted. The indices somehow still exist but have no shards assigned to them anymore, hence the Red status of my cluster. This must be because I set:

action.disable_delete_all_indices: true
action.destructive_requires_name: true

Furthermore, in the Marvel screenshot you can see when my indices were deleted. This is an exact example of the issue I'm seeing.

I pulled this node out of the cluster as I can't afford to have my data just randomly deleted like this.

Now I suppose I have to go explicitly delete these empty indices in order to even get my cluster in the green again.
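
Something like the following should do it (a sketch, assuming the default HTTP port). With action.destructive_requires_name set, wildcards and _all are rejected, so each empty index has to be deleted by its exact name:

# List the problem indices first:
curl -s 'http://localhost:9200/_cat/indices?v' | grep red

# Then delete each one explicitly by name, e.g.:
curl -XDELETE 'http://localhost:9200/filebeat-2016.09.27'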

action.disable_delete_all_indices: true

This used to mean that you could not use _all to delete all indices accidentally. However, this setting no longer exists; it was removed before 1.x.

action.destructive_requires_name: true

This was the replacement for that setting and it means that you must name indices rather than accidentally deleting them all.

If you manage to get past that setting, then you can delete the index (or indices), which means it will delete the shards and the cluster state-related details too.

This issue generally presents itself in 1.x/2.x because of a lack of index tombstoning (new in 5.0). When a node goes offline, for whatever reason, it doesn't stop you from deleting an index that had shards on that node. So, when the node comes back online, it presents those shards (and that partial index) back to the cluster as "dangling indices". The cluster then mistakenly tries to allow it to partially return rather than instruct it to delete those shards. In 5.0, we remember the index's UUID so that if something comes back, then we can say "no, you should delete that".

As a result, in 1.x / 2.x, you get stuck in a red state because a partial index is never going to be a complete index.
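
For anyone wanting to confirm this from the API side, the index UUID and creation date are assigned when an index is created, so if they change between checks, the index really was deleted and auto-recreated rather than just briefly unreachable. A minimal sketch, assuming the default HTTP port:

# index.uuid and index.creation_date are set at index creation time;
# if they differ from an earlier check, the index was deleted and re-created:
curl -s 'http://localhost:9200/filebeat-2016.09.27/_settings?pretty'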

As for the Marvel screenshot:

Marvel may be misleading you here because of how it works. The index summary is "Not Available" for that index because the cluster doesn't know about that index in the time range that you're looking at it. That generally means that the index has been deleted (or it's temporarily gone because all of the relevant nodes are missing). But, importantly, that's the latest detail about it in that time window.

The charts, however, span the entire time window. So there was activity, and then suddenly there's none when the index was deleted. This is true.


Wow, you're completely right. This used to be a test box, and a cron job had been added to delete the primary index at a set interval (likely because its storage was so low).

That's what I get for reusing equipment without reimaging =)
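
For anyone hitting something similar on re-purposed hardware, it's worth sweeping for leftover scheduled jobs before suspecting the cluster itself. A rough sketch, assuming the usual cron locations:

# Leftover scheduled jobs are a common cause of "mystery" index deletions:
crontab -l
ls /etc/cron.d /etc/cron.daily /etc/cron.hourly

# Look for anything invoking curator or issuing DELETE requests:
grep -riE 'curator|XDELETE' /etc/cron* /var/spool/cron 2>/dev/null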

Thanks a bunch for looking into this, @pickypg; you were completely correct in your hunch that a curator job was deleting my data.
