I'm trying to load around 20 GB of data into an index. My problem is that while I do this, the uuid, docs.count and store.size suddenly change when I query:
curl http://localhost:9200/_cat/indices?v
An example:
green open be eEufIf0GTUmBNsFOhgu7Nw 5 1 935877 41632 15.8gb 7.9gb
I get this response. Then, after some more files have been uploaded into ES, the same query gives me:
green open be oZxXwAAoRiS0Vt6XsGSUzw 5 1 462018 7922 4.6gb 2.2gb
A different uuid, half the number of docs and a third of the storage.
Does anyone have a clue what's happening? My cluster is totally isolated from the internet within GCP, with no external IPs.
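In case it's useful, I've also been checking the index settings to confirm the index really is being replaced, roughly like this (as far as I understand, the response includes the index uuid and creation_date, so both should change if the index were deleted and recreated):

curl http://localhost:9200/be/_settings?pretty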
[2019-04-06T15:35:06,021][WARN ][o.e.a.b.TransportShardBulkAction] [elasticsearch-elastic-vm-0] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest [[be][4]] containing [524] requests]
org.elasticsearch.index.IndexNotFoundException: no such index
    at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:137) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.primary(TransportReplicationAction.java:745) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:680) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:835) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:297) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:185) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService.lambda$publishAndApplyChanges$7(ClusterService.java:777) ~[elasticsearch-5.6.8.jar:5.6.8]
    at java.util.concurrent.ConcurrentHashMap$KeySpliterator.forEachRemaining(ConcurrentHashMap.java:3527) [?:1.8.0_181]
    at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) [?:1.8.0_181]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) [?:1.8.0_181]
    at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:774) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.8.jar:5.6.8]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
[2019-04-06T15:35:06,028][WARN ][o.e.a.b.TransportShardBulkAction] [elasticsearch-elastic-vm-0] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest [[be][1]] containing [468] requests]
org.elasticsearch.index.IndexNotFoundException: no such index
    at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:137) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.primary(TransportReplicationAction.java:745) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:680) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:835) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:297) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:185) ~[elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService.lambda$publishAndApplyChanges$7(ClusterService.java:777) ~[elasticsearch-5.6.8.jar:5.6.8]
    at java.util.concurrent.ConcurrentHashMap$KeySpliterator.forEachRemaining(ConcurrentHashMap.java:3527) [?:1.8.0_181]
    at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) [?:1.8.0_181]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) [?:1.8.0_181]
    at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:774) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:587) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-5.6.8.jar:5.6.8]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-5.6.8.jar:5.6.8]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
Phew, that's a lot of logs; I hope I got all the relevant ones.
I'm quite new to Elasticsearch, but it looks like some issue with the bulk import: it can't find the index and then creates a new one?
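If that is what's happening, would it help to disable automatic index creation, so the bulk requests fail instead of silently recreating the index? Something like this in elasticsearch.yml on every node (just a sketch of what I mean):

# refuse to auto-create indices; bulk requests against a missing index will then be rejected
action.auto_create_index: false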
Yes, particularly elasticsearch-cluster.log and the ones of the form elasticsearch-cluster-YYYY-MM-DD.log. Are you saying that they only contain this "unexpected error during the primary phase", on all three of your nodes?
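It might also be worth grepping those log files for index creation and deletion events, something along these lines (assuming the default log directory; the exact message wording may differ slightly):

grep -iE 'creating index|deleting index' /var/log/elasticsearch/elasticsearch-cluster*.log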
Thanks for the help! I'm the only one with access to the cluster, so what could trigger a delete of an index like that? Am I loading too much data too fast?
Nope, no external IPs. It's on GCP within the VPC (Virtual Private Cloud) and I access it directly from App Engine. If I need to get into the machines, I spin up a bastion host and SSH into the machine running ES.
No, Elasticsearch definitely doesn't delete anything unless explicitly instructed to. From Elasticsearch's point of view there must be something calling DELETE /be (or the transport client equivalent) or using the remove_index command of the Index Aliases API.
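For reference, the two calls that would remove the index look roughly like this:

curl -XDELETE http://localhost:9200/be

curl -XPOST http://localhost:9200/_aliases -H 'Content-Type: application/json' -d '{ "actions": [ { "remove_index": { "index": "be" } } ] }'

So it's worth checking whether anything in your App Engine code or tooling could be issuing one of those, perhaps as part of a "recreate the index before loading" step.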