Elasticsearch Frequently Fails and Slows Down After Upgrade to 6.4

Hi,

I have a problem where Elasticsearch frequently fails, and shard allocation after a restart is very slow, since upgrading from 6.2 to 6.4.

[2018-09-13T17:26:23,546][WARN ][o.e.i.c.IndicesClusterStateService] [ELK1-preprod] [[clientzone_signaturepos-2018.08.24][2]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [clientzone_signaturepos-2018.08.24][2]: Recovery failed from {ELK2-preprod}{X9uh1liyTGK2LWyMGv3BOA}{Ic15dOerSW6yGmygNTE-zw}{10.56.20.95}{10.56.20.95:9300}{ml.machine_memory=33568194560, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} into {ELK1-preprod}{ykLXYsriTPyQkkOQVKK41A}{GG-Vbx6lS1GHblvUE-QrNQ}{10.56.20.94}{10.56.20.94:9300}{ml.machine_memory=33568194560, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.doRecovery(PeerRecoveryTargetService.java:282) [elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.access$900(PeerRecoveryTargetService.java:80) [elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$RecoveryRunner.doRun(PeerRecoveryTargetService.java:623) [elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.4.0.jar:6.4.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: org.elasticsearch.transport.RemoteTransportException: [ELK2-preprod][10.56.20.95:9300][internal:index/shard/recovery/start_recovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[1] prepare target for translog failed
	at org.elasticsearch.indices.recovery.RecoverySourceHandler.recoverToTarget(RecoverySourceHandler.java:191) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoverySourceService.recover(PeerRecoverySourceService.java:98) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoverySourceService.access$000(PeerRecoverySourceService.java:50) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoverySourceService$StartRecoveryTransportRequestHandler.messageReceived(PeerRecoverySourceService.java:107) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.indices.recovery.PeerRecoverySourceService$StartRecoveryTransportRequestHandler.messageReceived(PeerRecoverySourceService.java:104) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:30) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:251) ~[?:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:309) ~[?:?]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1605) ~[elasticsearch-6.4.0.jar:6.4.0]
	... 5 more
Caused by: org.elasticsearch.transport.RemoteTransportException: [ELK1-preprod][10.56.20.94:9300][internal:index/shard/recovery/prepare_translog]
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: failed to create engine
	at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:199) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:160) ~[elasticsearch-6.4.0.jar:6.4.0]
	at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25) ~[elasticsearch-6.4.0.jar:6.4.0]
	... 5 more 
[2018-09-13T17:26:34,014][WARN ][o.e.g.DanglingIndicesState] [ELK1-preprod] [[clientzone_integrationconnector-2018.07.16/ubGfO0sGSCqV8LZxmkxMVQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-09-13T17:26:34,014][WARN ][o.e.g.DanglingIndicesState] [ELK1-preprod] [[error-partnerzone_partner-2018.07.16/fzlLkW57T6iZfTUzAhDgtw]] can not be imported as a dangling index, as index with same name already exists in cluster metadata

It's been 7 hours, active_shards_percent_as_number is only 59%, and cluster health is red.

Can someone give me advice on how to resolve this problem?

What is the output of the cluster stats API? How many indices/shards do you have in the cluster?
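If you have not used it before, the stats can be fetched with a single request (this assumes the default HTTP port and a node reachable at `localhost`; adjust the host/port for your cluster):

```shell
# Fetch cluster-wide stats; ?human formats byte counts, ?pretty indents the JSON.
# Replace localhost:9200 with the address of one of your nodes if needed.
curl -s 'http://localhost:9200/_cluster/stats?human&pretty'
```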

Hi,

One of my nodes just failed with this error:

[2018-09-13T17:40:05,492][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [ELK2-preprod] fatal error in thread [elasticsearch[ELK2-preprod][generic][T#3]], exiting
java.lang.StackOverflowError: null
at org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:89) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeBytes(StreamOutput.java:172) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:399) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeOptionalString(StreamOutput.java:320) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeException(StreamOutput.java:937) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.ElasticsearchException.writeStackTraces(ElasticsearchException.java:743) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeException(StreamOutput.java:942) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.common.io.stream.StreamOutput.writeException(StreamOutput.java:940) ~[elasticsearch-6.4.0.jar:6.4.0]
at org.elasticsearch.ElasticsearchException.writeStackTraces(ElasticsearchException.java:743) ~[elasticsearch-6.4.0.jar:6.4.0]

As for the cluster stats, here is the output:

{
  "_nodes" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "cluster_name" : "elastic-preprod",
  "timestamp" : 1536837474401,
  "status" : "red",
  "indices" : {
    "count" : 441,
    "shards" : {
      "total" : 2134,
      "primaries" : 2123,
      "replication" : 0.0051813471502590676,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 9,
          "avg" : 4.839002267573696
        },
        "primaries" : {
          "min" : 1,
          "max" : 5,
          "avg" : 4.814058956916099
        },
        "replication" : {
          "min" : 0.0,
          "max" : 1.0,
          "avg" : 0.007709750566893424
        }
      }
    },
    "docs" : {
      "count" : 165795439,
      "deleted" : 392243
    },
    "store" : {
      "size" : "21.3gb",
      "size_in_bytes" : 22957601319
    },
    "fielddata" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "total_count" : 0,
      "hit_count" : 0,
      "miss_count" : 0,
      "cache_size" : 0,
      "cache_count" : 0,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 9162,
      "memory" : "178.8mb",
      "memory_in_bytes" : 187552499,
      "terms_memory" : "144.8mb",
      "terms_memory_in_bytes" : 151834956,
      "stored_fields_memory" : "9.7mb",
      "stored_fields_memory_in_bytes" : 10178872,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "11.2mb",
      "norms_memory_in_bytes" : 11792512,
      "points_memory" : "7.5mb",
      "points_memory_in_bytes" : 7967983,
      "doc_values_memory" : "5.5mb",
      "doc_values_memory_in_bytes" : 5778176,
      "index_writer_memory" : "0b",
      "index_writer_memory_in_bytes" : 0,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "332kb",
      "fixed_bit_set_memory_in_bytes" : 340032,
      "max_unsafe_auto_id_timestamp" : 1536834850703,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 2,
      "data" : 2,
      "coordinating_only" : 0,
      "master" : 2,
      "ingest" : 2
    },
    "versions" : [
      "6.4.0"
    ],
    "os" : {
      "available_processors" : 16,
      "allocated_processors" : 16,
      "names" : [
        {
          "name" : "Linux",
          "count" : 2
        }
      ],
      "mem" : {
        "total" : "62.5gb",
        "total_in_bytes" : 67136389120,
        "free" : "1.6gb",
        "free_in_bytes" : 1794719744,
        "used" : "60.8gb",
        "used_in_bytes" : 65341669376,
        "free_percent" : 3,
        "used_percent" : 97
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 17
      },
      "open_file_descriptors" : {
        "min" : 9053,
        "max" : 9783,
        "avg" : 9418
      }
    },
    "jvm" : {
      "max_uptime" : "9.9m",
      "max_uptime_in_millis" : 599770,
      "versions" : [
        {
          "version" : "1.8.0_161",
          "vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
          "vm_version" : "25.161-b12",
          "vm_vendor" : "Oracle Corporation",
          "count" : 2
        }
      ],
      "mem" : {
        "heap_used" : "2.6gb",
        "heap_used_in_bytes" : 2807195992,
        "heap_max" : "27.8gb",
        "heap_max_in_bytes" : 29925310464
      },
      "threads" : 182
    },
    "fs" : {
      "total" : "393.4gb",
      "total_in_bytes" : 422487998464,
      "free" : "224.2gb",
      "free_in_bytes" : 240744771584,
      "available" : "208.1gb",
      "available_in_bytes" : 223445872640
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 2
      },
      "http_types" : {
        "security4" : 2
      }
    }
  }
}

Thanks in advance!

That is a very large number of indices and shards for the data volume you have, which, as you are seeing, will cause problems. Please read this blog post for guidance on shards and sharding, and then try to dramatically reduce the number of shards you use, e.g. by reindexing your data.
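For a rough sense of why: a back-of-the-envelope calculation using only the numbers already posted above (22,957,601,319 bytes of store across 2,134 shards) gives the average shard size:

```shell
# Average shard size, taken from the cluster stats posted above.
total_bytes=22957601319
shard_count=2134
echo "average shard size: $(( total_bytes / shard_count / 1024 / 1024 )) MB"
```

That works out to roughly 10 MB per shard, whereas shards in the tens-of-gigabytes range are commonly recommended. Every shard carries fixed overhead (segments, heap, recovery bookkeeping), which is a large part of why recovering thousands of tiny shards on two nodes is so slow.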

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.