Search_phase_execution_exception error with all shards failed

Hi team,
I am facing this search_phase_execution_exception error.

Please find the details.
curl -X GET "localhost:9200/_cluster/health?filter_path=status,*_shards&pretty"
{
"status" : "red",
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 87,
"delayed_unassigned_shards" : 0
}

curl -X GET "localhost:9200/_cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state&pretty"
index shard prirep state node unassigned.reason
casa 4 p UNASSIGNED ALLOCATION_FAILED
casa 4 r UNASSIGNED CLUSTER_RECOVERED
casa 1 p UNASSIGNED ALLOCATION_FAILED
casa 1 r UNASSIGNED CLUSTER_RECOVERED
casa 3 p UNASSIGNED ALLOCATION_FAILED
casa 3 r UNASSIGNED CLUSTER_RECOVERED
casa 2 p UNASSIGNED ALLOCATION_FAILED
casa 2 r UNASSIGNED CLUSTER_RECOVERED
casa 0 p UNASSIGNED ALLOCATION_FAILED
casa 0 r UNASSIGNED CLUSTER_RECOVERED
datainventory 0 p UNASSIGNED ALLOCATION_FAILED
datainventory 0 r UNASSIGNED CLUSTER_RECOVERED
dbcount 0 p UNASSIGNED ALLOCATION_FAILED
dbcount 0 r UNASSIGNED CLUSTER_RECOVERED
generate_subnet_usage 0 p UNASSIGNED ALLOCATION_FAILED

curl http://localhost:9200/poddata/poddetails/_search?size=100
{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}

It looks like all shards are unassigned. Has something happened to your storage?

Which version of Elasticsearch are you using?

Where is this cluster deployed?

What type of hardware and storage are you using?

What is the size and specification of the cluster?

Can you also please share the Elasticsearch logs.
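In the meantime, the allocation explain API usually shows why a particular shard stays unassigned. A minimal sketch against one of the shards listed above, using the casa index as an example:

curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" -H 'Content-Type: application/json' -d'
{
  "index": "casa",
  "shard": 0,
  "primary": true
}
'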

Elasticsearch version:
"number" : "6.6.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "3bd3e59",
"build_date" : "2019-03-06T15:16:26.864148Z",
"build_snapshot" : false,
"lucene_version" : "7.6.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"

The cluster is deployed on a Linux VM.
Hardware: 32 GB RAM

{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elasticsearch",
"cluster_uuid" : "PnFXlHnwRcKp9fpOo9F-cA",
"timestamp" : 1684908616554,
"status" : "red",
"indices" : {
"count" : 0,
"shards" : { },
"docs" : {
"count" : 0,
"deleted" : 0
},
"store" : {
"size_in_bytes" : 0
},
"fielddata" : {
"memory_size_in_bytes" : 0,
"evictions" : 0
},
"query_cache" : {
"memory_size_in_bytes" : 0,
"total_count" : 0,
"hit_count" : 0,
"miss_count" : 0,
"cache_size" : 0,
"cache_count" : 0,
"evictions" : 0
},
"completion" : {
"size_in_bytes" : 0
},
"segments" : {
"count" : 0,
"memory_in_bytes" : 0,
"terms_memory_in_bytes" : 0,
"stored_fields_memory_in_bytes" : 0,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 0,
"points_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 0,
"index_writer_memory_in_bytes" : 0,
"version_map_memory_in_bytes" : 0,
"fixed_bit_set_memory_in_bytes" : 0,
"max_unsafe_auto_id_timestamp" : -9223372036854775808,
"file_sizes" : { }
}
},
"nodes" : {
"count" : {
"total" : 1,
"data" : 1,
"coordinating_only" : 0,
"master" : 1,
"ingest" : 1
},
"versions" : [
"6.6.2"
],
"os" : {
"available_processors" : 12,
"allocated_processors" : 12,
"names" : [
{
"name" : "Linux",
"count" : 1
}
],
"pretty_names" : [
{
"pretty_name" : "Linux Server 7.2",
"count" : 1
}
],
"mem" : {
"total_in_bytes" : 64427245568,
"free_in_bytes" : 50525073408,
"used_in_bytes" : 13902172160,
"free_percent" : 78,
"used_percent" : 22
}
},
"process" : {
"cpu" : {
"percent" : 0
},
"open_file_descriptors" : {
"min" : 325,
"max" : 325,
"avg" : 325
}
},
"jvm" : {
"max_uptime_in_millis" : 71188524,
"versions" : [
{
"version" : "1.8.0_201",
"vm_name" : "OpenJDK 64-Bit Server VM",
"vm_version" : "25.201-b09",
"vm_vendor" : "Linux",
"count" : 1
}
],
"mem" : {
"heap_used_in_bytes" : 513874816,
"heap_max_in_bytes" : 1037959168
},
"threads" : 68
},
"fs" : {
"total_in_bytes" : 375571546112,
"free_in_bytes" : 249710772224,
"available_in_bytes" : 249710772224
},
"plugins" : ,
"network_types" : {
"transport_types" : {
"security4" : 1
},
"http_types" : {
"security4" : 1
}
}
}
}

I increased the heap size but am still getting the same error.

This is very old and has been EOL for a long time. I would strongly recommend upgrading.

That is a very small heap size for Elasticsearch. Given the resources available on the host where it is running, I would recommend increasing it, e.g. to 2GB.
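For the rpm install shown in the version output, the heap is normally configured in /etc/elasticsearch/jvm.options; a minimal sketch of the change (2g is just an example value, and -Xms/-Xmx should be set to the same size):

# /etc/elasticsearch/jvm.options -- set initial and max heap to the same value
-Xms2g
-Xmx2g

After editing, restart the service (e.g. sudo systemctl restart elasticsearch) and confirm the new value in the "heap size [...]" line of the startup log.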

Something has happened to your cluster so it is important to share the logs so we can see what Elasticsearch is unhappy about.
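Assuming the default rpm layout and the cluster name elasticsearch from the stats above, something like this should capture the relevant part:

sudo tail -n 500 /var/log/elasticsearch/elasticsearch.log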

[2023-05-24T06:15:53,818][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T06:15:53,818][WARN ][r.suppressed ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2023-05-24T06:15:56,824][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T06:15:56,824][WARN ][r.suppressed ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2023-05-24T06:16:03,834][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T06:16:03,834][WARN ][r.suppressed ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]

I increased the heap size but am still getting the same error.

"mem" : {
"total_in_bytes" : 64427245568,
"free_in_bytes" : 50994049024,
"used_in_bytes" : 13433196544,
"free_percent" : 79,
"used_percent" : 21
}
},

Sorry for interrupting, but can you help me too? I have a similar problem: my form returns a Service Unavailable error (code 503), all shards failed.

[2023-05-24T08:00:42,619][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T08:00:42,619][WARN ][r.suppressed             ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2023-05-24T08:00:45,627][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T08:00:45,627][WARN ][r.suppressed             ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
[2023-05-24T08:00:52,639][DEBUG][o.e.a.s.TransportSearchAction] [g3GaOxv] All shards failed for phase: [query]
[2023-05-24T08:00:52,639][WARN ][r.suppressed             ] [g3GaOxv] path: /exadatanodedbcount/exadatanodedbcountdetails/_search, params: {size=1, index=exadatanodedbcount, type=exadatanodedbcountdetails}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:209) ~[elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:188) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.6.2.jar:6.6.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]

I already increased the JVM memory size.

We will need to see the full log from startup to see why the shards are the way they are.

[2023-05-24T08:22:33,218][WARN ][o.e.b.Natives ] [unknown] unable to load JNA native support library, native methods will be disabled.
java.lang.UnsatisfiedLinkError: /tmp/elasticsearch-2925505177850559673/jna--1985354563/jna8894232920475473689.tmp: /tmp/elasticsearch-2925505177850559673/jna--1985354563/jna8894232920475473689.tmp: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method) ~[?:1.8.0_201]
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) ~[?:1.8.0_201]
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824) ~[?:1.8.0_201]
at java.lang.Runtime.load0(Runtime.java:809) ~[?:1.8.0_201]
at java.lang.System.load(System.java:1086) ~[?:1.8.0_201]
at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) ~[jna-4.5.1.jar:4.5.1 (b0)]
at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) ~[jna-4.5.1.jar:4.5.1 (b0)]
at com.sun.jna.Native.<clinit>(Native.java:190) ~[jna-4.5.1.jar:4.5.1 (b0)]
at java.lang.Class.forName0(Native Method) ~[?:1.8.0_201]
at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_201]
at org.elasticsearch.bootstrap.Natives.<clinit>(Natives.java:45) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:102) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.6.2.jar:6.6.2]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) [elasticsearch-6.6.2.jar:6.6.2]
[2023-05-24T08:22:33,226][WARN ][o.e.b.Natives ] [unknown] cannot check if running as root because JNA is not available
[2023-05-24T08:22:33,226][WARN ][o.e.b.Natives ] [unknown] cannot install system call filter because JNA is not available
[2023-05-24T08:22:33,227][WARN ][o.e.b.Natives ] [unknown] cannot register console handler because JNA is not available
[2023-05-24T08:22:33,228][WARN ][o.e.b.Natives ] [unknown] cannot getrlimit RLIMIT_NPROC because JNA is not available
[2023-05-24T08:22:33,229][WARN ][o.e.b.Natives ] [unknown] cannot getrlimit RLIMIT_AS because JNA is not available
[2023-05-24T08:22:33,229][WARN ][o.e.b.Natives ] [unknown] cannot getrlimit RLIMIT_FSIZE because JNA is not available
[2023-05-24T08:22:33,512][INFO ][o.e.e.NodeEnvironment ] [g3GaOxv] using [1] data paths, mounts [[/u01]], net usable_space [232.5gb], net total_space [349.7gb], types [nfs]
[2023-05-24T08:22:33,513][INFO ][o.e.e.NodeEnvironment ] [g3GaOxv] heap size [989.8mb], compressed ordinary object pointers [true]
[2023-05-24T08:22:34,579][INFO ][o.e.n.Node ] [g3GaOxv] node name derived from node ID [g3GaOxvHRgWrWPK5ryD8Ng]; set [node.name] to override
[2023-05-24T08:22:34,579][INFO ][o.e.n.Node ] [g3GaOxv] version[6.6.2], pid[18066], build[default/rpm/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Linux/4.1.12-124.14.2.el7uek.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_201/25.201-b09]
[2023-05-24T08:22:34,579][INFO ][o.e.n.Node ] [g3GaOxv] JVM arguments [-Xms2g, -Xmx4g, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-2925505177850559673, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [aggs-matrix-stats]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [analysis-common]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [ingest-common]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [lang-expression]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [lang-mustache]
[2023-05-24T08:22:36,699][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [lang-painless]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [mapper-extras]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [parent-join]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [percolator]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [rank-eval]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [reindex]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [repository-url]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [transport-netty4]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [tribe]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-ccr]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-core]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-deprecation]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-graph]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-ilm]
[2023-05-24T08:22:36,700][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-logstash]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-ml]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-monitoring]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-rollup]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-security]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-sql]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-upgrade]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] loaded module [x-pack-watcher]
[2023-05-24T08:22:36,701][INFO ][o.e.p.PluginsService ] [g3GaOxv] no plugins loaded
[2023-05-24T08:22:41,513][INFO ][o.e.x.s.a.s.FileRolesStore] [g3GaOxv] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2023-05-24T08:22:42,040][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [g3GaOxv] [controller/18171] [Main.cc@109] controller (64 bit): Version 6.6.2 (Build 62531230b275d3) Copyright (c) 2019 Elasticsearch BV
[2023-05-24T08:22:42,644][DEBUG][o.e.a.ActionModule ] [g3GaOxv] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2023-05-24T08:22:43,126][INFO ][o.e.d.DiscoveryModule ] [g3GaOxv] using discovery type [zen] and host providers [settings]
[2023-05-24T08:22:44,126][INFO ][o.e.n.Node ] [g3GaOxv] initialized
[2023-05-24T08:22:44,126][INFO ][o.e.n.Node ] [g3GaOxv] starting ...
[2023-05-24T08:22:44,297][INFO ][o.e.t.TransportService ] [g3GaOxv] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2023-05-24T08:22:44,409][WARN ][o.e.b.BootstrapChecks ] [g3GaOxv] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2023-05-24T08:22:47,465][INFO ][o.e.c.s.MasterService ] [g3GaOxv] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {g3GaOxv}{g3GaOxvHRgWrWPK5ryD8Ng}{mTT6OgACRS6kQtGk8nQriw}{localhost}{127.0.0.1:9300}{ml.machine_memory=64427245568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2023-05-24T08:22:47,473][INFO ][o.e.c.s.ClusterApplierService] [g3GaOxv] new_master {g3GaOxv}{g3GaOxvHRgWrWPK5ryD8Ng}{mTT6OgACRS6kQtGk8nQriw}{localhost}{127.0.0.1:9300}{ml.machine_memory=64427245568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {g3GaOxv}{g3GaOxvHRgWrWPK5ryD8Ng}{mTT6OgACRS6kQtGk8nQriw}{localhost}{127.0.0.1:9300}{ml.machine_memory=64427245568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2023-05-24T08:22:47,532][INFO ][o.e.h.n.Netty4HttpServerTransport] [g3GaOxv] publish_address {10.199.6.99:9200}, bound_addresses {0.0.0.0:9200}
[2023-05-24T08:22:47,534][INFO ][o.e.n.Node ] [g3GaOxv] started
[2023-05-24T08:22:48,577][INFO ][o.e.l.LicenseService ] [g3GaOxv] license [a4817da0-aac1-492e-b78c-d8ac6b365b51] mode [basic] - valid
[2023-05-24T08:22:48,592][INFO ][o.e.g.GatewayService ] [g3GaOxv] recovered [26] indices into cluster_state
[2023-05-24T08:22:48,946][WARN ][o.e.i.e.Engine ] [g3GaOxv] [deleteanddeepclean][0] could not lock IndexWriter
org.apache.lucene.store.LockObtainFailedException: Lock held by another program: /u01/elasticsearch/nodes/0/indices/kGZBwqnoRvC7UxChnoYPGQ/0/index/write.lock
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:130) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:727) ~[lucene-core-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:44:20]
at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:2199) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.engine.InternalEngine.createWriter(InternalEngine.java:2187) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:203) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:168) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.shard.IndexShard.innerOpenEngineAndTranslog(IndexShard.java:1446) ~[elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1400) ~[elasticsearch-6.6.2.jar:6.6.2]

Any update on this? I added the log from scratch.

@Christian_Dahlqvist
Any update on this issue???

What operating system and platform are you running this on?

What type of storage are you using? It does seem like some other process is locking the files Elasticsearch needs. Are you using some form of shared storage, e.g. NFS?
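A hedged sketch of checks you could run on the host; the lock path is taken from the log above, and note that on NFS a lock held by another client (or a stale lock) may not show a local owner:

df -T /u01
sudo lsof /u01/elasticsearch/nodes/0/indices/kGZBwqnoRvC7UxChnoYPGQ/0/index/write.lock
sudo fuser -v /u01/elasticsearch/nodes/0/indices/kGZBwqnoRvC7UxChnoYPGQ/0/index/write.lock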

Operating System: Oracle Linux
We are using shared NFS storage.

NFS is generally not recommended for Elasticsearch as it can be very slow for an I/O intensive application like Elasticsearch and has to be mounted correctly for it to work. I suspect the use of NFS is causing the problems you are seeing. Have a look at the docs I linked to for further details.
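If you want to double-check how the data path is mounted, for example (path as shown in the startup log):

mount | grep ' /u01 '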

Yes, actually we have limited resources,
so that's why we are using NFS shared storage.

Is there a way we can recover the data one time?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.