Elasticsearch worked and then stopped

Elasticsearch was working and then stopped. It starts, runs for about a minute, and then crashes. This is what appears in the logs:

"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]"] }
fatal error in thread [elasticsearch[32af34a1fa61][write][T#1]], exiting
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.ByteBlockPool$DirectTrackingAllocator.getByteBlock(ByteBlockPool.java:105)
        at org.apache.lucene.util.ByteBlockPool.nextBuffer(ByteBlockPool.java:205)
        at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
        at org.apache.lucene.index.SortedSetDocValuesWriter.addOneValue(SortedSetDocValuesWriter.java:116)
        at org.apache.lucene.index.SortedSetDocValuesWriter.addValue(SortedSetDocValuesWriter.java:87)
        at org.apache.lucene.index.DefaultIndexingChain.indexDocValue(DefaultIndexingChain.java:721)
        at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:561)
        at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:488)
        at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
        at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415)
        at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1757)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1400)
        at org.elasticsearch.index.engine.InternalEngine.addDocs(InternalEngine.java:1190)
        at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1127)
        at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:954)
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:885)
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:847)
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:804)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:268)
        at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:158)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:203)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:109)
        at o
"at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:109) [netty-common-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:89) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:994) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:796) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1020) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:299) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at org.elasticsearch.http.netty4.Netty4HttpChannel.sendResponse(Netty4HttpChannel.java:32) [transport-netty4-client-7.14.0.jar:7.14.0]",
"at org.elasticsearch.http.DefaultRestChannel.sendResponse(DefaultRestChannel.java:127) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.rest.RestController$ResourceHandlingHttpChannel.sendResponse(RestController.java:521) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.rest.action.RestResponseListener.processResponse(RestResponseListener.java:26) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.rest.action.RestActionListener.onResponse(RestActionListener.java:38) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:83) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:77) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:163) [x-pack-security-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:217) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$RunBeforeActionListener.onResponse(ActionListener.java:387) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:101) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:538) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:533) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:92) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:38) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:66) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:854) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:826) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:885) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:735) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:845) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:324) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:241) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:590) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:673) [elasticsearch-7.14.0.jar:7.14.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]"] }

Can you please add more context around this issue than just a single stack trace? Writing a good problem statement requires some work. For example (the commands after this list show one way to gather most of it):

  • Elasticsearch version
  • Operating system in use
  • Installed distribution
  • Operating system monitoring at the time of that issue
  • How much heap do you have configured?
  • Was the system under load?
  • Are you indexing data at that time?
  • How much data?
  • Are you running bulk requests? If so, which size do those have?
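For reference, assuming the node is reachable on localhost:9200 and you authenticate as the elastic user, something like the following gathers most of the heap, indexing, and data-volume details above (adjust host and credentials to your setup):

    # heap size, heap usage and RAM per node
    curl -u elastic -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max,ram.max'
    # JVM and indexing statistics per node
    curl -u elastic -s 'http://localhost:9200/_nodes/stats/jvm,indices?human&pretty'
    # index sizes and document counts
    curl -u elastic -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size'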

Thanks a lot!


ELK Stack 7.14.0
elasticsearch 8.30

Deployed via Docker

# docker version
Client:
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.8
 Git commit:        20.10.7-0ubuntu1~20.04.1
 Built:             Wed Aug  4 22:52:25 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       20.10.7-0ubuntu1~20.04.1
  Built:            Wed Aug  4 19:07:47 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.2-0ubuntu1~20.04.2
  GitCommit:
 runc:
  Version:          1.0.0~rc95-0ubuntu1~20.04.2
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:

Operating system inside the container: CentOS Linux release 8.4.2105

It is configured on a single server.
There was no load.
There was no data indexing at that time.

Thanks

Judging by the stack trace you pasted above, it contains

  at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:268)

this means there has to be some indexing happening (which could be due to Elasticsearch itself, but still).
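One way to confirm this (a sketch, assuming localhost:9200 and the elastic user) is to look at the write thread pool and the per-node indexing counters:

    # active/queued/rejected tasks on the write thread pool
    curl -u elastic -s 'http://localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected'
    # cumulative indexing statistics per node
    curl -u elastic -s 'http://localhost:9200/_nodes/stats/indices/indexing?human&pretty'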

That said, you have not answered all of the questions, nor shared all of the logs.

Also, there is no 8.30 version of Elasticsearch. I suppose you were referring to something else here; can you specify?
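You can confirm the exact version either from the root endpoint or from inside the container (the container name below is a placeholder; adjust host and credentials to your setup):

    curl -u elastic -s 'http://localhost:9200/?pretty'
    docker exec -it <container> bin/elasticsearch --version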

Thank you!

 "2021-08-30T12:15:14,999Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][104] overhead, spent [1.2s] collecting in the last [1.3s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:15,074Z", "level": "ERROR", "component": "o.e.x.m.c.c.ClusterStatsCollector", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "collector [cluster_stats] failed to collect data", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: ",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:661) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:89) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:28) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:33) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]",
"Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<reduce_aggs>] would be [257852788/245.9mb], which is larger than the limit of [255013683/243.1mb], real usage: [257852696/245.9mb], new bytes reserved: [92/92b], usages [request=1190/1.1kb, fielddata=3246/3.1kb, in_flight_requests=0/0b, model_inference=0/0b, eql_sequence=0/0b, accounting=2994100/2.8mb]",
"at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:335) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:97) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.QueryPhaseResultConsumer$PendingMerges.addEstimateAndMaybeBreak(QueryPhaseResultConsumer.java:272) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.QueryPhaseResultConsumer.reduce(QueryPhaseResultConsumer.java:129) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:98) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:36) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:84) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.14.0.jar:7.14.0]",
"... 6 more"] }
{"type": "server", "timestamp": "2021-08-30T12:15:16,128Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][105] overhead, spent [971ms] collecting in the last [1.1s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:17,139Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][106] overhead, spent [896ms] collecting in the last [1s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:17,158Z", "level": "INFO", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:17,158Z", "level": "INFO", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:18,240Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][107] overhead, spent [982ms] collecting in the last [1.1s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:18,946Z", "level": "ERROR", "component": "o.e.x.m.c.m.JobStatsCollector", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "collector [job_stats] failed to collect data", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: ",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:661) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:89) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:28) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:33) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]",
"Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<reduce_aggs>] would be [257661548/245.7mb], which is larger than the limit of [255013683/243.1mb], real usage: [257661456/245.7mb], new bytes reserved: [92/92b], usages [request=275/275b, fielddata=3246/3.1kb, in_flight_requests=0/0b, model_inference=0/0b, eql_sequence=0/0b, accounting=2994100/2.8mb]",
"at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:335) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:97) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.QueryPhaseResultConsumer$PendingMerges.addEstimateAndMaybeBreak(QueryPhaseResultConsumer.java:272) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.QueryPhaseResultConsumer.reduce(QueryPhaseResultConsumer.java:129) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:98) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase.access$000(FetchSearchPhase.java:36) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:84) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-7.14.0.jar:7.14.0]",
"... 6 more"] }
{"type": "server", "timestamp": "2021-08-30T12:15:19,427Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][108] overhead, spent [1s] collecting in the last [1.1s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:20,999Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][109] overhead, spent [1.4s] collecting in the last [1.5s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:22,443Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][110] overhead, spent [1.3s] collecting in the last [1.4s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:15:23,795Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "[gc][111] overhead, spent [1.2s] collecting in the last [1.3s]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
java.lang.OutOfMemoryError: Java heap space
Dumping heap to data/java_pid7.hprof ...
Unable to create data/java_pid7.hprof: File exists

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ticker-schedule-trigger-engine"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][management][T#2]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][generic][T#25]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][generic][T#4]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][system_critical_read][T#1]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][generic][T#22]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][write][T#1]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Connection evictor"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][generic][T#1]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[watcher-flush-scheduler][T#1]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][scheduler][T#1]"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[0f862ee523a5][generic][T#26]"
{"type": "server", "timestamp": "2021-08-30T12:20:07,730Z", "level": "INFO", "component": "o.e.i.b.HierarchyCircuitBreakerService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "attempting to trigger G1GC due to high heap usage [259977792]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:20:08,633Z", "level": "INFO", "component": "o.e.i.b.HierarchyCircuitBreakerService", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "GC did bring memory usage down, before [259977792], after [259493512], allocations [1], duration [903]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }
{"type": "server", "timestamp": "2021-08-30T12:20:08,636Z", "level": "WARN", "component": "i.n.c.n.NioEventLoop", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "Unexpected exception in the selector loop.", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA" ,
"stacktrace": ["java.lang.OutOfMemoryError: Java heap space"] }
{"type": "server", "timestamp": "2021-08-30T12:20:12,123Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "0f862ee523a5", "message": "fatal error in thread [elasticsearch[0f862ee523a5][generic][T#12]], exiting", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA" ,
"stacktrace": ["java.lang.OutOfMemoryError: Java heap space"] }
fatal error in thread [elasticsearch[0f862ee523a5][generic][T#12]], exiting
java.lang.OutOfMemoryError: Java heap space

The entire log does not fit here, but the rest of it just repeats the same errors.
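If the full container log is needed later, it can be captured with something along these lines (the container ID or name is a placeholder; substitute your own):

    docker logs <container> > elasticsearch.log 2>&1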

# bin/elasticsearch --version
Version: 7.14.0, Build: default/docker/dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1/2021-07-29T20:49:32.864135063Z, JVM: 16.0.1

Are you trying to run aggregations that need a lot of memory? Anything with machine learning?

How much memory does this node have?

The server has 8 GB of RAM.
I cannot answer that question. How can I check it, and is it possible to disable it?

I was not asking about the server, but about how much heap you configured for the Elasticsearch process. Or did you mean that?
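For reference, with the official Docker image the heap for the Elasticsearch process is normally passed via ES_JAVA_OPTS. A minimal sketch; the container name, port mapping, single-node discovery setting, and the 1g value are just example assumptions about your setup:

    docker run -d --name elasticsearch \
      -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
      -e "discovery.type=single-node" \
      -p 9200:9200 \
      docker.elastic.co/elasticsearch/elasticsearch:7.14.0

Setting -Xms and -Xmx to the same value is the usual recommendation, so the heap does not resize at runtime.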

What queries are you running on this node when that circuit breaker exception happened? It seems that this aggregation response needs a lot of memory, and some component has triggered this, probably one of the systems querying Elasticsearch.
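If it helps, the following (a sketch; host and credentials are assumptions) lists the search tasks currently in flight and any configured machine learning jobs, which should show what was hitting the node:

    # search requests currently running on the cluster
    curl -u elastic -s 'http://localhost:9200/_tasks?actions=*search*&detailed=true&pretty'
    # anomaly detection jobs configured via the ML plugin
    curl -u elastic -s 'http://localhost:9200/_ml/anomaly_detectors?pretty'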

Thanks for the tip. I changed ES_JAVA_OPTS from "-Xmx256m -Xms256m" to 3g and the error disappeared, though now another one has appeared:

{"type": "server", "timestamp": "2021-08-30T14:45:31,959Z", "level": "INFO", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "docker-cluster", "node.name": "0622450159ec", "message": "Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]", "cluster.uuid": "eLdCbaZnQQassxF5rQzdZA", "node.id": "juZAo53AQM6d_9w4xdfRfA"  }

This error has nothing to do with changing the heap; some component cannot properly authenticate against Elasticsearch.
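You can check which credentials actually work with a request like this (a sketch; the password is a placeholder):

    curl -u elastic:<password> -s 'http://localhost:9200/_security/_authenticate?pretty'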

I just changed the default password. Everything works. Thank you very much for your help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.