Exiting java.lang.OutOfMemoryError: Java heap space

[2019-02-04T12:33:54,521][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] fatal error in thread [elasticsearch[8mTx-ra][search][T#7]], exiting
java.lang.OutOfMemoryError: Java heap space

Please help me solve this issue.
elasticsearch-5.6.14

Here is what I have tried (see the verification check after the list):

  1. In /etc/sysconfig/elasticsearch:
    ES_JAVA_OPTS="-Xms3g -Xmx3g"
    MAX_LOCKED_MEMORY=unlimited
  2. In /etc/security/limits.conf:
    elasticsearch soft memlock unlimited
    elasticsearch hard memlock unlimited
  3. In the systemd unit /usr/lib/systemd/system/elasticsearch.service, uncommented:
    LimitMEMLOCK=infinity
    and ran systemctl daemon-reload after changing the unit.
  4. In /etc/elasticsearch/elasticsearch.yml:
    bootstrap.memory_lock: true
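
To confirm these settings were actually picked up by the running node, the node info API can be queried; a quick check along these lines (the filter_path expressions are just one way of trimming the output):

# Heap the JVM actually started with (expect roughly 3gb for -Xms3g/-Xmx3g)
curl -s "localhost:9200/_nodes/jvm?human&pretty&filter_path=**.heap_max*"
# Whether memory locking took effect
curl -s "localhost:9200/_nodes?pretty&filter_path=**.mlockall"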

None of this made any difference.

What is the output of the cluster health and cluster stats APIs?

curl -X GET "localhost:9200/_cluster/health"
{"cluster_name":"graylog","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":24,"active_shards":24,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "graylog",
  "timestamp" : 1549278763118,
  "status" : "green",
  "indices" : {
    "count" : 6,
    "shards" : {
      "total" : 24,
      "primaries" : 24,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 4,
          "max" : 4,
          "avg" : 4.0
        },
        "primaries" : {
          "min" : 4,
          "max" : 4,
          "avg" : 4.0
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 105302466,
      "deleted" : 0
    },
    "store" : {
      "size" : "36.7gb",
      "size_in_bytes" : 39417555657,
      "throttle_time" : "0s",
      "throttle_time_in_millis" : 0
    },
    "fielddata" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "7.7mb",
      "memory_size_in_bytes" : 8142541,
      "total_count" : 5240,
      "hit_count" : 2596,
      "miss_count" : 2644,
      "cache_size" : 79,
      "cache_count" : 79,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 87,
      "memory" : "72.1mb",
      "memory_in_bytes" : 75602569,
      "terms_memory" : "53.8mb",
      "terms_memory_in_bytes" : 56444921,
      "stored_fields_memory" : "12.7mb",
      "stored_fields_memory_in_bytes" : 13319136,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "16.3kb",
      "norms_memory_in_bytes" : 16704,
      "points_memory" : "4.7mb",
      "points_memory_in_bytes" : 5003684,
      "doc_values_memory" : "798.9kb",
      "doc_values_memory_in_bytes" : 818124,
      "index_writer_memory" : "7.4mb",
      "index_writer_memory_in_bytes" : 7858796,
      "version_map_memory" : "12.2kb",
      "version_map_memory_in_bytes" : 12584,
      "fixed_bit_set" : "0b",
      "fixed_bit_set_memory_in_bytes" : 0,
      "max_unsafe_auto_id_timestamp" : -1,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "data" : 1,
      "coordinating_only" : 0,
      "master" : 1,
      "ingest" : 1
    },
    "versions" : [
      "5.6.14"
    ],
    "os" : {
      "available_processors" : 8,
      "allocated_processors" : 8,
      "names" : [
        {
          "name" : "Linux",
          "count" : 1
        }
      ],
      "mem" : {
        "total" : "15.5gb",
        "total_in_bytes" : 16646242304,
        "free" : "606.5mb",
        "free_in_bytes" : 636063744,
        "used" : "14.9gb",
        "used_in_bytes" : 16010178560,
        "free_percent" : 4,
        "used_percent" : 96
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 1
      },
      "open_file_descriptors" : {
        "min" : 320,
        "max" : 320,
        "avg" : 320
      }
    },
    "jvm" : {
      "max_uptime" : "1.6h",
      "max_uptime_in_millis" : 5862002,
      "versions" : [
        {
          "version" : "1.8.0_191",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "25.191-b12",
          "vm_vendor" : "Oracle Corporation",
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "1.3gb",
        "heap_used_in_bytes" : 1443017576,
        "heap_max" : "2.9gb",
        "heap_max_in_bytes" : 3151495168
      },
      "threads" : 62
    },
    "fs" : {
      "total" : "397gb",
      "total_in_bytes" : 426350342144,
      "free" : "343.1gb",
      "free_in_bytes" : 368414822400,
      "available" : "343.1gb",
      "available_in_bytes" : 368414822400,
      "spins" : "true"
    },
    "plugins" : ,
    "network_types" : {
      "transport_types" : {
        "netty4" : 1
      },
      "http_types" : {
        "netty4" : 1
      }
    }
  }
}

That looks fine. I wonder whether indexing and querying require more heap. Have you tried increasing it beyond 3GB?
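
For reference, on a 5.x RPM/DEB install the heap can also be raised in /etc/elasticsearch/jvm.options instead of ES_JAVA_OPTS; a sketch, with 7g only as an example value kept at or below roughly half of the machine's 15.5gb of RAM so the filesystem cache still has room:

# /etc/elasticsearch/jvm.options -- keep -Xms and -Xmx equal
-Xms7g
-Xmx7g

After editing, restart the node with sudo systemctl restart elasticsearch.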

Of course. I tried setting the heap size to 7GB, but it made no difference. I can't understand why the Elasticsearch service goes down.

Is there anything in the logs?

[2019-02-04T12:26:51,415][INFO ][o.e.n.Node               ] initialized
[2019-02-04T12:26:51,415][INFO ][o.e.n.Node               ] [8mTx-ra] starting ...
[2019-02-04T12:26:51,562][INFO ][o.e.t.TransportService   ] [8mTx-ra] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2019-02-04T12:26:54,607][INFO ][o.e.c.s.ClusterService   ] [8mTx-ra] new_master {8mTx-ra}{8mTx-raJQ0yYHnDuKXiFzg}{bZtINvTZSPOWlAaiNniWXg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2019-02-04T12:26:54,619][INFO ][o.e.h.n.Netty4HttpServerTransport] [8mTx-ra] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2019-02-04T12:26:54,620][INFO ][o.e.n.Node               ] [8mTx-ra] started
[2019-02-04T12:26:54,864][INFO ][o.e.g.GatewayService     ] [8mTx-ra] recovered [6] indices into cluster_state
[2019-02-04T12:26:55,919][INFO ][o.e.c.r.a.AllocationService] [8mTx-ra] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[graylog_0][2], [graylog_0][0]] ...]).
[2019-02-04T12:33:33,974][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][402] overhead, spent [760ms] collecting in the last [1.4s]
[2019-02-04T12:33:34,974][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][403] overhead, spent [718ms] collecting in the last [1s]
[2019-02-04T12:33:36,081][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][404] overhead, spent [818ms] collecting in the last [1.1s]
[2019-02-04T12:33:37,155][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][405] overhead, spent [721ms] collecting in the last [1s]
[2019-02-04T12:33:38,845][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][406] overhead, spent [1.6s] collecting in the last [1.6s]
[2019-02-04T12:33:40,028][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][407] overhead, spent [911ms] collecting in the last [1.1s]
[2019-02-04T12:33:42,552][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][408] overhead, spent [2.2s] collecting in the last [2.5s]
[2019-02-04T12:33:45,932][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][409] overhead, spent [3.3s] collecting in the last [3.3s]
[2019-02-04T12:33:49,945][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][410] overhead, spent [4s] collecting in the last [4s]
[2019-02-04T12:33:53,016][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][411] overhead, spent [3s] collecting in the last [3s]
[2019-02-04T12:33:54,520][WARN ][o.e.m.j.JvmGcMonitorService] [8mTx-ra] [gc][412] overhead, spent [1.4s] collecting in the last [1.5s]
[2019-02-04T12:33:54,521][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[8mTx-ra][search][T#7]], exiting
java.lang.OutOfMemoryError: Java heap space
[2019-02-04T12:33:54,521][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[8mTx-ra][search][T#5]], exiting
java.lang.OutOfMemoryError: Java heap space
[2019-02-04T12:35:02,962][INFO ][o.e.n.Node               ] [] initializing ...
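
Those gc overhead warnings climbing from under a second to 4s per collection right before the crash suggest the heap fills up within seconds once some search starts. One way to watch it while reproducing the problem is to poll node stats (a sketch; the one-second interval is arbitrary):

while true; do
  curl -s "localhost:9200/_nodes/stats/jvm?filter_path=**.heap_used_percent"
  echo
  sleep 1
done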

What does your configuration look like? Do you have any non-default settings that could be causing problems?

Configuration of what?
No, I do not think so.
Is it really so hard to keep Elasticsearch from falling over on a bad request? Why can't it refuse the request instead? Why does the whole service crash?

curl -X GET "localhost:9200/_nodes?filter_path=**.mlockall"
{"nodes":{"8mTx-raJQ0yYHnDuKXiFzg":{"process":{"mlockall":true}}}}

Yes, it is in fact really rather hard to do this. We'll try and help, but you will have to help too. I've never seen a node go out of memory within 7 minutes of startup before, so there's definitely something unusual with your configuration or usage pattern. Perhaps you have found a bug. It'd be good to find out.

When your node went down with an OutOfMemoryError it should (by default) have written a heap dump. It would be very useful if you could share it.
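
Assuming the default -XX:+HeapDumpOnOutOfMemoryError flag from jvm.options is still in place, the dump is a .hprof file written to the JVM's working directory (commonly /usr/share/elasticsearch for the packaged install, though that path is an assumption); a sketch for locating it:

# Look for heap dumps written on OOM; the default name is java_pid<pid>.hprof
sudo find / -xdev -name "*.hprof" 2>/dev/null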
