OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000058a6c0000, 10388504576, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (mmap) failed to map 10388504576 bytes for committing reserved memory.

An error report file with more information is saved as:

/tmp/jvm-22765/hs_error.log

memory info:
cat /proc/meminfo
MemTotal: 15952668 kB
MemFree: 482684 kB
MemAvailable: 4307768 kB
Buffers: 212132 kB
Cached: 3255512 kB
SwapCached: 0 kB
Active: 1059224 kB
Inactive: 2351084 kB
Active(anon): 66644 kB
Inactive(anon): 56 kB
Active(file): 992580 kB
Inactive(file): 2351028 kB
Unevictable: 11128860 kB
Mlocked: 11128860 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 32128 kB
Writeback: 96 kB
AnonPages: 11071548 kB
Mapped: 154188 kB

free -m
total used free shared buffers cached
Mem: 15578 15118 459 0 207 3189
-/+ buffers/cache: 11722 3856
Swap: 0 0 0

For me Kibana is not loading and I am getting the below error:

I have checked Logstash and Redis; both are up and running. I am only facing an issue with the Elasticsearch service, as posted above.

Pasting the Logstash log below:
{:timestamp=>"2022-10-27T05:09:38.659000+0000", :message=>"Got error to send bulk of actions: Connection refused (Connection refused)", :level=>:error}
{:timestamp=>"2022-10-27T05:09:38.659000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:35:in initialize'", "org/jruby/RubyProc.java:281:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:70:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:245:in call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:148:in code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in perform_request'", "org/jruby/RubyProc.java:281:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch/protocol.rb:105:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:548:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:547:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:572:in flush'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:571:in flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in buffer_flush'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:537:in receive'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/forwardable.rb:201:in receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/outputs/base.rb:88:in handle'", "(eval):284:in output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/pipeline.rb:244:in outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/pipeline.rb:166:in start_outputs'"], :level=>:warn}
{:timestamp=>"2022-10-27T05:09:39.093000+0000", :message=>["INFLIGHT_EVENTS_REPORT", "2022-10-27T05:09:39Z", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>}], :level=>:warn}

Please guide me to resolve the issue.

Just posting an error message is not very useful. You need to provide context and additional details.

Which version of Elasticsearch are you using?

What is your heap size set to?

How much RAM does the host have available?

Is there anything else running on the host?

Welcome to our community! :smiley:

Please don't just post unformatted error messages with no other information. It's impossible for us to help with what you have provided. We need to see logs, configs and context on what you are doing.

I have updated the description. Please have a look and let me know the resolution steps.

You did not answer my questions. Please take the time to answer them properly instead of just posting images of text and command output.

It looks like you have 15GB of RAM on the host and are running a number of other services there as well. This means that you need to reduce the Elasticsearch heap from 10GB. Elasticsearch does not only use heap but also requires off-heap memory. The recommendation is to set the heap size to no more than 50% of the RAM available to Elasticsearch. Have a look at how much RAM is left after you have started all other services and set the Elasticsearch heap to below 50% of that.
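As a rough sketch of the 50% rule above (an assumption-laden example, not an official tool: it reads MemAvailable from /proc/meminfo on a Linux host and halves it):

```shell
# Rough sketch: suggest a heap size of 50% of the memory currently
# available, based on MemAvailable from /proc/meminfo (Linux only).
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
heap_mb=$(( avail_kb / 2 / 1024 ))
echo "Suggested Elasticsearch heap: ${heap_mb}m"
```

With the meminfo you posted (MemAvailable: 4307768 kB), this works out to roughly 2GB, which suggests how little headroom the host has right now.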

Note that it is recommended to run Elasticsearch on a dedicated host without other services, at least not for a production environment.

Is there anything else running on the host?
No, nothing else is running.

Could you just let me know when this error occurs?
What are the possible resolution steps?
And how do I deal with JVM issues in Elasticsearch?
Can we delete or clear anything to get the memory back?

I have not just posted text or screenshots. I have posted details of the host and the error messages.

Which version of Elasticsearch are you using?

What kind of hardware is the node running on?

As it looks like you have 15GB RAM available, reduce the Elasticsearch heap size to 7GB and try restarting. Please provide the Elasticsearch logs if you run into any issues.
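On Elasticsearch 1.x the heap is usually set through the ES_HEAP_SIZE environment variable. A sketch of the change, assuming an RPM/service install on Amazon Linux with the config in /etc/sysconfig/elasticsearch (paths differ on Debian-based systems, where it is /etc/default/elasticsearch):

```shell
# Assumption: RPM/service install of Elasticsearch 1.x on Amazon Linux,
# with its environment file at /etc/sysconfig/elasticsearch.
# Set the heap via ES_HEAP_SIZE, then restart the service.
sudo sed -i 's/^#\?ES_HEAP_SIZE=.*/ES_HEAP_SIZE=7g/' /etc/sysconfig/elasticsearch
sudo service elasticsearch restart

# Once the node is back up, verify the configured heap:
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max
```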

Which version of Elasticsearch are you using?
1.6.0

curl -XGET 'http://localhost:9200'
{
  "status" : 200,
  "name" : "elmaster",
  "cluster_name" : "*",
  "version" : {
    "number" : "1.6.0",
    "build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
    "build_timestamp" : "2015-06-09T13:36:34Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

What kind of hardware is the node running on?
Platform:
Amazon linux

Platform details:
Linux/UNIX
I hope I have answered this question as expected; if not, please let me know where I went wrong in giving the details, as I am not a hardware expert.

Should I proceed to reduce the Elasticsearch heap size to 7GB?

Elasticsearch 1.6 is extremely old (released June 2015) and has been EOL for a long, long time. I have not worked with this version for many years, so I would recommend you upgrade ASAP. If reducing the heap size (or switching to an instance with more RAM) does not help, I am not sure I have any other suggestions.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.