ES hangs when deleting objects

Hi,

When I try to delete a dashboard or a visualization, ES crashes.
After several minutes it shows me the status page: heap size 1.09 GB, 988 MB used.

I searched the logs for today's entries, but it only shows me old logs, starting from yesterday.

How can I fix this? Is it related to the heap size?
I don't know if it's related, but I also get an error on my dashboards: Courier Fetch: x of X shards failed.

Thanks

How are you deleting them, via Kibana?

Also, check your ES logs, there should be something in them.

Hi warkolm,

Yes, via Kibana.
Looking at the logs I found this:

[2016-07-12 08:39:15,775][DEBUG][action.search.type       ] [xxx] [logstash-2016.04.14][0], node[AfgOio-1S5-wj942o9pYmQ], [P], v[22], s[STARTED], a[id=PBHVpMHESBKV3Y4xZ8PHYQ]: Failed to execute [org.elasticsearch.action.search.SearchRequest@3d805b40] lastShard [true]
RemoteTransportException[[xxx][localhost/127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@2f3cc2cf on EsThreadPoolExecutor[search, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@7d5a6e26[Running, pool size = 7, active threads = 7, queued tasks = 3000, completed tasks = 57201]]];
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@2f3cc2cf on EsThreadPoolExecutor[search, queue capacity = 3000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@7d5a6e26[Running, pool size = 7, active threads = 7, queued tasks = 3000, completed tasks = 57201]]]
.............

What I gather from this is that the search thread pool queue is full. Is that normal? The queue was at its limit, so I changed the queue size to 5000, but the error still appears. Should I raise it to an even higher number?
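
For reference, this is what I changed in /etc/elasticsearch/elasticsearch.yml (I believe the setting name on my 2.x install is threadpool.search.queue_size, please correct me if that's wrong for other versions), followed by a restart of the node:

threadpool.search.queue_size: 5000

and I'm watching the queue and the rejected count with:

curl -s 'localhost:9200/_cat/thread_pool?v'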

I think that is not related to my main problem, though; it's a separate issue.

When it hangs while deleting, I get this error:

[2016-07-12 17:16:03,830][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2016-07-12 17:16:03,830][ERROR][rest.action.support      ] failed to send failure response
java.lang.OutOfMemoryError: Java heap space
[2016-07-12 17:16:00,211][WARN ][netty.channel.socket.nio.AbstractNioSelector] Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Java heap space
[2016-07-12 17:14:38,855][ERROR][rest.action.support      ] failed to send failure response
java.lang.OutOfMemoryError: Java heap space
        at com.fasterxml.jackson.core.util.BufferRecycler.calloc(BufferRecycler.java:156)
        at com.fasterxml.jackson.core.util.BufferRecycler.allocCharBuffer(BufferRecycler.java:124)
        at com.fasterxml.jackson.core.util.BufferRecycler.allocCharBuffer(BufferRecycler.java:114)
        at com.fasterxml.jackson.core.io.IOContext.allocConcatBuffer(IOContext.java:186)
        at com.fasterxml.jackson.core.json.UTF8JsonGenerator.<init>(UTF8JsonGenerator.java:127)
        at com.fasterxml.jackson.core.JsonFactory._createUTF8Generator(JsonFactory.java:1284)
        at com.fasterxml.jackson.core.JsonFactory.createGenerator(JsonFactory.java:1016)
        at org.elasticsearch.common.xcontent.json.JsonXContent.createGenerator(JsonXContent.java:74)
        at org.elasticsearch.common.xcontent.json.JsonXContent.createGenerator(JsonXContent.java:80)
        at org.elasticsearch.common.xcontent.XContentBuilder.<init>(XContentBuilder.java:112)
        at org.elasticsearch.rest.RestChannel.newBuilder(RestChannel.java:69)
        at org.elasticsearch.rest.RestChannel.newErrorBuilder(RestChannel.java:52)
        at org.elasticsearch.rest.BytesRestResponse.convert(BytesRestResponse.java:123)
        at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)
        at org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:87)
        at org.elasticsearch.rest.action.support.RestActionListener.onFailure(RestActionListener.java:60)
        at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:51)
        at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction$1.doRun(TransportSearchQueryAndFetchAction.java:90)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

It looks like a heap memory issue.
What should I do?

Thanks

This all means your nodes are overloaded.

Hi warkolm,

I think this is a Java heap issue, isn't it?
So if I increase the heap size, will that fix it?

It might, but you should look at other things like adding more nodes.

I'm trying to change the heap size, but with no success.
I've created an environment variable ES_HEAPSIZE=4g.
When I restart Elasticsearch I see that the process still starts with -Xms256m -Xmx1g.

What is the right way to change the heap size (on Linux), and how can I check the current heap size?
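
In case it helps, this is how I'm trying to check what the process actually got (just reading the JVM flags and the node heap stats; I'm assuming these _cat columns exist on my version):

ps aux | grep elasticsearch     # look at the -Xms / -Xmx flags
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'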

Thanks

How did you install ES?

I installed it from the .deb package.

So use the /etc/default/elasticsearch file :slight_smile:
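
From memory the variable in that file is ES_HEAP_SIZE (not ES_HEAPSIZE), something like:

# /etc/default/elasticsearch
ES_HEAP_SIZE=4g

and then restart the service (e.g. sudo service elasticsearch restart).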

Thank you, that worked!