Kibana crashes Elasticsearch

Hi all,
When Kibana starts, I see the error below in the Elasticsearch log file. Could anyone explain why? The node runs with a 4 GB heap.
Thanks in advance.
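The heap is set before startup, roughly like this (a sketch; I'm assuming the stock 1.x startup scripts and the ES_HEAP_SIZE variable they read):

export ES_HEAP_SIZE=4g    # give the JVM a 4 GB heap
bin/elasticsearch -d      # start the node as a daemon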

[2015-09-23 21:41:02,880][INFO ][gateway ] [Luke Cage] recovered [3504] indices into cluster_state
[2015-09-23 21:46:47,101][DEBUG][action.admin.indices.mapping.get] [Luke Cage] failed to execute [org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsIndexRequest@69383c05]
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:458)
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:468)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:60)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:55)
at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:45)
at org.elasticsearch.common.xcontent.XContentBuilder.builder(XContentBuilder.java:77)
at org.elasticsearch.common.xcontent.json.JsonXContent.contentBuilder(JsonXContent.java:40)
at org.elasticsearch.common.xcontent.XContentFactory.contentBuilder(XContentFactory.java:122)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.addFieldMapper(TransportGetFieldMappingsIndexAction.java:237)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.findFieldMappingsByType(TransportGetFieldMappingsIndexAction.java:191)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.shardOperation(TransportGetFieldMappingsIndexAction.java:119)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.shardOperation(TransportGetFieldMappingsIndexAction.java:60)
at org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction$AsyncSingleAction$3.run(TransportSingleCustomOperationAction.java:248)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2015-09-23 21:47:20,796][DEBUG][action.admin.indices.mapping.get] [Luke Cage] failed to execute [org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsIndexRequest@424b5f36]
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.jackson.core.util.BufferRecycler.balloc(BufferRecycler.java:155)
at org.elasticsearch.common.jackson.core.util.BufferRecycler.allocByteBuffer(BufferRecycler.java:96)
at org.elasticsearch.common.jackson.core.util.BufferRecycler.allocByteBuffer(BufferRecycler.java:86)
at org.elasticsearch.common.jackson.core.io.IOContext.allocWriteEncodingBuffer(IOContext.java:152)
at org.elasticsearch.common.jackson.core.json.UTF8JsonGenerator.<init>(UTF8JsonGenerator.java:119)
at org.elasticsearch.common.jackson.core.JsonFactory._createUTF8Generator(JsonFactory.java:1284)
at org.elasticsearch.common.jackson.core.JsonFactory.createGenerator(JsonFactory.java:1016)
at org.elasticsearch.common.xcontent.json.JsonXContent.createGenerator(JsonXContent.java:74)
at org.elasticsearch.common.xcontent.json.JsonXContent.createGenerator(JsonXContent.java:80)
at org.elasticsearch.common.xcontent.XContentBuilder.<init>(XContentBuilder.java:109)
at org.elasticsearch.common.xcontent.XContentBuilder.<init>(XContentBuilder.java:99)
at org.elasticsearch.common.xcontent.XContentBuilder.builder(XContentBuilder.java:77)
at org.elasticsearch.common.xcontent.json.JsonXContent.contentBuilder(JsonXContent.java:40)
at org.elasticsearch.common.xcontent.XContentFactory.contentBuilder(XContentFactory.java:122)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.addFieldMapper(TransportGetFieldMappingsIndexAction.java:237)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.findFieldMappingsByType(TransportGetFieldMappingsIndexAction.java:191)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.shardOperation(TransportGetFieldMappingsIndexAction.java:119)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsIndexAction.shardOperation(TransportGetFieldMappingsIndexAction.java:60)
at org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction$AsyncSingleAction$3.run(TransportSingleCustomOperationAction.java:248)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

How much data do you have in the cluster? How many nodes? How many indexes and shards?

It's a dev machine, so I have:

Nodes: 1
Total shards: 3,504
Successful shards: 3,504
Indices: 3,504
Documents: 139,113
Size: 395.3 MB

4 GB heap

It's the fixed per-shard overhead of those 3,504 indices and their shards that's killing your heap. Do you really need that many indices? How is the data structured? Is it time-series data?
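You can sanity-check the overhead yourself with the cat and stats APIs (assuming a 1.x node listening on localhost:9200):

curl 'localhost:9200/_cat/shards?v' | wc -l      # roughly one line per shard, plus a header
curl 'localhost:9200/_nodes/stats/jvm?pretty'    # compare heap_used to heap_max per node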

No, it's not time-series data. Each index holds the log generated by a specific malware sample, so I wanted to keep them separated ... Is there any other way to organize them?
What's the best practice for configuring ES/Kibana for my needs?

You could segregate your logs via the type or any other document field. Is there any particular reason why you feel you need to segregate via the index? One such reason would be if you need different mappings for fields of the same name (e.g. the log for malware A requires a field B of type long, but malware C requires field B to be a string), but that sounds far-fetched.
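A minimal sketch of the single-index approach (the index name malware-logs and the field malware_id are made-up placeholders, not anything you have today):

curl -XPOST 'localhost:9200/malware-logs/log' -d '{
  "malware_id": "sample-a",
  "@timestamp": "2015-09-23T21:41:02Z",
  "message": "outbound connection attempt"
}'

Every sample's log lines go into the same index, and queries and deletions then filter on malware_id.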

Actually no; I just thought it would be easier to delete a specific index.
I don't know if I can delete just a specific type.
Moreover, with Kibana, can I distinguish the type when I make a query?

One last doubt: if I put all documents in the same index, how can I be sure the error doesn't happen again?
Thanks a lot.

Actually no; I just thought it would be easier to delete a specific index.
I don't know if I can delete just a specific type.

That's a fair point; deleting an index is very cheap, but it's entirely possible to delete all documents of a type using a delete-by-query operation (which moves into a plugin in ES 2.0). In this case I don't think you have much choice, as 3,504 shards on a single node with a 4 GB heap just won't work.
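On 1.x, where delete-by-query is still in core, it could look like this (reusing the hypothetical malware-logs index and malware_id field from above):

curl -XDELETE 'localhost:9200/malware-logs/log/_query' -d '{
  "query": { "term": { "malware_id": "sample-a" } }
}'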

Moreover, with Kibana, can I distinguish the type when I make a query?

Sure, just add _type:name-of-type to the query; the type is searchable like any other field.
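In the Kibana query bar that might look like this (the type name is hypothetical):

_type:malware-a AND message:connect

Or, if you segregate by a field instead of the type, something like malware_id:sample-a.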

One last doubt: if I put all documents in the same index, how can I be sure the error doesn't happen again?

The cause of your issues was the large number of shards and the fixed heap overhead you pay for each one. With a single index that overhead is basically zero; your main worry becomes the number of documents and their size, and a 4 GB heap should scale to millions of kilobyte-sized documents.
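For a single-node dev machine you could also keep the shard count minimal when creating the consolidated index (again, malware-logs is a placeholder name):

curl -XPUT 'localhost:9200/malware-logs' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'

One primary shard and no replicas is plenty for ~400 MB of data, and it keeps the fixed per-shard overhead to a single shard.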
