I've set up an ELK stack to monitor our firewall logs. It works fine: we have two dashboards rotating on a screen and are very happy with it. Unfortunately, it has now crashed for the second time with the same error. That wouldn't be so dramatic by itself, since everything works again after restarting the services - but the critical part is that the saved dashboards disappear as well.
Since I can't figure out where the dashboards are saved, I can't even back them up.
Is it possible to back up just the dashboards - and where are they stored?
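For context, Kibana keeps saved dashboards as ordinary documents in an Elasticsearch index ("kibana-int" for Kibana 3, ".kibana" for Kibana 4), so they can be dumped like any other index. A minimal sketch, assuming the Kibana 3 default index name and Elasticsearch listening on localhost:9200 - both are assumptions about this setup:

```shell
# Kibana stores saved objects (dashboards, searches, visualizations) in a
# regular Elasticsearch index: kibana-int (Kibana 3) or .kibana (Kibana 4).
# Dump that index to a JSON file; adjust host and index name to your setup.
curl -s 'http://localhost:9200/kibana-int/_search?size=1000&pretty' \
  > kibana-dashboards-backup.json
```

Restoring is then a matter of re-indexing those documents (or using the snapshot/restore API for a proper backup of the whole index).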
Regarding the error:
[2015-05-02 02:00:09,800][INFO ][cluster.metadata ] [abc] [logstash-2015.05.02] creating index, cause [auto(bulk api)], shards [5]/[1], mappings [default]
[2015-05-02 02:00:29,890][INFO ][cluster.metadata ] [abc] [logstash-2015.05.02] update_mapping [syslog] (dynamic)
[2015-05-02 02:00:31,581][INFO ][cluster.metadata ] [abc] [logstash-2015.05.02] update_mapping [syslog] (dynamic)
[2015-05-02 03:01:03,038][WARN ][index.engine.internal ] [abc] [logstash-2015.05.01][4] failed engine [out of memory]
[2015-05-02 03:01:04,085][DEBUG][action.search.type ] [abc] [2183065] Failed to execute fetch phase
org.elasticsearch.ElasticsearchException: Java heap space
at org.elasticsearch.ExceptionsHelper.convertToRuntime(ExceptionsHelper.java:40)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:467)
at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:410)
at org.elasticsearch.search.action.SearchServiceTransportAction$17.call(SearchServiceTransportAction.java:407)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:133)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:347)
at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:288)
at org.apache.lucene.index.BaseCompositeReader.document(BaseCompositeReader.java:110)
at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:196)
at org.elasticsearch.search.fetch.FetchPhase.loadStoredFields(FetchPhase.java:228)
at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:156)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:455)
... 6 more
[2015-05-02 03:01:35,204][WARN ][index.engine.internal ] [abc] [logstash-2015.05.02][2] failed engine [out of memory]
[2015-05-02 03:01:07,133][WARN ][index.engine.internal ] [abc] [logstash-2015.05.01][4] failed to flush after setting shard to inactive
org.elasticsearch.index.engine.FlushFailedEngineException: [logstash-2015.05.01][4] Flush failed
at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:781)
at org.elasticsearch.index.engine.internal.InternalEngine.updateIndexingBufferSize(InternalEngine.java:233)
at org.elasticsearch.indices.memory.IndexingMemoryController$ShardsIndicesStatusChecker.run(IndexingMemoryController.java:201)
at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:454)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
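The repeated `OutOfMemoryError: Java heap space` entries suggest the Elasticsearch JVM heap is too small for the query/index load. On Elasticsearch 1.x a common mitigation is raising `ES_HEAP_SIZE`; a hedged config sketch (the exact file and a sensible value depend on the installation and available RAM):

```shell
# In /etc/default/elasticsearch (Debian/Ubuntu) or
# /etc/sysconfig/elasticsearch (RHEL/CentOS):
# set the JVM heap; the usual guidance is ~50% of machine RAM,
# and not above ~31 GB so compressed object pointers stay enabled.
ES_HEAP_SIZE=4g
```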