I restarted the Elasticsearch service several times; each time it initialized the Kibana index for a while and then terminated unexpectedly. Is there anything I can do to fix this?
Please show your logs, as well as your OS, JVM and Elasticsearch version.
I am using Windows Server 2012 R2, JVM 8, and Elasticsearch 6. What is the path of the logs I should collect?
The log is a bit large. Can I upload something other than an image file?
I found that some java_pidxxxx.hprof files are being generated, and they quickly fill up the disk. Is something wrong?
Those are JVM heap dumps that get written to disk when the process crashes (typically on an OutOfMemoryError).
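You can safely delete the old java_pidxxxx.hprof files to free space. If you want to keep future dumps off that drive, the behaviour is controlled by the JVM options; a minimal sketch of the relevant lines in config/jvm.options, where D:\heap-dumps is just an example directory you would create yourself:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=D:\heap-dumps

As far as I know the first flag is enabled in the default jvm.options shipped with Elasticsearch 6, so either point HeapDumpPath at a drive with more room or remove the flag if you do not need the dumps for debugging.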
Just share the last part of the log, or use gist/pastebin/etc.
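On Windows you can pull just the tail of the log with PowerShell; a rough example, assuming a default install where the logs sit under the Elasticsearch home and the cluster name is the default "elasticsearch" (adjust the path to match your actual path.logs):

Get-Content "C:\elasticsearch-6.0.0\logs\elasticsearch.log" -Tail 200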
[2018-01-04T03:07:54,846][WARN ][o.e.g.MetaStateService ] [OANlHwb] [[winlogbeat-6.0.0-2017.08.04/81UkYv3TSG2GxAAewfY5tQ]]: failed to write index state
java.io.IOException: There is not enough space on the disk
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:75) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_151]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_151]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_151]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_151]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_151]
at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:80) ~[lucene-core-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]
at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:548) ~[lucene-core-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]
at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:393) ~[lucene-core-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:140) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.MetaStateService.writeIndex(MetaStateService.java:125) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.GatewayMetaState.applyClusterState(GatewayMetaState.java:180) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.Gateway.applyClusterState(Gateway.java:181) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:495) ~[elasticsearch-6.0.0.jar:6.0.0]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_151]
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:492) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:479) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:429) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:158) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-6.0.0.jar:6.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Suppressed: java.io.IOException: There is not enough space on the disk
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:75) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_151]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_151]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_151]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_151]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_151]
at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:68) ~[lucene-core-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:141) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.MetaStateService.writeIndex(MetaStateService.java:125) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.GatewayMetaState.applyClusterState(GatewayMetaState.java:180) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.Gateway.applyClusterState(Gateway.java:181) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:495) ~[elasticsearch-6.0.0.jar:6.0.0]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_151]
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:492) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:479) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:429) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:158) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-6.0.0.jar:6.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Suppressed: java.io.IOException: There is not enough space on the disk
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:75) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_151]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_151]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_151]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_151]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_151]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_151]
at java.io.FilterOutputStream.close(FilterOutputStream.java:158) ~[?:1.8.0_151]
at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:70) ~[lucene-core-7.0.1.jar:7.0.1 8d6c3889aa543954424d8ac1dbb3f03bf207140b - sarowe - 2017-10-02 14:36:35]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:141) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.MetaStateService.writeIndex(MetaStateService.java:125) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.GatewayMetaState.applyClusterState(GatewayMetaState.java:180) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.gateway.Gateway.applyClusterState(Gateway.java:181) ~[elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:495) ~[elasticsearch-6.0.0.jar:6.0.0]
at java.lang.Iterable.forEach(Iterable.java:75) [?:1.8.0_151]
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:492) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:479) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:429) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:158) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) [elasticsearch-6.0.0.jar:6.0.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) [elasticsearch-6.0.0.jar:6.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2018-01-04T03:08:06,317][WARN ][o.e.c.a.s.ShardStateAction] [OANlHwb] [winlogbeat-6.0.0-2017.07.12][1] received shard failed for shard id [[winlogbeat-6.0.0-2017.07.12][1]], allocation id [IMs2AC_7RLe6TIHA0vIfrw], primary term [0], message [master {OANlHwb}{OANlHwb4Q0WNQb6XQx1x0w}{LitHRKi2TveqK6m1k-_N4w}{10.99.210.12}{10.99.210.12:9300} has not removed previously failed shard. resending shard failure]
You may want to check this: the log shows "java.io.IOException: There is not enough space on the disk", so the drive holding your Elasticsearch data path is full and the node cannot write index state.
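Once the node is reachable again, you can confirm how full the data disk is with the cat allocation API; for example, assuming the default HTTP port 9200 on the local machine:

Invoke-RestMethod "http://localhost:9200/_cat/allocation?v"

The disk.used, disk.avail and disk.percent columns show how much space each node has left.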
If I want to move the data and logs directories, how do I configure that?
https://www.elastic.co/guide/en/elasticsearch/reference/6.1/modules-node.html#_node_data_path_settings should help.
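Concretely, you set path.data and path.logs in config\elasticsearch.yml and restart the service; a sketch with example Windows paths (the D: locations are just placeholders for whatever drive you want to use):

path.data: D:\elasticsearch\data
path.logs: D:\elasticsearch\logs

Changing the setting does not move anything by itself, so stop the service, copy the existing data directory to the new location, update the setting, and then start it again.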