Kibana service not running even after starting the service

I have installed Kibana on my Linux machine. When I start the service using "service kibana start", the console shows a message that Kibana started, but when I check the status, it shows that Kibana is not running.
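For reference, this is roughly what I ran (the exact unit name may differ per install; the log path below is the one from my setup):

service kibana start
service kibana status                        # reports that Kibana is not running
tail -n 50 /var/log/kibana/kibana.stderr     # the error log quoted below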

When I checked the Kibana error log (/var/log/kibana/kibana.stderr), I found the messages below:

events.js:160
throw er; // Unhandled 'error' event
^

Error: EISDIR: illegal operation on a directory, open '/local/mnt/workspace/ELK/KIBANA/LOG'
at Error (native)
events.js:160
throw er; // Unhandled 'error' event
^

Error: EISDIR: illegal operation on a directory, open '/local/mnt/workspace/ELK/KIBANA/LOG'
at Error (native)
events.js:160
throw er; // Unhandled 'error' event
^

Error: EISDIR: illegal operation on a directory, open '/local/mnt/workspace/ELK/KIBANA/LOG'
at Error (native)
events.js:160
throw er; // Unhandled 'error' event
^

Error: EISDIR: illegal operation on a directory, open '/local/mnt/workspace/ELK/KIBANA/LOG'
at Error (native)
Unhandled rejection Error: EACCES: permission denied, open '/local/mnt/workspace/ELK/KIBANA/DATA/uuid'
at Error (native)
Debug: internal, implementation, error
Error: ENOSPC: no space left on device, write
at Error (native)
at Object.fs.writeSync (fs.js:796:20)
at SyncWriteStream._write (fs.js:2244:6)
at doWrite (_stream_writable.js:333:12)
at writeOrBuffer (_stream_writable.js:319:5)
at SyncWriteStream.Writable.write (_stream_writable.js:245:11)
at KbnLoggerJsonFormat.ondata (_stream_readable.js:555:20)
at emitOne (events.js:96:13)
at KbnLoggerJsonFormat.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at KbnLoggerJsonFormat.Readable.push (_stream_readable.js:134:10)
at KbnLoggerJsonFormat.Transform.push (_stream_transform.js:128:32)
at KbnLoggerJsonFormat._transform (/usr/share/kibana/src/server/logging/log_format.js:74:10)
at KbnLoggerJsonFormat.Transform._read (_stream_transform.js:167:10)
at KbnLoggerJsonFormat.Transform._write (_stream_transform.js:155:12)
at doWrite (_stream_writable.js:333:12)
FATAL { Error: ENOSPC: no space left on device, write
at Error (native)
at Object.fs.writeSync (fs.js:796:20)
at SyncWriteStream._write (fs.js:2244:6)
at doWrite (_stream_writable.js:333:12)
at writeOrBuffer (_stream_writable.js:319:5)
at SyncWriteStream.Writable.write (_stream_writable.js:245:11)
at KbnLoggerJsonFormat.ondata (_stream_readable.js:555:20)
at emitOne (events.js:96:13)
at KbnLoggerJsonFormat.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at KbnLoggerJsonFormat.Readable.push (_stream_readable.js:134:10)
at KbnLoggerJsonFormat.Transform.push (_stream_transform.js:128:32)
at KbnLoggerJsonFormat._transform (/usr/share/kibana/src/server/logging/log_format.js:74:10)

The Kibana service was working perfectly on my machine. For some reason I restarted the service, and the errors above are now preventing Kibana from starting. Kindly give me some direction.
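For completeness, these are the checks I am planning for the EISDIR and EACCES errors (the paths are the ones from my log; the kibana.yml location assumes a standard package install):

# EISDIR: the logging destination looks like a directory, but Kibana expects a file
grep -E '^\s*logging\.dest' /etc/kibana/kibana.yml
ls -ld /local/mnt/workspace/ELK/KIBANA/LOG

# EACCES: the user running Kibana may not be allowed to write the uuid file under path.data
grep -E '^\s*path\.data' /etc/kibana/kibana.yml
ls -ld /local/mnt/workspace/ELK/KIBANA/DATA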

Looks like you are out of disk space.
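A quick way to confirm that is to check both free blocks and free inodes, since either can produce ENOSPC (the paths below are the ones from the errors above plus the log directory):

df -h /local/mnt/workspace /var
df -i /local/mnt/workspace /var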

@warkolm, thanks for replying to the query.
System memory gets exhausted soon after starting Logstash and Elasticsearch.

Please find the disk usage report below:

2K ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg/3/translog
192K ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg/3/index
8.0K ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg/3/_state
216K ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg/3
8.0K ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg/_state
1.2M ./lib/elasticsearch/nodes/0/indices/ssVkw4ugSECgU9CRc5hSxg
12K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg/0/translog
220K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg/0/index
8.0K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg/0/_state
244K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg/0
8.0K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg/_state
256K ./lib/elasticsearch/nodes/0/indices/XbbShegGT7WsZXtmEYphQg
4.7M ./lib/elasticsearch/nodes/0/indices
12K ./lib/elasticsearch/nodes/0/_state
4.7M ./lib/elasticsearch/nodes/0
4.7M ./lib/elasticsearch/nodes
2.3G ./lib/elasticsearch

This happened when I started Filebeat (running on a remote server) and Logstash, and Logstash kept polling data. After a short while, the system memory got exhausted and Elasticsearch stopped running.

Do we need to change any configuration parameters to overcome this issue?
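For context, the JVM heap sizes for Elasticsearch and Logstash are set in their respective jvm.options files (default package locations assumed), which is where I would start looking:

grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options   # Elasticsearch heap (-Xms / -Xmx)
grep -E '^-Xm[sx]' /etc/logstash/jvm.options        # Logstash heap (-Xms / -Xmx)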

Did you fix the disk space issue though?

@warkolm
No. Even when the disk space is cleared and the service is started again, it ends up with the same issue. Is the memory consumption due to the continuous polling of data? I would like to know the reason the memory is being exhausted.

Please share the Elasticsearch logs.

@warkolm

Sorry, I am not able to share the complete log; here are the logs from the last minute.

[2018-07-31T13:59:20,798][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.15][2]]: segment writing can't keep up
[2018-07-31T13:59:22,474][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][190] overhead, spent [1.6s] collecting in the last [1.6s]
[2018-07-31T13:59:23,585][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][191] overhead, spent [1s] collecting in the last [1.1s]
[2018-07-31T13:59:25,544][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][192] overhead, spent [1.9s] collecting in the last [1.9s]
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.28][0]]: segment writing can't keep up
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.06.13][4]]: segment writing can't keep up
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.01][4]]: segment writing can't keep up
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.19][0]]: segment writing can't keep up
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.18][3]]: segment writing can't keep up
[2018-07-31T13:59:32,058][INFO ][o.e.i.IndexingMemoryController] [YFDx2pw] now throttling indexing for shard [[triageperjiradata-2018.07.19][3]]: segment writing can't keep up
[2018-07-31T13:59:32,058][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][193] overhead, spent [6.4s] collecting in the last [6.5s]
[2018-07-31T13:59:37,619][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][194] overhead, spent [5.5s] collecting in the last [5.5s]
[2018-07-31T14:01:00,789][WARN ][o.e.m.j.JvmGcMonitorService] [YFDx2pw] [gc][195] overhead, spent [59.4s] collecting in the last [1m]
[2018-07-31T14:01:06,746][INFO ][o.e.c.m.MetaDataMappingService] [YFDx2pw] [triageperjiradata-2018.07.09/_C50oM1cQ0eDcr2Z2pa1Tw] update_mapping [doc]
[2018-07-31T14:01:06,744][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[YFDx2pw][bulk][T#5]], exiting
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.packed.PackedLongValues$Builder.<init>(PackedLongValues.java:185) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.util.packed.DeltaPackedLongValues$Builder.<init>(DeltaPackedLongValues.java:59) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.util.packed.PackedLongValues.deltaPackedBuilder(PackedLongValues.java:55) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.util.packed.PackedLongValues.deltaPackedBuilder(PackedLongValues.java:60) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.NormValuesWriter.<init>(NormValuesWriter.java:42) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain$PerField.setInvertState(DefaultIndexingChain.java:677) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain$PerField.<init>(DefaultIndexingChain.java:667) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:605) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:240) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1729) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1464) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:1071) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1013) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:879) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:738) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:707) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:673) ~[elasticsearch-6.2.4.jar:6.2.4]

How many shards do you have in your cluster?

@warkolm

Assuming a shard is a unit of the data we are polling, I have 63 shards in the cluster. Please correct me if my understanding is wrong.
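For what it's worth, a direct way to count them (assuming Elasticsearch on the default localhost:9200):

curl -s 'localhost:9200/_cat/shards' | wc -l    # one line per shard copy (primary or replica)
curl -s 'localhost:9200/_cat/indices?v'         # indices with their primary/replica shard counts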

Check out https://www.elastic.co/guide/en/elasticsearch/reference/6.3/cat-allocation.html; we will need the output from that.
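For example (default host and port assumed):

curl -s 'localhost:9200/_cat/allocation?v'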

@warkolm,

Thanks. Here is the output:

shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
235 31.3mb 69.5gb 816.2gb 885.8gb 7 10.201.11.62 10.201.11.62 YFDx2pw
386 UNASSIGNED
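For the unassigned shards, the allocation explain API reports why a given shard is not assigned (on a single-node cluster, replica shards will always show as unassigned); for example, with defaults assumed:

curl -s 'localhost:9200/_cluster/allocation/explain?pretty'   # explains the first unassigned shard it finds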
