Is deleting the whole ELK stack and reinstalling on the same device a good practice?

Hi,

I am new to the ELK stack and recently installed it on my Mac to test out its features and learn it. I was able to successfully ship logs through Filebeat into Logstash, which pushed them to Elasticsearch, and I could view them in the Kibana UI just as desired. I also created a few index patterns and started viewing the Elasticsearch documents for each log in Kibana. Going further, I installed X-Pack so I could use the Watcher feature to send alerts when exceptions occur. Since then, Kibana has been unable to communicate with Elasticsearch, and the Kibana UI doesn't show anything but an error message like this:

Config: Error 503 Service Unavailable: Automatic index creation failed

I have tried several ways to troubleshoot it, but to no avail.

When I try entering:

curl -X GET "localhost:9200/_cat/indices?v"

This is the output:

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size

green open .watcher-history-7-2018.06.06 hcDPUoOHRfmYfN7H5DJiXw 1 0 5990 0 6.9mb 6.9mb
yellow open index_name XLKdcTuOTCOGDHlB2JwZCA 5 1 0 0 1.2kb 1.2kb
green open .monitoring-es-6-2018.06.06 yf9JRFRvR_yIw9I0D1pnQQ 1 0 55051 34 23.9mb 23.9mb
green open .monitoring-kibana-6-2018.06.06 -yDDCo-sTv-J5Ie0Uks6eg 1 0 4457 0 1.3mb 1.3mb
green open .monitoring-alerts-6 Jlo1ukQHRmKDTsXljoETFg 1 0 11 1 25.8kb 25.8kb
green open .monitoring-kibana-6-2018.06.07 mk_VkpE9RzuXgPqXxuNMSA 1 0 1345 0 432.2kb 432.2kb
green open .monitoring-kibana-6-2018.06.08 fNPwW33cQeeKPELgX5eRWg 1 0 2531 0 1.2mb 1.2mb
yellow open test PVlcvzk8ReWihzxifKomlA 5 1 0 0 1.2kb 1.2kb
green open .watcher-history-7-2018.06.08 pt8BfFoyR_ye5lraxGxocA 1 0 3376 0 6.2mb 6.2mb
green open .watches Wl2DXlWIQbmJyDTPgJBr2w 1 0 7 0 93kb 93kb
green open .triggered_watches WiCkzDdrQiaRG4PJgr0iZA 1 0 0 0 236.2kb 236.2kb
yellow open elktest Opc7j0yrRxiDKbWXR61Uwg 5 1 0 0 1.2kb 1.2kb
green open .monitoring-es-6-2018.06.07 inlyn_kmTFyFzSOuAcVd_A 1 0 23616 60 10.3mb 10.3mb
green open .monitoring-logstash-6-2018.06.06 E8Ywmvp-QmumPlQX9cSzkg 1 0 21114 0 2mb 2mb
green open .watcher-history-7-2018.06.07 V38qxWhiQA2VVeRCNiLr4A 1 0 2088 0 2.4mb 2.4mb
green open .monitoring-es-6-2018.06.08 bp_YDxGgSKOlSYxCEH04XA 1 0 49066 129 35mb 35mb

This is my elasticsearch.yml configuration:

cluster.name: ELK-TestCluster1
node.name: node-0
network.host: localhost
http.port: 9200

# ---------------------------------- X-Pack Settings ----------------------------

#xpack.monitoring.elasticsearch.url: http://localhost:9200
bootstrap.system_call_filter: false
xpack.security.enabled: false
xpack.monitoring.enabled: true
xpack.graph.enabled: true
xpack.watcher.enabled: true
xpack.security.http.ssl.enabled: false
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

This is my kibana.yml configuration:

server.port: 5601
elasticsearch.url: "http://localhost:9200"
elasticsearch.username: elastic
elasticsearch.password: changeme
logging.verbose: true

This is my logstash.conf file:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts    => ["localhost:9200"]
    index    => "test-%{yyyy.mm.dd}"
    user     => elastic
    password => changeme
  }
  stdout { codec => rubydebug }
}
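
(Side note: the index option above likely wants Logstash's sprintf date syntax, which needs a leading +; without it, %{yyyy.mm.dd} is read as a field reference rather than a date. If a daily index was the intent, the output would presumably look more like this:)

output {
  elasticsearch {
    hosts    => ["localhost:9200"]
    index    => "test-%{+YYYY.MM.dd}"
    user     => elastic
    password => changeme
  }
  stdout { codec => rubydebug }
}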

I'm confused as to what needs to be done here to unblock myself. Please suggest a solution, or let me know if I can just delete the whole stack and reinstall it to clear whatever is blocking it.

I think you will need to allow .kibana in there as well.
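
Roughly like this in elasticsearch.yml, keeping your existing patterns and just adding .kibana (then restart Elasticsearch so the setting is picked up):

action.auto_create_index: .security,.monitoring*,.kibana,.watches,.triggered_watches,.watcher-history*,.ml*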

Thanks, Mark. I edited my elasticsearch.yml file by adding .kibana to that list, but the issue still persists. When I run Filebeat, it now shows an error message too. See below:

2018-06-08T16:11:11.579-0700 WARN elasticsearch/client.go:502 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbebee12361089db8, ext:132081374375, loc:(*time.Location)(0x534e0c0)}, Meta:common.MapStr(nil), Fields:common.MapStr{"source":"/Users/XXXX/Downloads/boot-folder/LOG_FOLDER/servicelogs.log", "offset":14101, "message":"DEBUG : message", "prospector":common.MapStr{"type":"log"}, "beat":common.MapStr{"name":"XXXXXX", "hostname":"XXXXXX", "version":"6.2.4"}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc4204b04e0), Source:"/Users/XXXX/Downloads/boot-folder/LOG_FOLDER/servicelogs.log", Offset:14101, Timestamp:time.Time{wall:0xbebee104dd6f7e6d, ext:10016364708, loc:(*time.Location)(0x534e0c0)}, TTL:-1, Type:"log", FileStateOS:file.StateOS{Inode:0x1ac37d, Device:0x1000004}}}, Flags:0x1} (status=404): {"type":"index_not_found_exception","reason":"no such index and [action.auto_create_index] ([.security,.monitoring*,.kibana,.watches,.triggered_watches,.watcher-history*,.ml*]) doesn't match","index_uuid":"_na_","index":"filebeat-6.2.4-2018.06.08"}

That's why: filebeat-6.2.4-2018.06.08 isn't matched by that list either. It's the same reason as the original error.
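
If you want Filebeat (and the test-* index from your Logstash output) to be able to create indices automatically as well, the list could be extended along these lines; the exact patterns are up to you:

action.auto_create_index: .security,.monitoring*,.kibana,.watches,.triggered_watches,.watcher-history*,.ml*,filebeat-*,test-*

Setting action.auto_create_index: true instead removes the restriction entirely, at the cost of letting any index be created on demand.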

Mark,

I somehow got this error fixed now and my ELK stack is back on track. Regarding my initial question: do you suggest deleting the whole ELK stack and reinstalling it when debugging an issue becomes too difficult?

Nope, definitely not.
