Indexes failing to auto-create from Filebeat

I've had an issue with indexes no longer being created, on more than one occasion. Previously I thought maybe it was because I upgraded from 4.x to 5.x and installed X-Pack (monitoring). Last night/this morning I stood up a brand new LXC container (Proxmox) with decent resources: 16 GB of RAM and 24 CPUs. After installing the latest and greatest versions of everything, I started shipping all my logs over via Filebeat, as I was before.

The reason I started from scratch was that indexes were no longer being created, and I thought it was just a fluke of an upgraded system. However, on this brand new install, completely fresh from scratch, the same thing happened again today. It indexed the last several weeks of old data and created two new daily indexes since the install, and now I get these errors filling up the logs.

I would love to get some help with this, as there is probably something I'm missing or have not configured right. I have tried my best to do my own research and find a similar issue and fix, but have yet to come across one. I'm looking for any help that somebody might be able to offer in fixing this situation with my ELK stack. Thanks in advance for any and all help; here is just a "small" sample of the errors that the logs are spewing out at the moment.

[2017-02-12T01:04:58,392][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"filebeat-2017.02.12", :_type=>"cowrie", :_routing=>nil}, 2017-02-12T06:04:45.391Z honeypot-2 login attempt [admin/smcadmin] failed], :response=>{"index"=>{"_index"=>"filebeat-2017.02.12", "_type"=>"cowrie", "_id"=>nil, "status"=>404, "error"=>

root@ELK-5:~# curl -XGET http://localhost:9200/_cat/indices?v | sort
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9398  100  9398    0     0  78853      0 --:--:-- --:--:-- --:--:-- 78974
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana                         nF3v8b2ZQOO57uCcebVcvw   1   1        103            1    127.6kb        127.6kb
yellow open   .monitoring-data-2              RyOE_GlwTJqfFJZ9s46wUg   1   1          3            0     15.9kb         15.9kb
yellow open   .monitoring-es-2-2017.02.11     IHjkpWYSQZOzJ-bVQqDeOQ   1   1     473484         2934    288.4mb        288.4mb
yellow open   .monitoring-es-2-2017.02.12     aiH2hcFDSba3yU2JL0sOtQ   1   1     137535         6583     95.3mb         95.3mb
yellow open   .monitoring-kibana-2-2017.02.11 jxwB4jFtTsuGWcGjH2vvgg   1   1       6025            0      1.4mb          1.4mb
yellow open   .monitoring-kibana-2-2017.02.12 -O8fJmQiQCqAdLzsHoLeow   1   1       1954            0    984.5kb        984.5kb
yellow open   filebeat                        x-PACOCyQaONUYg0gTNUwQ   5   1          0            0       795b           795b
yellow open   filebeat-2017.01.14             y0KYXJcVS06Xt7AgQaqI7Q   5   1     281786            0    152.6mb        152.6mb
yellow open   filebeat-2017.01.15             pq4tmr3PQP2yrATOnVeIiA   5   1     162663            0     73.6mb         73.6mb
yellow open   filebeat-2017.01.16             jhKjMzPOTxSXZU81sM93nQ   5   1     237110            0    104.9mb        104.9mb
yellow open   filebeat-2017.01.17             rtMQ4mqQS2GPNCwsj1QuMg   5   1     177110            0    156.6mb        156.6mb
yellow open   filebeat-2017.01.18             jEa_fL3HSWWsHyaJO4Rfog   5   1     302615            0    150.7mb        150.7mb
yellow open   filebeat-2017.01.19             DaB3oxUvSxu8c8EP6WdIGQ   5   1     247343            0    119.9mb        119.9mb
yellow open   filebeat-2017.01.20             ZgETOQOySMirfu2pImrSuQ   5   1     122545            0       55mb           55mb
yellow open   filebeat-2017.01.21             1vRBHM2jSRuUEeV_pAEh_Q   5   1     273642            0    146.2mb        146.2mb
yellow open   filebeat-2017.01.22             XsRJuiR3SUqW5Xfp0W3bxQ   5   1     697553            0    295.2mb        295.2mb
yellow open   filebeat-2017.01.23             zeBI2Jh0ReCfecWkKi_dQA   5   1     810327            0    300.4mb        300.4mb
yellow open   filebeat-2017.01.24             3YSiBTMAStqxLTNG6p90nA   5   1     880945            0      354mb          354mb
yellow open   filebeat-2017.01.25             ffoiczpDQCSd26jAv2cSng   5   1    1030999            0    441.7mb        441.7mb
yellow open   filebeat-2017.01.26             BCPdBZZ7QwywLoygXigpUA   5   1    1738132            0    672.3mb        672.3mb
yellow open   filebeat-2017.01.27             UJSKGZrwQKW0PWVKmhB52Q   5   1    1928270            0    743.9mb        743.9mb
yellow open   filebeat-2017.01.28             skt5_EusQHSpW7xmi87bCg   5   1    1998406            0    769.1mb        769.1mb
yellow open   filebeat-2017.01.29             AldYiR0NQRyRKtW2g8qS9A   5   1    2455146            0    923.3mb        923.3mb
yellow open   filebeat-2017.01.30             M8mzYIGKQGS9Vli-gyfVeQ   5   1    1837700            0    697.8mb        697.8mb
yellow open   filebeat-2017.01.31             22A5ssGURQSqB3l1TA7_xA   5   1     375598            0    170.7mb        170.7mb
yellow open   filebeat-2017.02.01             NAcs27oQT9aHplJLesCOIg   5   1     745977            0    316.5mb        316.5mb
yellow open   filebeat-2017.02.02             e7YNKT2rQiWNBEq_z-LiqQ   5   1    1130230            0    449.3mb        449.3mb
yellow open   filebeat-2017.02.03             0tpdRnAMTeuj9wKRwPw5ew   5   1     680916            0    285.1mb        285.1mb
yellow open   filebeat-2017.02.04             TFQm_Ln2TsqZEFYnCjKWUw   5   1    2387610            0    875.3mb        875.3mb
yellow open   filebeat-2017.02.05             lePHaFrOTfCHLq80K96Zww   5   1    2591894            0    969.5mb        969.5mb
yellow open   filebeat-2017.02.06             5KZaCyoeQcKYbzZvI6Vy6A   5   1    1933604            0    702.7mb        702.7mb
yellow open   filebeat-2017.02.07             0dXai26_TradqPH18dd2MQ   5   1    1395649            0    537.4mb        537.4mb
yellow open   filebeat-2017.02.08             T_fHhtooQ1KDjXdRv5VbsA   5   1    3977455            0      1.3gb          1.3gb
yellow open   filebeat-2017.02.09             2V09mj84Su-an5L3jGEVtA   5   1    3028466            0        1gb            1gb
yellow open   filebeat-2017.02.10             NWHLW1ThR6e8dS_4TezoOw   5   1    4132772            0      1.6gb          1.6gb
yellow open   filebeat-2017.02.11             dnmR7cHDRt2AWeNgJmvPOw   5   1    9556289            0      3.7gb          3.7gb

You seem to have a lot of very small shards in your cluster, which is very inefficient. I would recommend reducing the number of primary shards per index to 1.

@Christian_Dahlqvist
How do you reduce them? I've tried reading through the documentation on it, but I can't seem to understand it well enough to feel comfortable performing any shard reallocation. Any help would be greatly appreciated, both for fixing this for the current data and for all the new incoming data once I can get indexes being created again.

I would still love to figure out why daily index creation is failing on my server :frowning:

Have you looked in the logs for clues around why index creation is failing? How many indices/shards do you have in the cluster?

With respect to reducing the number of shards, there are a few ways. To reduce shards going forward, you should create or update an index template and set the number of primary shards to 1. For existing indices you can either reindex or use the relatively new shrink index API.
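A rough sketch of both approaches (the template name, index names, and the -shrunk suffix below are just examples, so adjust to taste):

curl -XPUT 'http://localhost:9200/_template/filebeat-single-shard' -H 'Content-Type: application/json' -d '{
  "template": "filebeat-*",
  "settings": { "index.number_of_shards": 1 }
}'
# Any filebeat-* index created after this gets a single primary shard.

# Shrinking an existing index down to 1 primary shard: mark it read-only first
# (on a multi-node cluster you would also need to relocate all of its shards
# onto a single node before shrinking), then shrink into a new index.
curl -XPUT 'http://localhost:9200/filebeat-2017.01.14/_settings' -H 'Content-Type: application/json' -d '{
  "index.blocks.write": true
}'
curl -XPOST 'http://localhost:9200/filebeat-2017.01.14/_shrink/filebeat-2017.01.14-shrunk' -H 'Content-Type: application/json' -d '{
  "settings": { "index.number_of_shards": 1 }
}'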

I've attempted looking in the logs, but nothing really jumps out at me (which might be due to inexperience). As far as indices/shards on this machine, I believe the count is relatively small and shouldn't be the reason it just stops creating the daily indexes that it created previously.
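For anyone following along, I believe these are the commands to check the totals (just a sketch, output omitted):

curl -XGET 'http://localhost:9200/_cluster/health?pretty'
# "active_primary_shards" and "active_shards" in the response give the cluster-wide counts
curl -XGET 'http://localhost:9200/_cat/shards?v' | wc -l
# rough line count of individual shards (minus the header line)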

Here is a copy of the Logstash log file after a fresh restart of the service, followed by starting a client Filebeat service to transfer the logs.

[2017-02-12T11:47:42,457][WARN ][logstash.runner          ] SIGTERM received. Shutting down the agent.
[2017-02-12T11:47:42,473][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}
[2017-02-12T11:47:47,653][WARN ][logstash.shutdownwatcher ] {}
[2017-02-12T11:47:47,655][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2017-02-12T11:48:02,708][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-02-12T11:48:02,759][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5045"}
[2017-02-12T11:48:03,019][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-02-12T11:48:03,020][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-02-12T11:48:03,112][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x25e2792a URL:http://localhost:9200/>}
[2017-02-12T11:48:03,114][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x1288c92c URL://localhost:9200>]}
[2017-02-12T11:48:03,124][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-02-12T11:48:03,124][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-02-12T11:48:03,149][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x2bf0627f URL:http://localhost:9200/>}
[2017-02-12T11:48:03,151][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x2cb81de3 URL://localhost:9200>]}
[2017-02-12T11:48:03,161][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-02-12T11:48:03,162][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-02-12T11:48:03,188][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x5286e0fe URL:http://localhost:9200/>}
[2017-02-12T11:48:03,190][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x9e21c49 URL://localhost:9200>]}
[2017-02-12T11:48:03,192][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"}
[2017-02-12T11:48:03,263][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>3000}
[2017-02-12T11:48:03,281][INFO ][logstash.pipeline        ] Pipeline main started
[2017-02-12T11:48:03,316][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-02-12T11:48:08,222][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2017-02-12T11:48:08,222][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2017-02-12T11:48:08,223][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-02-12T11:48:08,223][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-02-12T11:48:08,248][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x43c28e1b URL:http://127.0.0.1:9200/>}
[2017-02-12T11:48:08,254][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x62006a32 URL:http://127.0.0.1:9200/>}
[2017-02-12T11:49:18,041][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"filebeat-2017.02.12", :_type=>"cowrie", :_routing=>nil}, 2017-02-12T15:38:05.918Z honeypot-2 login attempt [root/58.com] failed], :response=>{"index"=>{"_index"=>"filebeat-2017.02.12", "_type"=>"cowrie", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"filebeat-2017.02.12", "index_uuid"=>"_na_", "index"=>"filebeat-2017.02.12"}}}}

Here is a copy of the Elasticsearch log after a fresh restart of the service.

[2017-02-12T10:50:35,815][INFO ][o.e.n.Node               ] [06QyjUd] stopping ...
[2017-02-12T10:50:36,124][INFO ][o.e.x.w.w.WatchStore     ] [06QyjUd] stopped watch store
[2017-02-12T10:50:36,525][INFO ][o.e.n.Node               ] [06QyjUd] stopped
[2017-02-12T10:50:36,525][INFO ][o.e.n.Node               ] [06QyjUd] closing ...
[2017-02-12T10:50:36,608][INFO ][o.e.n.Node               ] [06QyjUd] closed
[2017-02-12T10:50:42,520][INFO ][o.e.n.Node               ] [] initializing ...
[2017-02-12T10:50:42,627][INFO ][o.e.e.NodeEnvironment    ] [06QyjUd] using [1] data paths, mounts [[/ (/dev/loop4)]], net usable_space [49.1gb], net total_space [98.3gb], spins? [possibly], types [ext4]
[2017-02-12T10:50:42,628][INFO ][o.e.e.NodeEnvironment    ] [06QyjUd] heap size [7.8gb], compressed ordinary object pointers [true]
[2017-02-12T10:50:43,002][INFO ][o.e.n.Node               ] node name [06QyjUd] derived from node ID [06QyjUdsSrqXMpWoZnF8Dg]; set [node.name] to override
[2017-02-12T10:50:43,005][INFO ][o.e.n.Node               ] version[5.2.0], pid[22120], build[24e05b9/2017-01-24T19:52:35.800Z], OS[Linux/4.4.35-2-pve/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-02-12T10:50:45,172][WARN ][o.e.x.XPackPlugin        ] the [action.auto_create_index] setting is configured to be restrictive [.security,.monitoring*,.watches,.triggered_watches,.watcher-history*].  for the next 6 months daily history indices are allowed to be created, but please make sure that any future history indices after 6 months with the pattern [.watcher-history-YYYY.MM.dd] are allowed to be created
[2017-02-12T10:50:45,180][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [aggs-matrix-stats]
[2017-02-12T10:50:45,180][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [ingest-common]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [lang-expression]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [lang-groovy]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [lang-mustache]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [lang-painless]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [percolator]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [reindex]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [transport-netty3]
[2017-02-12T10:50:45,181][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded module [transport-netty4]
[2017-02-12T10:50:45,182][INFO ][o.e.p.PluginsService     ] [06QyjUd] loaded plugin [x-pack]
[2017-02-12T10:50:47,007][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.XPackPlugin
[2017-02-12T10:50:48,592][INFO ][o.e.n.Node               ] initialized
[2017-02-12T10:50:48,592][INFO ][o.e.n.Node               ] [06QyjUd] starting ...
[2017-02-12T10:50:48,822][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: fa:da:fa:ba:fa:2a:6a:6a
[2017-02-12T10:50:49,078][INFO ][o.e.t.TransportService   ] [06QyjUd] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-02-12T10:50:49,086][WARN ][o.e.b.BootstrapChecks    ] [06QyjUd] memory locking requested for elasticsearch process but memory is not locked
[2017-02-12T10:50:52,306][INFO ][o.e.c.s.ClusterService   ] [06QyjUd] new_master {06QyjUd}{06QyjUdsSrqXMpWoZnF8Dg}{0-g0JlLSRzq751syRNcruQ}{localhost}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-02-12T10:50:52,342][INFO ][o.e.h.HttpServer         ] [06QyjUd] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-02-12T10:50:52,343][INFO ][o.e.n.Node               ] [06QyjUd] started
[2017-02-12T10:50:55,198][INFO ][o.e.l.LicenseService     ] [06QyjUd] license [xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx] mode [basic] - valid
[2017-02-12T10:50:55,211][INFO ][o.e.g.GatewayService     ] [06QyjUd] recovered [73] indices into cluster_state
[2017-02-12T10:51:15,727][INFO ][o.e.m.j.JvmGcMonitorService] [06QyjUd] [gc][27] overhead, spent [268ms] collecting in the last [1s]
[2017-02-12T10:51:23,398][INFO ][o.e.c.r.a.AllocationService] [06QyjUd] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-02-12T10:52:09,750][INFO ][o.e.m.j.JvmGcMonitorService] [06QyjUd] [gc][81] overhead, spent [302ms] collecting in the last [1s]

So I finally figured out my issue! It was a failure to understand the X-Pack installation instructions that caused it. Step 3 of installing X-Pack says to add

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*

to elasticsearch.yml if you have disabled automatic index creation in Elasticsearch. I didn't understand that the step was conditional, and simply added that line to my elasticsearch.yml file anyway.

This error didn't show itself right away, as I already had data pumping into Logstash and the current day's index had been created before I added that config change and restarted the service. It only showed up when the next day's index was due to be created and creation was no longer allowed, because of that option I had set in the config.
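Looking back, this is easy to reproduce directly against Elasticsearch while the restrictive setting is in place; something like the following (the index name is just an example that doesn't match any of the allowed patterns) should fail instead of creating the index:

curl -XPOST 'http://localhost:9200/filebeat-2017.02.13/cowrie' -H 'Content-Type: application/json' -d '{"message": "test"}'
# Expected with auto-creation blocked: a 404 "index_not_found_exception",
# which is exactly the error Logstash was logging in its bulk responses.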

I have since modified my yml file so that it now has

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,filebeat*,logstash*

I'm sure I could also simply remove that line and everything would work as well. Unfortunately it was just a misunderstanding, as a novice user setting up the ELK stack and X-Pack, that led me down this path. Hopefully, if somebody else runs across this issue, they will find this post and can shorten their troubleshooting time on this self-induced problem caused by misreading that step.
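As a footnote for anyone else who hits this: I believe action.auto_create_index is also a dynamic cluster setting, so it can be updated at runtime without editing elasticsearch.yml and restarting. A sketch, mirroring the value I ended up with:

curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {
    "action.auto_create_index": ".security,.monitoring*,.watches,.triggered_watches,.watcher-history*,filebeat*,logstash*"
  }
}'
# GET /_cluster/settings afterwards shows the persistent value; note that a
# persistent cluster setting takes precedence over the one in elasticsearch.yml,
# so keep the two in sync.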

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.