Index not rolling over since upgrading to 7.6

On earlier versions of Logstash and Elasticsearch a new index was created every day, with a name in the form logstash-%{+YYYY.MM.dd}. After upgrading from 6.8 to 7.6, indexes started to be created with errors like:

illegal_argument_exception: index.lifecycle.rollover_alias [logstash] does not point to index [logstash-2020.01.30-000001]
illegal_argument_exception: index.lifecycle.rollover_alias [logstash] does not point to index [logstash-2020.02.07]
illegal_argument_exception: index name [logstash-2020.02.12] does not match pattern '^.*-\d+$'

These errors seem to disappear and return, as seen in Kibana Index Management.

Worse, from my point of view, is that no new indexes have been created since the 12th of February. All logs are going to logstash-2020.02.12, so that one index keeps growing.

I am running logstash version

logstash 7.6.0
jruby 9.2.9.0 (2.5.7) 2019-10-30 458ad3e OpenJDK 64-Bit Server VM 25.242-b08 on 1.8.0_242-8u242-b08-0ubuntu3~18.04-b08 +indy +jit [linux-x86_64]
java 1.8.0_242 (Private Build)
jvm OpenJDK 64-Bit Server VM / 25.242-b08

and elasticsearch version

Version: 7.6.0, Build: default/deb/7f634e9f44834fbc12724506cc1da681b0c3b1e3/2020-02-06T00:09:00.449973Z, JVM: 13.0.2

My logstash configuration for elasticsearch is

output {
  elasticsearch {
    hosts =>  ["http://host1:9200", "http://host2:9200", "http://host3:9200", "http://host4:9200"]
    template_overwrite => true
  }
}

(Host names changed)

I believe this has something to do with Index Lifecycle Management, which is enabled by default in 7.6. Logstash seems to want to write to an alias called logstash even though ilm_rollover_alias is not explicitly set.
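To check that assumption, something like the following (run from the Kibana Dev Tools console) should show what the logstash alias currently points at and which indexes exist. This is just a diagnostic sketch:

GET _cat/aliases/logstash?v
GET _cat/indices/logstash-*?v&s=index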

I am not sure how to proceed.

  • Do I set index explicitly?
  • Do I run Logstash with the --setup option?
  • Is there somewhere in Logstash to edit that pattern so that this last index matches? (See the sketch after this list.)
  • Are there elasticsearch queries or operations I need to perform?
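For the third bullet, my guess is that the alias and pattern can be set explicitly on the output using the documented ilm_* options of the elasticsearch output plugin, something like this sketch (the policy name my-logstash-policy is just a placeholder):

output {
  elasticsearch {
    hosts => ["http://host1:9200", "http://host2:9200", "http://host3:9200", "http://host4:9200"]
    ilm_rollover_alias => "logstash"
    ilm_pattern => "{now/d}-000001"
    ilm_policy => "my-logstash-policy"
  }
}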

Finally, when it comes to index aliases, is there a difference between a rollover alias and a write alias, and is this logstash alias trying to be both?
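My (possibly wrong) understanding from the rollover docs is that a rollover alias is just an ordinary alias whose current target is flagged as the write index, so the bootstrap step would look something like this (the index name is illustrative):

PUT logstash-2020.02.13-000001
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}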

Thank you.

As an experiment I tried changing my configuration to this:

output {
  elasticsearch {
    hosts =>  [{% for target_host in elasticsaerch_hosts %}"http://{{ target_host}}:9200"{% if not loop.last %}, {% endif %}{% endfor %}]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

My reasoning is this:

Logstash will write either to the value indicated by index or to the value indicated by ilm_rollover_alias. The documentation (ilm_rollover_alias) says that if both are specified then ilm_rollover_alias takes precedence. I had specified neither, and had assumed that meant the default of index (logstash-%{+YYYY.MM.dd}) would be in play.

Instead, Logstash is writing to logstash-2020.02.12, which has the write alias logstash (the default value of ilm_rollover_alias).

I thought that by explicitly setting index (to its default) it would take precedence.

This does not seem to be the case. Logstash is still writing to the one index.

My understanding is that that is no longer the effective default when ILM is enabled. If you want to disable ILM then ... disable ILM :slight_smile:

I am correcting my original comment here as it is wrong.

So my next experiment was, as @Badger suggested, to turn ILM off. My configuration now looks like

output {
  elasticsearch {
    hosts =>  [{% for target_host in elasticsaerch_hosts %}"http://{{ target_host}}:9200"{% if not loop.last %}, {% endif %}{% endfor %}]
    ilm_enabled => "false"
  }
}  

A new index has been created, so I don't have one huge growing index any more.

I still don't understand the behaviour I had been seeing. And given that ILM seems to be the direction Elasticsearch is heading, I'm not sure that turning it off is a long-term solution.

Will old indexes still expire, or will my disk fill up?
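As far as I can tell nothing expires time-based indexes by default, so with ILM turned off something like Curator would have to do the deleting. If I were to re-enable ILM, my understanding is that expiry would come from a delete phase in the policy, roughly like this sketch (the policy name and retention periods are assumptions on my part):

PUT _ilm/policy/logstash-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "60d",
        "actions": { "delete": {} }
      }
    }
  }
}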

Assuming that ILM does generally work, can anyone give a hint as to what is wrong with my setup that prevents it from working?

Very clearly I am missing something important here; otherwise I think there would be a lot of people with the same problem. But what?
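One theory, in case it helps anyone diagnose this: the write alias got attached to a plain daily index (logstash-2020.02.12) whose name does not match the rollover pattern '^.*-\d+$', so rollover can never succeed. If that is right, a repair might look like the sketch below (index names assumed; the -000001 index would need to be bootstrapped first, as above):

POST _aliases
{
  "actions": [
    { "remove": { "index": "logstash-2020.02.12", "alias": "logstash" } },
    { "add": { "index": "logstash-2020.02.13-000001", "alias": "logstash", "is_write_index": true } }
  ]
}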
