Index associated with ILM policy is later disassociated

Hi,

I'm using ILM, and when I create the index it is associated with the ILM policy. I can see the current phase, status, etc.

Currently I:

  1. Write the index template
  2. Write the ILM policy
  3. Create the index, with an alias that has is_write_index set (see the sketch after this list)
  4. Start up the Logstash agents
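
For reference, step 3 is a one-off bootstrap request along these lines (just a sketch; ${NAME} is a placeholder for our real index/alias name):

  PUT ${NAME}-000001
  {
    "aliases": {
      "${NAME}": {
        "is_write_index": true
      }
    }
  }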

However, after a certain amount of time, our Logstash fleet stops writing to ${NAME}-000001 and just writes to ${NAME}. It seems like the rotated index disappeared. I've also seen some Logstash agents fall back to writing to logstash-${DATE}.
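
For context, a quick way to check which concrete index the alias currently resolves to (assuming the alias is ${NAME}):

  GET _alias/${NAME}

When things are healthy this returns ${NAME}-000001 with "is_write_index": true; once the problem hits, the alias is gone and ${NAME} exists as a plain index.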

Is there anything I'm missing? This has happened twice now, for multiple ILM names. Happy to provide more details upon request.

Thanks, Justin

Hi! So what's happening is that you have Logstash configured to write to ${ALIAS}, and it writes for a while into ${INDEX}-000001 via the alias, but then eventually just writes to ${INDEX}? Or am I misunderstanding something? Does this always happen after -000001, or does it happen after a while (e.g. after -000183)?

Could you please post:

  1. The policy you're using that this has happened with (if it's happened with multiple policies, just one is fine)
  2. The index_patterns and settings from one of the templates this has happened with
  3. Your Logstash output config that this has happened with - the bit that looks like:
output {
  elasticsearch {
    # stuff
  }
}

With any sensitive details replaced with placeholders, obviously. That should help troubleshoot.
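
If it's easier, the first two can be pulled straight from the cluster with something like this (the names are placeholders):

  GET _ilm/policy/<policy_name>
  GET _template/<template_name>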

Hi Gordon,

Thanks for taking a look. Sorry if I wasn't clear -- ILM works as expected for a period of time, writing to ${INDEX}-000001, which has the write alias configured. Then, after a while, ${INDEX}-000001 disappears (deleted? I'm not sure) and everything writes to ${INDEX} without any ILM setup. Only on our test cluster (running 7.x) have I seen a successful rollover (-000002, etc.). The cluster having issues is running 6.7.1.
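
For reference, while a -000001 index still exists its ILM state can be checked with something like this (using the index name from the settings below):

  GET logs-6.7.1-000001/_ilm/explain

That reports the current phase, action, and step for the index, plus any ILM error.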

The policy:

{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "set_priority": {
            "priority": 80
          },

          "rollover": {
            "max_size": "50GB"
          }
        }
      },
      "warm": {
        "actions": {
          "set_priority": {
            "priority": 50
          },
          "allocate": {
            "number_of_replicas": 1
          },
          "readonly": {},
          "shrink": {
            "number_of_shards": 1
          },
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

Initial settings on logs-6.7.1-000001:

  "settings": {
    "index": {
      "lifecycle": {
        "name": "logs-6.7.1",
        "rollover_alias": "logs-6.7.1"
      },
      "refresh_interval": "30s",
      "number_of_shards": "2",
      "provided_name": "logs-6.7.1-000001",
      "creation_date": "1557876101497",
      "priority": "80",
      "number_of_replicas": "1",
      "uuid": "38vM9NZhRaiSVmdGVtXBtA",
      "version": {
        "created": "6070199"
      }
    }
  },

Settings after "something weird" happens, on the new index logs-6.7.1

  "settings": {
    "index": {
      "refresh_interval": "30s",
      "number_of_shards": "2",
      "provided_name": "logs-6.7.1",
      "creation_date": "1558350177789",
      "number_of_replicas": "1",
      "uuid": "P-aH9wSUS-6EuJqDyoYAhQ",
      "version": {
        "created": "6070199"
      }
    }
  },

Logstash output:

        elasticsearch {
          hosts => ["${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}"]
          user => "${ELASTICSEARCH_USERNAME:}"
          password => "${ELASTICSEARCH_PASSWORD:}"
          timeout => 90
          ilm_enabled => "true"
          ilm_rollover_alias => "logs-${LOGSTASH_VERSION:6.X}"
          ilm_pattern => "000001"
          ilm_policy => "logs-${LOGSTASH_VERSION:6.X}"
          manage_template => false
          document_type => "_doc"
        }

Best, Justin

Nothing in that info points to a cause that I can see. The only thing I can think of is that logs-6.7.1-000001 was somehow deleted, and Logstash, in trying to write to logs-6.7.1 (which it expects to be an alias), caused it to be created as a concrete index. This can happen when the index/alias is deleted; there's an open issue on the Elasticsearch repo discussing ways to prevent it.

Does the mapping of the logs-6.7.1 index match the mapping in your template, or is it different? What's the index_patterns on the template you're using and would it match logs-6.7.1?
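
Something along these lines should show both sides for comparison (the template name is a placeholder):

  GET logs-6.7.1/_mapping
  GET _template/<your_template_name>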

I'm not sure what would have caused the index to be deleted, though. Is there anything else talking to this cluster that might have done it?
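
One partial safeguard worth considering in the meantime (my own suggestion, not something I've verified against your setup): restrict automatic index creation so that a stray write to the alias name fails loudly instead of silently creating a concrete index. Something like:

  PUT _cluster/settings
  {
    "persistent": {
      "action.auto_create_index": "-logs-*,+*"
    }
  }

The exact pattern would need to fit your naming scheme, and ILM's rollover creates indices explicitly, so it isn't affected by this setting.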

I agree -- I think the original index is somehow being deleted. I've disabled the process that rotated out our old date-formatted indices (pre-ILM). Those follow a different naming scheme, but it's possible there's a loose pattern match. I'll let you know if I see it happen again after this change.

Thanks for the help so far! Best, Justin
