Shrink ends in red index state

I'm currently shrinking a series of index patterns and have suddenly run into a problem. When shrinking our main index pattern, I get a red index health. The source index is only 2.2 GB, with 5 shards and 1 replica, and I get this behavior every time with this index pattern.

The new index is created, however it has no docs or size and is just in a red state. I'm using Curator and I don't see any errors in the log, even with debug logging enabled.

The really weird thing is that I've probably shrunk 300 indices in other index patterns without a single issue.

Any input on how to troubleshoot this?

ES v6.3.1
Curator v5.5.4 and v5.6.0

Log:

Configuration:

actions:
  1:
    action: shrink
    description: >-
      Shrink selected indices on the node with the most available space.
      Delete source index after successful shrink, then reroute the shrunk
      index with the provided parameters.
    options:
      ignore_empty_list: True
      shrink_node: DETERMINISTIC
      node_filters:
        permit_masters: True
      number_of_shards: 1
      number_of_replicas: 1
      shrink_prefix:
      shrink_suffix: '-cold'
      delete_after: True
      wait_for_active_shards: 1
      extra_settings:
        settings:
          index.codec: best_compression
      wait_for_completion: True
      wait_for_rebalance: True
      wait_interval: 9
      max_wait: -1
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-2018.1
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 100

Just to be clear, it's the newly shrunk index that is empty and has a red state. Has anyone experienced this before?

I take it you mean in the Curator logs? I think there will be informative messages in the Elasticsearch server logs.
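Besides the server logs, the cluster allocation explain API can report why the new index's primary shard is unassigned. A minimal sketch using only Python's standard library (the host, index name, and shard number here are assumptions for illustration — adjust to your cluster):

```python
import json
import urllib.request

# Ask Elasticsearch why a specific shard is unassigned/red.
# Index and shard values are examples from this thread; host is assumed.
body = json.dumps({
    "index": "logstash-2018.10.23-cold",
    "shard": 0,
    "primary": True,
}).encode()

req = urllib.request.Request(
    "http://localhost:9200/_cluster/allocation/explain",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment against a live cluster:
# with urllib.request.urlopen(req) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```

The response includes an `unassigned_info.reason` (e.g. `ALLOCATION_FAILED`) and per-node explanations, which usually point straight at the failure.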

What operating system are you using?

@DavidTurner - Mix of Windows 2012r2/2016

Yeah, it's a bit strange that there are no errors or warnings. And I've been processing 4 other index patterns without any issues. Could it be an index template thing? It's pretty easy to reproduce in my cluster - is there any data from the newly created red index that might help troubleshoot this issue?

Can't believe I missed this in the log - it turns out the normal index template is not applied, hence the total field limit is exceeded:

[2019-02-28T15:09:49,958][WARN ][o.e.i.c.IndicesClusterStateService] [PROD-ELK04] [[logstash-2018.10.23-cold][0]] marking and sending shard failed due to [failed recovery]

org.elasticsearch.indices.recovery.RecoveryFailedException: [logstash-2018.10.23-cold][0]: Recovery failed on {PROD-ELK04}{ze-iS300TQqCaweeIdosfQ}{JhwDiXdDSvG_ZAioANqzcA}{PROD-ELK04.prod.dli}{10.0.20.54:9300}{xpack.installed=true}

Caused by: java.lang.IllegalArgumentException: Limit of total fields [1000] in index [logstash-2018.10.23-cold] has been exceeded
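For context on that error: the limit counts every field in the index mapping, including object containers and multi-fields such as `keyword` sub-fields. A rough sketch of that counting (simplified - Elasticsearch's actual accounting also covers things like field aliases, so treat this as an estimate against the output of `GET <index>/_mapping`):

```python
# Rough estimate of how fields add up against
# index.mapping.total_fields.limit (default 1000).

def count_fields(properties):
    """Recursively count fields in a mapping 'properties' dict."""
    total = 0
    for field_def in properties.values():
        total += 1  # the field itself
        # sub-objects (object/nested types)
        total += count_fields(field_def.get("properties", {}))
        # multi-fields, e.g. a 'keyword' sub-field on a text field
        total += len(field_def.get("fields", {}))
    return total

mapping = {
    "message": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
    "host": {"properties": {"name": {"type": "keyword"}}},
}
print(count_fields(mapping))  # message + keyword + host + host.name = 4
```

With dynamic mapping and varied log events, logstash-style indices can blow past 1000 fields quickly, which is why the template normally raises the limit.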

Thanks for helping out.


Well spotted, yes that'd explain it.

Turns out to be a known problem for ES 6.3: https://github.com/elastic/curator/issues/1347
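For anyone hitting the same error before a fix lands: since `extra_settings` is already being passed in the config above, one possible workaround (my assumption based on the error message, not something confirmed in the thread or the linked issue) is to also raise the per-index field limit on the shrunk index:

```yaml
      extra_settings:
        settings:
          index.codec: best_compression
          index.mapping.total_fields.limit: 2000
```

The value 2000 is an example; set it at or above whatever your index template normally applies.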

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.