Curator request_body documentation error

The current documentation says you can specify an index list like this:

  request_body:
    source:
      index: ['index1', 'index2', 'index3']
    dest:
      index: new_index

In reality, that syntax will result in ES API errors such as:

"description":"reindex from ['index1'] to [index1_r]","start_time_in_millis":1532101508423,"running_time_in_nanos":103798,"cancellable":true},"error":{
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "'index1'",
"index_uuid" : "_na_",
"index" : "'index1'"

I believe, based on testing, that the correct syntax omits the single quotes around the index names.
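In other words, this is the form that worked for me (same placeholder index names, no quotes):

  request_body:
    source:
      index: [index1, index2, index3]
    dest:
      index: new_index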

This page has the bad syntax:
https://www.elastic.co/guide/en/elasticsearch/client/curator/5.x/option_request_body.html#option_request_body

Please submit issues to https://github.com/elastic/curator, rather than the discussion forums.

Sure. I couldn't remember or easily find where to report issues, so I reported it here first.

Since it's now clear this comes from trying to pass an array as a string through an environment variable, why are you populating the list of indices this way? Maybe I can help find another way to address this.

That's a long story that I may not have time to fully explain today.

Short version:

Some specific changes to the disk space filters, or something along those lines, would go a long way. In general, I need to identify stale indices (ones not recently updated) with inefficient shard sizes (say, less than 25GB per shard).

I have to keep all of my data until the cluster is at full capacity (or 85%). I have dense storage, about 6.4TB of SSD per 128GB of RAM, so I'll exceed reasonable shard counts before I run out of raw space unless I condense inefficient indices. As a rough illustration: at the commonly cited guideline of about 20 shards per GB of heap, one of these nodes supports roughly 600 shards, and if those are mostly 10-shard indices holding a couple of GB each, the shard budget is exhausted with barely more than 1TB of the 6.4TB used.

Currently I wrap curator shrink and reindex jobs in scripts that do the extra filtering to identify stale indices and to calculate more efficient shard counts from each index's data volume.

I've tried extensively to manipulate the "space" filter for this purpose, but it really doesn't suit this need.
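For illustration, the closest I can get with the stock filters is something like this (a sketch based on my reading of the Curator 5.x filter reference; the prefix and timestamp field are placeholders from my setup). It can express "stale" and "more than one shard", but nothing here can express "less than 25GB per shard", which is the gap:

  filters:
  - filtertype: pattern
    kind: prefix
    value: dataset_
  # stale: the newest document is more than 7 days old
  - filtertype: age
    source: field_stats
    field: '@timestamp'
    stats_result: max_value
    direction: older
    unit: days
    unit_count: 7
  # still carrying more than 1 primary shard
  - filtertype: shards
    number_of_shards: 1
    shard_filter_behavior: greater_than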

Long version:

I have an application that, for various reasons, generates somewhat unpredictable index names. They all start with a predictable prefix but have an unpredictable suffix driven by specific requirements.

They'll all look something like: dataset_001_0_12345678

I cannot change the naming convention at this time; it's tied very deeply to how the application works and to its parent/child relationships. That's all going away in a future version, when the application moves to a flat structure and can use the normal rollover features to avoid this whole situation.

For now, these indices typically start out with around 10 primary shards plus replicas. Some of them grow to around 250GB (primary size) on average, or 25GB/shard, which is fine. Others may only grow to a couple of GB or less, yet still carry 10 shards each. Either way, these indices are created and filled very quickly, and the data within them must be retained for months.

Due to unpredictable data from the application, I end up with thousands of shards overall: perhaps 7,000 to 9,000 primary shards currently on one example system across 25 nodes.

This is down from a much higher number, thanks largely to curator jobs that shrink indices of at least ~25GB total from 10 shards down to 1.
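The Curator side of those jobs is essentially a shrink action like this (a simplified sketch; in practice the wrapper scripts do the stale/size selection, and the names here are placeholders):

  actions:
    1:
      action: shrink
      description: "Shrink qualifying dataset_* indices from 10 shards down to 1"
      options:
        # pick the least-utilized data node to shrink onto
        shrink_node: DETERMINISTIC
        number_of_shards: 1
        # remove the original index once the shrink succeeds
        delete_after: True
        wait_for_completion: True
      filters:
      - filtertype: pattern
        kind: prefix
        value: dataset_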

But I'm also focused on consolidating lots of smaller indices together, to retain the data while reducing the overall count. I have a requirement to be able to query all of the data, and I need the higher shard count for fast writes as the data arrives, but each index can be condensed once it stops updating, even if that reduces the general read performance of particular chunks of data.

So I'm basically trying to either:

  1. Shrink individual indices to average at least 25GB per shard.
  2. Reindex multiple indices together to consolidate more shards toward the same goal (see the sketch below).
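For goal 2, the reindex action looks roughly like this, which is where the index-list syntax from the top of this thread comes in (the source list is computed by the wrapper scripts; names here are placeholders):

  actions:
    1:
      action: reindex
      description: "Consolidate several small stale indices into one"
      options:
        wait_for_completion: True
        request_body:
          source:
            # multiple small source indices, merged into one destination
            index: [dataset_001_0_12345678, dataset_002_0_87654321]
          dest:
            index: dataset_condensed_001
      filters:
      # index selection is done in request_body, so no filtering here
      - filtertype: none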

I also have to detect failures outside of curator, clean up the failed indices, and either try again or move on to another candidate.

For instance, about 1 out of 10 times, curator shrink creates the new index but fails to allocate its shards and exits, leaving that index "red". So we delete it and retry. I can open a bug ticket about this later.

For reindex, we verify before-and-after document counts for confidence before deleting the original indices in favor of the newly condensed one. If the reindex action could measure success and auto-delete its inputs, that would also be helpful.

While this would, indeed, be a beautiful thing, it's extremely difficult for Curator to know whether a reindex should contain all of the source documents or not, or whether it should sum all of the documents from multiple indices or not. Because reindexing from a query is possible, and a query might select only a subset, it's just too difficult for Curator to programmatically guarantee that the source doc count should, or will, equal the target doc count.

The next major release should permit re-use of the previous run's results without having to re-sample and re-filter everything again, which might help with this in some way.
