Timeouts while deleting

The topic title says it all:

org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (delete-index [logstash-2015.03.21]) within 30s
at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:258)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Is there a way to increase the 30 second time out? Thank you.

What are you using to call the delete? curl? An API call in a language client? Curator?

This is with Curator.

Heh. Curator has a --timeout flag, but for some reason I did not include it in the list of flags in the documentation. I'll be fixing that shortly.

$ curator --help
Usage: curator [OPTIONS] COMMAND [ARGS]...

  Curator for Elasticsearch indices.

  See http://elastic.co/guide/en/elasticsearch/client/curator/current

Options:
  --host TEXT        Elasticsearch host.
  --url_prefix TEXT  Elasticsearch http url prefix.
  --port INTEGER     Elasticsearch port.
  --use_ssl          Connect to Elasticsearch through SSL.
  --http_auth TEXT   Use Basic Authentication ex: user:pass
  --timeout INTEGER  Connection timeout in seconds.
  --master-only      Only operate on elected master node.
  --dry-run          Do not perform any changes.
  --debug            Debug mode
  --loglevel TEXT    Log level
  --logfile TEXT     log file
  --logformat TEXT   Log output format [default|logstash].
  --version          Show the version and exit.
  --help             Show this message and exit.

So basically, you should only need to do:

curator --timeout 60 <rest of command-line>

As a matter of advice: if you're getting a 30-second timeout trying to delete an index, that's an indication that something in your cluster is very busy or wrong. It should take a few seconds at most.
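If you want a quick sanity check of the cluster yourself, a one-line health summary works (a minimal sketch, assuming Elasticsearch is listening on localhost:9200, as in the curl examples further down):

# one-line cluster overview: status, node count, shard counts, unassigned shards
curl 'localhost:9200/_cat/health?v'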

Thanks... ironically, this is with --timeout 240, and I still get the 30-second message as above.

What version of Curator are you using?

[14:07:05 @siem:~$] curator --version
curator, version 3.2.0

Just pip'd today.

What is the full command-line you're using?

curator --debug --timeout 240 --host x.x.x.x delete indices --older-than 100 --time-unit days --timestring '%Y.%m.%d'

It does work for, say, the first few (each daily index is about 4 GB), and then tanks after the 30-second timeout. I've already deleted all I need to by just rerunning the above a few times... I just thought I'd find a way to do it all in one shot instead of several passes. Thank you.

Hmmm. That should work. Can you raise an issue at https://github.com/elastic/curator/issues and attach the debug output?

It should also batch them and delete them all at once (unless the list is too big, in which case it segments it into smaller batches). The fact that it's taking this long to delete them is troubling from a cluster-health perspective.
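At the API level, a batched delete is just a comma-separated list of index names in a single DELETE request; a minimal sketch, with example index names and the usual localhost:9200 host assumed:

# delete several daily indices in one request (index names are examples)
curl -XDELETE 'localhost:9200/logstash-2015.03.21,logstash-2015.03.22,logstash-2015.03.23'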

Never mind. It's not the client timing out; it's Elasticsearch. That's not configurable, as far as I know. As I said, this is indicative of an overtaxed cluster. It shouldn't take this long to delete an index and update the cluster state, hence the error.
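If you want to see what the master is working through when that error fires, the pending tasks API lists the queued cluster-state updates; a minimal check, again assuming localhost:9200:

# cluster-state update tasks waiting in the master's queue
curl 'localhost:9200/_cluster/pending_tasks'

A long queue here is a good sign the cluster is struggling to keep up with state changes.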

How many indices do you have? How many nodes? How many shards per index? You're trying to keep 100 days, so I'm trying to ascertain how big your cluster is, and what's going on.

So, as I understand it, each index is a day, so yes, I am keeping about 100 days. When I ran the above command in debug mode, it would indeed delete about 5 days' worth each time. It was almost like the 30 seconds applied to the entire operation, not to each index deletion. If I tried to delete just one day, it would work fine, since the entire operation took less than 30 seconds. But if I tried to delete, say, 20 days, it would get about 5 in and then fail out, and that took 30 seconds. So again, I suspect the timeout covers the entire operation, which makes sense: if it takes 8 seconds to delete a single index, it would only get through about 5 before the error. Does that make sense?

It makes sense, but you need to provide the other info that was requested :slight_smile:

Can do... so, uh, how do I find out? :smiley: I have one node and 100 indices (one day = one index), but I have no idea how many shards I have per index, or how to find out. Thank you.

Take a look at the _cat APIs.

LoL... where would I find those?

https://www.elastic.co/guide/en/elasticsearch/reference/current/cat.html
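For example (adding ?v to any _cat endpoint prints column headers, which makes the output easier to read; the index name below is just an example):

# shards for one index, with headers
curl 'localhost:9200/_cat/shards/logstash-2015.03.21?v'
# disk use and shard count per node
curl 'localhost:9200/_cat/allocation?v'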

Oh cool! OK... here's one index:

logstash-2015.06.17 4 p STARTED 490073 384.3mb 127.0.1.1 Mary Jane Watson
logstash-2015.06.17 4 r UNASSIGNED
logstash-2015.06.17 0 p STARTED 489898 381.1mb 127.0.1.1 Mary Jane Watson
logstash-2015.06.17 0 r UNASSIGNED
logstash-2015.06.17 3 p STARTED 489868 384.5mb 127.0.1.1 Mary Jane Watson
logstash-2015.06.17 3 r UNASSIGNED
logstash-2015.06.17 1 p STARTED 489929 382.6mb 127.0.1.1 Mary Jane Watson
logstash-2015.06.17 1 r UNASSIGNED
logstash-2015.06.17 2 p STARTED 489795 381.5mb 127.0.1.1 Mary Jane Watson
logstash-2015.06.17 2 r UNASSIGNED

Is that what we're talking about here?

Yep.
What do _cat/allocation and _cat/indices both show?

Thanks again for looking at this... here's the info:

[16:09:54 @siem:~$] curl 'localhost:9200/_cat/allocation'
505 384gb 1.4tb 1.8tb 20 siem 127.0.1.1 Mary Jane Watson
505 UNASSIGNED
[16:37:38 @siem:~$] curl 'localhost:9200/_cat/indices'
yellow open logstash-2015.05.02 5 1 2958579 0 2.2gb 2.2gb
yellow open logstash-2015.03.27 5 1 3676810 0 2.8gb 2.8gb
yellow open logstash-2015.06.05 5 1 4462236 0 3.4gb 3.4gb
yellow open logstash-2015.06.25 5 1 4854977 0 3.6gb 3.6gb
yellow open logstash-2015.05.07 5 1 2949358 0 2.2gb 2.2gb
yellow open logstash-2015.04.14 5 1 3972213 0 3.1gb 3.1gb
yellow open logstash-2015.04.09 5 1 3952969 0 3gb 3gb
yellow open logstash-2015.06.19 5 1 3399017 0 2.5gb 2.5gb
yellow open logstash-2015.03.31 5 1 3852314 0 3gb 3gb
yellow open logstash-2015.04.16 5 1 4140211 0 3.2gb 3.2gb
yellow open logstash-2015.04.05 5 1 2447008 0 1.9gb 1.9gb
yellow open logstash-2015.04.25 5 1 2760793 0 2.1gb 2.1gb
yellow open logstash-2015.05.01 5 1 4016392 0 3.1gb 3.1gb
yellow open logstash-2015.05.28 5 1 5068366 0 3.7gb 3.7gb
yellow open logstash-2015.05.27 5 1 2363421 0 1.8gb 1.8gb
yellow open logstash-2015.05.24 5 1 2983189 0 2.3gb 2.3gb
yellow open logstash-2015.05.11 5 1 4497458 0 3.5gb 3.5gb
yellow open logstash-2015.03.24 5 1 3954190 0 3gb 3gb
yellow open logstash-2015.04.23 5 1 4185077 0 3.2gb 3.2gb
yellow open logstash-2015.04.04 5 1 2710488 0 2gb 2gb
yellow open logstash-2015.05.12 5 1 4828498 0 3.8gb 3.8gb
yellow open logstash-2015.03.25 5 1 3584903 0 2.9gb 2.9gb
yellow open logstash-2015.05.10 5 1 2761893 0 2.1gb 2.1gb
yellow open logstash-2015.06.13 5 1 2767887 0 2gb 2gb
yellow open logstash-2015.06.24 5 1 5032705 0 3.7gb 3.7gb
yellow open logstash-2015.05.03 5 1 2711810 0 2gb 2gb
yellow open logstash-2015.05.31 5 1 3270329 0 2.5gb 2.5gb
yellow open logstash-2015.05.21 5 1 3255608 0 2.3gb 2.3gb
yellow open logstash-2015.05.15 5 1 5185883 0 3.9gb 3.9gb
yellow open logstash-2015.04.07 5 1 5231331 0 3.4gb 3.4gb
yellow open logstash-2015.04.11 5 1 2538964 0 1.9gb 1.9gb
yellow open logstash-2015.04.15 5 1 3840173 0 3gb 3gb
yellow open logstash-2015.06.12 5 1 4244887 0 3.1gb 3.1gb
yellow open logstash-2015.04.28 5 1 4651532 0 3.6gb 3.6gb
yellow open logstash-2015.06.15 5 1 4334419 0 3.2gb 3.2gb
yellow open logstash-2015.05.18 5 1 5072608 0 3.8gb 3.8gb
yellow open logstash-2015.05.30 5 1 3516188 0 2.7gb 2.7gb
yellow open logstash-2015.06.08 5 1 4584792 0 3.4gb 3.4gb
yellow open logstash-2015.06.23 5 1 4843911 0 3.7gb 3.7gb
yellow open logstash-2015.06.16 5 1 3394755 0 2.5gb 2.5gb
yellow open logstash-2015.04.29 5 1 4561917 0 3.4gb 3.4gb
yellow open logstash-2015.05.25 5 1 3764613 0 2.9gb 2.9gb
yellow open logstash-2015.05.08 5 1 3962875 0 3.1gb 3.1gb
yellow open logstash-2015.05.17 5 1 3001934 0 2.3gb 2.3gb
yellow open logstash-2015.06.06 5 1 3023327 0 2.2gb 2.2gb
yellow open logstash-2015.04.08 5 1 4093319 0 3.2gb 3.2gb
yellow open logstash-2015.06.26 5 1 4655077 0 3.5gb 3.5gb
yellow open logstash-2015.06.29 5 1 4755901 0 3.5gb 3.5gb
yellow open logstash-2015.05.20 5 1 1783880 0 831.5mb 831.5mb
yellow open logstash-2015.06.01 5 1 4665822 0 3.5gb 3.5gb
yellow open logstash-2015.06.09 5 1 4736314 0 3.5gb 3.5gb
yellow open logstash-2015.04.10 5 1 3809384 0 3.1gb 3.1gb
yellow open logstash-2015.04.12 5 1 2570098 0 2gb 2gb
yellow open logstash-2015.03.30 5 1 3991196 0 3gb 3gb
yellow open logstash-2015.06.02 5 1 5006534 0 3.7gb 3.7gb
yellow open logstash-2015.05.04 5 1 4107279 0 3gb 3gb
yellow open logstash-2015.06.21 5 1 2927194 0 2.2gb 2.2gb
yellow open logstash-2015.04.30 5 1 4984603 0 3.6gb 3.6gb
yellow open logstash-2015.03.22 5 1 2676850 0 2.1gb 2.1gb
yellow open logstash-2015.03.28 5 1 2685006 0 2gb 2gb
yellow open logstash-2015.05.14 5 1 5123646 0 3.9gb 3.9gb
yellow open logstash-2015.04.21 5 1 4438434 0 3.4gb 3.4gb
yellow open logstash-2015.04.13 5 1 3894472 0 3.1gb 3.1gb
yellow open logstash-2015.06.17 5 1 2449563 0 1.8gb 1.8gb
yellow open logstash-2015.05.29 5 1 4477989 0 3.3gb 3.3gb
yellow open logstash-2015.04.03 5 1 3639271 0 2.8gb 2.8gb
yellow open kibana-int 5 1 3 0 64.1kb 64.1kb
yellow open logstash-2015.04.02 5 1 3655835 0 2.8gb 2.8gb
yellow open logstash-2015.05.09 5 1 2814500 0 2.1gb 2.1gb
yellow open logstash-2015.04.22 5 1 4493244 0 3.5gb 3.5gb
yellow open logstash-2015.04.17 5 1 3909608 0 3.1gb 3.1gb
yellow open logstash-2015.06.07 5 1 3131635 0 2.3gb 2.3gb
yellow open logstash-2015.05.26 5 1 2265359 0 1.8gb 1.8gb
yellow open logstash-2015.06.04 5 1 4566995 0 3.4gb 3.4gb
yellow open logstash-2015.06.22 5 1 5242101 0 3.8gb 3.8gb
yellow open logstash-2015.06.11 5 1 4284245 0 3.2gb 3.2gb
yellow open logstash-2015.06.20 5 1 2975288 0 2.2gb 2.2gb
yellow open logstash-2015.04.19 5 1 2613993 0 2gb 2gb
yellow open logstash-2015.05.19 5 1 4718377 0 3.3gb 3.3gb
yellow open logstash-2015.04.24 5 1 3999121 0 3.1gb 3.1gb
yellow open logstash-2015.04.18 5 1 2725577 0 2.1gb 2.1gb
yellow open logstash-2015.04.26 5 1 2605657 0 2gb 2gb
yellow open logstash-2015.05.13 5 1 4707497 0 3.6gb 3.6gb
yellow open logstash-2015.05.16 5 1 3159790 0 2.3gb 2.3gb
yellow open logstash-2015.06.10 5 1 4269114 0 3.3gb 3.3gb
yellow open logstash-2015.03.23 5 1 3613063 0 2.9gb 2.9gb
yellow open logstash-2015.05.23 5 1 3100439 0 2.3gb 2.3gb
yellow open logstash-2015.03.26 5 1 3647712 0 2.9gb 2.9gb
yellow open logstash-2015.06.28 5 1 2976518 0 2.2gb 2.2gb
yellow open logstash-2015.04.20 5 1 4066973 0 3.2gb 3.2gb
yellow open logstash-2015.06.18 5 1 1996873 0 1.5gb 1.5gb
yellow open logstash-2015.05.06 5 1 2256924 0 1.7gb 1.7gb
yellow open logstash-2015.04.01 5 1 4113989 0 3.1gb 3.1gb
yellow open logstash-2015.05.05 5 1 3457372 0 2.5gb 2.5gb
yellow open logstash-2015.04.06 5 1 3417357 0 2.7gb 2.7gb
yellow open logstash-2015.06.14 5 1 2742573 0 2gb 2gb
yellow open logstash-2015.05.22 5 1 4377785 0 3.3gb 3.3gb
yellow open logstash-2015.04.27 5 1 4067400 0 3.2gb 3.2gb
yellow open logstash-2015.06.03 5 1 4587761 0 3.5gb 3.5gb
yellow open logstash-2015.06.27 5 1 3311286 0 2.3gb 2.3gb
yellow open logstash-2015.03.29 5 1 2277761 0 1.8gb 1.8gb