Many active indices:admin/get tasks

Hi all,

Have a strange issue where we have around 1,500 active tasks on our 5-node 5.2 ES cluster. They are all on the same node (the current master, and the endpoint for Kibana/Monitoring) and go back for days.

Any ideas what these could be? I tried to do a bulk cancel on them to no avail.

curl "http://<server>:9200/_cat/tasks?v&h=action,task_id,parent_task_id,type,timestamp"
...
indices:admin/get I_eYf5eiTX-uwkuoO2huEA:26400939 - transport 09:58:17
...
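
For context, a bulk cancel via the task management API would look roughly like this (a sketch, filtering on the action name from the listing above, same host placeholder):

curl -XPOST "http://<server>:9200/_tasks/_cancel?actions=indices:admin/get"

Whatever the exact form, it had no effect on these tasks.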

I'm not sure if it's causing an issue, but it's not happening on our 2.3 cluster.

Any help greatly appreciated!

Dave

Do you have a transport client connected to the cluster?

Not quite sure what would count as a transport client. We do use the readonlyrest plugin, although that's on all nodes in the cluster, not just the master (where I'm seeing this).

What does _cat/nodes show?

ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.16.16.56           21          97   0    0.01    0.03     0.05 mdi       -      nodeb
172.16.16.46           27          98   0    0.00    0.01     0.05 mdi       -      noded
172.16.16.73           74          98   0    0.00    0.01     0.05 mdi       -      nodee
172.16.16.13           43          98   0    0.00    0.01     0.05 mdi       -      nodec
172.16.16.23           24          98   1    0.03    0.04     0.05 mdi       *      nodea
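
For reference, that output is just the cat nodes endpoint with headers, something like (same host placeholder as before):

curl "http://<server>:9200/_cat/nodes?v"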

One other thing I've noticed with the tasks: they appear every minute, and there are 4 of them for each minute.
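
If it helps with debugging, the task management API can show a bit more than _cat/tasks, including parent tasks and per-node grouping; a sketch (same host placeholder, filtering on the action above):

curl "http://<server>:9200/_tasks?actions=indices:admin/get&detailed=true"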

I've just checked, and this is actually happening on our single-node 5.3.2 instance too. Any ideas what might be doing it?

I've done a bit more digging on this: it appears to be related to HTTP auth, readonlyrest and Sense. When you are prompted for a user/password, it's logged as a failed attempt in the ROR logs, but the indices:admin/get task hangs around.

I've raised it with readonlyrest, but I'm not sure whether this is an ES bug or a ROR one; the problem is that I can't cancel the tasks in ES.
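
For completeness, cancelling an individual task would normally look like this (a sketch, using the task id from the earlier listing):

curl -XPOST "http://<server>:9200/_tasks/I_eYf5eiTX-uwkuoO2huEA:26400939/_cancel"

but these indices:admin/get tasks don't seem to respond to that either.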
