Data/write/bulk running time way too long. How do you cancel?

I have a bunch of odd tasks putting a ton of load on my nodes, with no way to cancel them. They definitely seem stuck. All indices are still open, so it's not an issue of an index becoming unavailable, and shard sizes are acceptable as well:

action                  task_id                          parent_task_id type      start_time    timestamp running_time node
indices:data/write/bulk QeT_-8R1QqWmbPZlmjuSZA:26954133  -              transport 1595116834816 00:00:34  15d          elasticsearch-opendistro-es-data-1
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:27589322  -              transport 1595203228778 00:00:28  14d          elasticsearch-opendistro-es-data-2
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:27592661  -              transport 1595203305447 00:01:45  14d          elasticsearch-opendistro-es-data-2
indices:data/write/bulk ejhFdLkDQ7KaI9oqkLqArA:31144246  -              transport 1595203660670 00:07:40  14d          elasticsearch-opendistro-es-data-0
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:27639595  -              transport 1595204219020 00:16:59  14d          elasticsearch-opendistro-es-data-2
indices:data/write/bulk zOwVB92bSNym7UV3H_JQxA:38376931  -              transport 1595548813898 00:00:13  10d          elasticsearch-opendistro-es-data-7
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:56894289  -              transport 1595635266464 00:01:06  9d           elasticsearch-opendistro-es-data-2
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:63167009  -              transport 1595721628451 00:00:28  8d           elasticsearch-opendistro-es-data-2
indices:data/write/bulk PbutBjI8Qquq3cDLD8Js2Q:61541331  -              transport 1595807629614 23:53:49  7d           elasticsearch-opendistro-es-data-5
indices:data/write/bulk WyiH88UwRtK_BojPp3Dh6w:69671816  -              transport 1595808092348 00:01:32  7d           elasticsearch-opendistro-es-data-2
indices:data/write/bulk zOwVB92bSNym7UV3H_JQxA:51289782  -              transport 1595808333601 00:05:33  7d           elasticsearch-opendistro-es-data-7
indices:data/write/bulk zOwVB92bSNym7UV3H_JQxA:51294119  -              transport 1595808515014 00:08:35  7d           elasticsearch-opendistro-es-data-7

Any ideas on what can be done to cancel or close these tasks?

Note, all subsequent tasks are completing OK.
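For reference, here is a sketch of the Task Management API calls involved, assuming the cluster is reachable on `localhost:9200` (adjust the host for your setup); the task id is one from the table above:

```shell
# Assumed endpoint; change host/port to match your cluster.
ES="localhost:9200"
# One of the stuck task ids from the table above (format is node_id:task_number).
TASK="QeT_-8R1QqWmbPZlmjuSZA:26954133"

# List running bulk tasks (this is the same view as the table in this post).
curl -s "$ES/_cat/tasks?v&detailed" | grep 'indices:data/write/bulk' || true

# Attempt to cancel a single task by id. Note that not all task types are
# cancellable; if this one is not, Elasticsearch returns an error saying so.
curl -s -X POST "$ES/_tasks/$TASK/_cancel" || true
```

The `|| true` guards just keep the snippet from aborting if the cluster is unreachable when pasted into a script.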

Which version of Elasticsearch are you using?

These seem to be related to some periodic job, as they appear at roughly the same time each day. Off the top of my head, I am not aware of anything in standard Elasticsearch that would trigger at that time each day. As you seem to be running OpenDistro, I would recommend checking with their community whether it may be related to some OpenDistro component.

Hello @Christian_Dahlqvist

Elasticsearch 7.8.0.

We have a constant stream of documents being written. Timewise, this is about when new indices would be created. I don't have any "missing" indices, and the indices these tasks are writing to do exist. I've tried closing and reopening the index that a task is writing to, but the task is still stuck there pending.

Other than restarting the node, is there a way to cancel or close these tasks?

What is the output of the cluster stats API? What is the size and specification of the cluster in terms of hardware?
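For reference, these are the calls I mean (a sketch, assuming the default `localhost:9200` endpoint):

```shell
# Assumed endpoint; adjust for your cluster.
ES="localhost:9200"

# Cluster-wide stats: node counts, index/shard totals, heap and storage usage.
curl -s "$ES/_cluster/stats?human&pretty" || true

# Per-node OS and JVM stats give the hardware side (CPU, memory, heap).
curl -s "$ES/_nodes/stats/os,jvm?human&pretty" || true
```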

I have never seen bulk requests time out and get stuck like that when a new index is created on standard Elasticsearch, so I would not rule out that it has something to do with OpenDistro, with which I have no experience.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.