We're migrating from a 6.8 cluster to a 7.14 cluster and we're seeing some abnormal behavior with Kibana's task manager index. Sorry if this is the wrong forum.
For context, this is a 3-node Elasticsearch 7.14.1 cluster that, for now, we're only writing data to. We noticed that the Search Latency of some nodes was increasing linearly, despite no clients running any searches. After checking the indices, we narrowed it down to Kibana's Task Manager index. These are the index metrics for the last week:
We also see that the index currently takes up over 200 MB of disk, and it keeps growing:
Using the cat indices API, we see that the index has 15 docs and more than 600k deleted docs. I assume those pending deletes are why searches take progressively longer?
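For reference, this is roughly the cat call we ran (output pasted below without the header row):

GET _cat/indices/.kibana_task_manager*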
green open .kibana_task_manager_7.14.1_001 UUID 1 1 15 628919 270.3mb 135.1mb
We could force merge to remove the deleted docs, but it seems a bit silly that an index with 15 docs takes up over 270 MB of disk (135 MB for the primary alone), with search requests taking over 100 ms.
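If we do go that route, the idea would be to only expunge deletes rather than merge down to a single segment (since the index is constantly being updated), something like:

POST .kibana_task_manager_7.14.1_001/_forcemerge?only_expunge_deletes=true

That feels like treating the symptom rather than the cause, though.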
We'd like to know a) whether this is expected behavior and b) whether there's any setting that can be adjusted to improve the performance of this index, if needed. It might not be an issue, but we haven't seen anything similar in our current ES 6.8 cluster and we want to make sure we're good to go before migrating the search clients.