Dear Elastic community,
we are facing some issues with Kibana. The problem started some time ago after an update/migration of our Elastic Stack. We had a basic ELK server running version 7.3.1.
We migrated the indices (via a reindex) to new dedicated Elasticsearch servers that should be much faster than the previous one. The new stack was on version 7.14.1. Since then, the Logs Stream under Observability in Kibana has been significantly slower.
A few days ago I upgraded to 7.17.2 and then to 8.1.2, but we still face the same issues.
We face the following issues:
- Loading the /app/logs/stream page and seeing the first log messages takes ~20s
- When scrolling up in the Logs Stream, it sometimes looks like Kibana runs into a timeout: the "Loading new entries" indicator disappears and nothing happens after that. We need to reload the page to see the earlier log messages.
Regarding the indices we are working with:
Almost all indices are created by Logstash. We configured Logstash/Filebeat to put all of the log messages from our Docker containers into Elasticsearch, with a new index for every container.
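The relevant part of our Logstash output looks roughly like this (written from memory, so the exact hosts and container field name may differ):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    # One index per Docker container
    index => "filebeat-%{[container][name]}"
  }
}
```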
I know this is most likely not the best way to do it, but it was configured before I started at this company. As far as I can tell from my research, we don't hit a critical count of indices, and the indices are still reasonable in size: the biggest ones are ~50 GB, and most are <10 MB. We have 384 Filebeat indices.
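For reference, this is roughly how I checked the index count and sizes (Kibana Dev Tools; the `filebeat-*` pattern is an assumption about our naming):

```
# List all filebeat indices with doc count and size, largest first
GET _cat/indices/filebeat-*?v&h=index,docs.count,store.size&s=store.size:desc
```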
I configured the search slow log in Elasticsearch to analyze the problem, but I wasn't able to find any queries that take a long time.
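For completeness, this is roughly how I enabled the slow log (the thresholds shown are just the values I picked, not recommendations):

```
PUT /filebeat-*/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms"
}
```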
Does anyone have an idea what I can check next?