Indexes created automatically by the system

Hello everyone,

We have an infrastructure that consists of several Filebeats sending logs to a Logstash instance on another machine, where a Metricbeat is also running to monitor Logstash.

We have noticed that a series of indices have been generated automatically, and we have not been able to find any information about them. Could you please tell us what information is stored in these indices and whether it is possible to remove them without causing errors?

  • .internal.alerts-observability.logs.alerts-default-
  • .metrics-endpoint.metadata_united_default
  • .reporting-
  • .slm-history--
  • .transform-internal-
  • .transform-notifications-
  • ilm-history--
  • logs-index_pattern_placeholder
  • metrics-endpoint.metadata_current_default
  • metrics-index_pattern_placeholder

If you need any additional clarification or more information, please let me know.

Regards, and thank you very much,
David

Welcome to our community! :smiley:

These are system-created and managed indices, so I wouldn't worry about them at all.
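
If you want to see how much space they actually take before worrying about them, something like the following should do it. This is only a rough sketch using the `_cat/indices` API from Python's `requests`; the URL, the lack of authentication, and the index patterns are assumptions you will need to adapt to your own cluster:

```python
# Rough sketch: list the size and doc count of the automatically created indices.
# ASSUMPTIONS: cluster reachable at http://localhost:9200 with no authentication;
# adjust the URL, auth, and index patterns for your own deployment.
import requests

patterns = ".internal.alerts-*,.slm-history*,.transform-*,.reporting*,ilm-history*,metrics-*,logs-*"

resp = requests.get(
    f"http://localhost:9200/_cat/indices/{patterns}",
    params={
        "format": "json",
        "h": "index,health,docs.count,store.size",
        "expand_wildcards": "all",  # include hidden indices as well
    },
)
resp.raise_for_status()

for row in sorted(resp.json(), key=lambda r: r["index"]):
    docs = row.get("docs.count") or "-"
    size = row.get("store.size") or "-"
    print(f'{row["index"]:<60} {docs:>12} {size:>12}')
```

In most cases they stay tiny, but it's a quick way to confirm which ones are actually consuming space on your cluster.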

Can you clarify what version of the stack you are running? It should help us understand these a bit better.

I think the main issue is that there is not much information about those system indices: what they are used for and whether they can be removed without any risk of breaking something.

I asked a similar question a couple of months ago about the slm-history and ilm-history indices, but got no answer.

One of the most repeated recommendations from Elastic is to avoid having lots of small indices, and yet Elasticsearch itself keeps creating these small indices without any explanation of whether they are safe to remove or not.
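
For what it's worth, in the clusters I've looked at, the ilm-history and slm-history indices are covered by built-in ILM policies, so they do get deleted eventually. A quick way to check which retention applies on your own cluster is to list the ILM policies and their delete phase. Here is a minimal sketch, assuming an unauthenticated cluster on localhost:9200 (adapt host and auth as needed):

```python
# Sketch: list each ILM policy and the min_age of its delete phase, to see
# what retention (if any) applies to indices like ilm-history-* and .slm-history-*.
# ASSUMPTION: http://localhost:9200 without authentication.
import requests

resp = requests.get("http://localhost:9200/_ilm/policy")
resp.raise_for_status()

for name, body in sorted(resp.json().items()):
    phases = body["policy"]["phases"]
    delete_phase = phases.get("delete")
    retention = delete_phase.get("min_age", "0ms") if delete_phase else "no delete phase"
    print(f"{name:<45} delete after: {retention}")
```

On the versions I've used, the history indices show up under policies with names like ilm-history-ilm-policy and slm-history-ilm-policy with a 90-day delete phase, but please verify this against your own cluster and version rather than take it from memory.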

It would be nice to have some information in the documentation about what is safe to remove, depending on which features you use.

I agree. My suggestion would be to raise an issue on GitHub so that it can be referred to from here, as it carries more weight.

Thanks for the welcome and for the answers :slight_smile:

Can you clarify what version of the stack you are running? It should help us understand these a bit better.

To clarify the scenario, here are the versions used in our infrastructure:

  • Filebeat: 7.17.0
  • Logstash: 7.17.3
  • Metricbeat: 7.17.0
  • Elasticsearch deployment: 7.17.1

I think the main issue is that there is not much information about those system indices: what they are used for and whether they can be removed without any risk of breaking something.

The problem with these indices is that, since we cannot determine what they are used for or whether they can be deleted, they grow uncontrollably and we cannot set an appropriate retention period for them.

This causes resource usage and shard counts to multiply, which leads to performance problems.
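
To illustrate, this is roughly how we have been checking whether any lifecycle is attached to those indices at all. It is only a sketch: the host, the missing authentication, and the index patterns are placeholders for our environment, and depending on the version you may need to list hidden indices by their exact names:

```python
# Sketch: ask ILM which of the automatically created indices are managed,
# and by which policy and phase.
# ASSUMPTIONS: http://localhost:9200 without authentication; patterns are examples.
import requests

patterns = ".slm-history*,.transform-*,ilm-history*,metrics-*,logs-*"

resp = requests.get(f"http://localhost:9200/{patterns}/_ilm/explain")
resp.raise_for_status()

for index, info in sorted(resp.json()["indices"].items()):
    if info.get("managed"):
        print(f"{index:<60} policy={info['policy']} phase={info.get('phase')}")
    else:
        print(f"{index:<60} not managed by ILM")
```

Anything that comes back as not managed is what worries us, because those are the indices that will keep growing until we delete them by hand.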

I appreciate any possible help.

All the best,
David

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.