Elasticsearch tuning

I am trying to reduce the storage size of an index. I tried customising the mapping, shrinking shards, and compression algorithms, but nothing seems to work; the size is not getting reduced. Is there anything else that can be done? Any help would be great.

Hello, please describe the changes you made, and also what you had before.

Adjusting the mappings and changing the compression are two things that will reduce the size of the indices.

How much data do you have?

I have index sizes ranging from roughly 5 to 50 GB of logs. I am using Elasticsearch with Wazuh, so I have changed the mappings for the index, and I tried creating an ILM policy in which the shard count is decreased to one (by default it is three). There are no replicas.
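For reference, a sketch of how the primary shard count (and zero replicas) can be set in an index template; the template name and index pattern below are placeholders, not taken from the thread:

```shell
# Hypothetical index template for Wazuh log indices; adjust the
# template name and index pattern to match your deployment.
# Settings apply to indices created after the template exists.
curl -X PUT "localhost:9200/_index_template/wazuh-logs" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["wazuh-alerts-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}'
```

Note that the primary shard count is a template/creation-time setting; ILM itself only reduces shards on existing indices via a shrink action.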

Yeah, but which mappings did you change? What were the mappings before, and what are they now?

One mapping change that saves space is moving from text to keyword, for example, but this depends on each field; some fields need to be text so they can be used in certain searches.
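A minimal sketch of that mapping change, with an illustrative field name (not from the thread). A keyword field stores a single un-analyzed value, while a text field is analyzed into terms, which costs extra space and only pays off when you need full-text search on that field:

```shell
# Hypothetical index with a field mapped as keyword instead of text.
curl -X PUT "localhost:9200/my-index" \
  -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "agent_name": { "type": "keyword" }
    }
  }
}'
```

Existing indices cannot have a field's type changed in place; the mapping applies to new indices (or after a reindex).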

Also, which compression are you using? If you want to reduce the space, you need to set index.codec to best_compression; this needs to be changed in the template of your indices.
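A sketch of setting that codec in an index template; the template name and pattern are placeholders. best_compression uses DEFLATE for stored fields instead of the default LZ4, trading some indexing/retrieval speed for smaller segments:

```shell
# Hypothetical template enabling best_compression; it takes effect
# for newly created indices, not ones that already exist.
curl -X PUT "localhost:9200/_index_template/wazuh-logs-compressed" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["wazuh-alerts-*"],
  "template": {
    "settings": {
      "index.codec": "best_compression"
    }
  }
}'
```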

Reducing the number of shards will have little to no impact on the space.

What is the total amount of data in your cluster? You basically won't see any changes on small indices.
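To see where the space actually goes before and after a change, the cat indices API can list per-index store sizes sorted largest first:

```shell
# List indices with primary/replica counts, doc counts, and on-disk
# size, sorted by store size descending.
curl -X GET "localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size&s=store.size:desc"
```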