Hi Team,
My Elasticsearch cluster receives around 24,000 hits every 15 minutes, so the storage fills up quickly. Is there any way to optimize or compress the data to reduce storage usage in ELK?
Have you looked at the official docs around storage optimisation?
How do we verify the results of that approach?
I do not understand what you mean. Have you gone through the recommendations I linked to?
It may help if you provide some more information about your cluster size and configuration and the exact problem you are facing.
How can I reduce the size of an index without deleting data, and manage the existing storage?
That is exactly what the guide I linked to covers. If it does not address your issue, you need to explain your problem in more detail. If you are not willing to properly describe the issue you are facing and why the guide does not help, I am afraid I cannot help.
How can I validate the recommendations in the official storage optimisation docs? I work with a production cluster, so if I apply those changes, how can I measure how much compression is achieved for an index?
You need to test it as it will vary based on your data and requirements. I would recommend using a test environment and copy over a large index from production, e.g. using snapshot/restore. You can then make changes to mappings and the structure of documents through index templates and ingest pipelines and reindex into new indices that use these new templates. This will allow you to evaluate how much space these various measures save you.
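A minimal sketch of that workflow, assuming a hypothetical snapshot repository `my_repo`, snapshot `snap-1`, and index names `my-prod-index` / `my-test-index` (none of these come from the thread); the `best_compression` codec shown is one of the measures the storage optimisation guide covers:

```
# Restore a copy of the production index into the test cluster
# (assumes the snapshot already exists in the registered repository)
POST _snapshot/my_repo/snap-1/_restore
{
  "indices": "my-prod-index"
}

# Create a target index that uses the more aggressive compression codec
PUT my-test-index
{
  "settings": {
    "index.codec": "best_compression"
  }
}

# Reindex the restored data into the new index
POST _reindex
{
  "source": { "index": "my-prod-index" },
  "dest":   { "index": "my-test-index" }
}

# Compare the on-disk size of the two indices
GET _cat/indices/my-prod-index,my-test-index?v&h=index,store.size
```

Comparing `store.size` before and after gives a rough measure of the space saved; the actual savings will depend on your data and mappings.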
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.