Kibana version: 7.5.2
Elasticsearch version: 7.5.2
APM Server version: 7.6.0
APM Agent language and version: Java 1.12.0
Our ELK stack and APM Servers are deployed with the Elastic operator and Helm charts on Kubernetes.
Is there anything special in your setup? We use an AWS load balancer in front of the APM Servers.
We have 6 Elasticsearch data nodes, each with 15 vCPUs and 30 GiB of RAM, with a 15 GiB heap.
We are using EBS volumes, each with 800 GB of storage and 3,000 provisioned IOPS.
Description of the problem including expected versus actual behavior:
Our APM transaction data doesn't seem to compress very well. According to the sizing guide (https://www.elastic.co/guide/en/apm/server/current/sizing-guide.html):
"Indexing 100 unsampled transactions per second for 1 hour results in 360,000 documents. These documents use around 50 Mb of disk space."
We index around 35,000 transactions per second, so each hour we send around 126 million documents. At the moment the index holds 475 million documents, which by the guide's numbers should take around 66 GB of disk, but the primary index is at 180 GB. This is not scalable for us.
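For reference, here is how we arrived at the ~66 GB estimate, just scaling the guide's reference point up to our index size (a quick Python check; the variable names are ours):

```python
# Scale the sizing guide's reference point (360,000 docs ~= 50 MB)
# up to our current index size.
docs_in_index = 475_000_000
ref_docs, ref_mb = 360_000, 50

expected_gb = docs_in_index / ref_docs * ref_mb / 1000
print(f"expected: ~{expected_gb:.0f} GB, actual: 180 GB")
# expected: ~66 GB, actual: 180 GB
```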
Please let us know what we can do to cut down on disk space.
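One lever we have seen suggested is switching the APM indices to the `best_compression` codec. Below is a minimal sketch of how we would apply it, assuming an elasticsearch-py 7.x client and the default `apm-*` index naming; the template name, order, and endpoint are illustrative, not something we have validated against the shipped APM templates:

```python
# Minimal sketch: apply best_compression (DEFLATE) to future APM indices
# via a high-order legacy index template. The template name and order are
# illustrative; "apm-*" assumes the default APM index naming.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder endpoint

es.indices.put_template(
    name="apm-best-compression",
    body={
        "index_patterns": ["apm-*"],
        "order": 10,  # must outrank the stock apm-* template's settings
        "settings": {"index.codec": "best_compression"},
    },
)
# Note: the codec only affects newly written segments, so it applies to
# new indices (or after a force-merge), not to existing data in place.
```

If there is a supported way to do this through the shipped APM templates, or other recommended levers such as transaction sampling, we would appreciate pointers.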
- Thank you