We can see that in a specific 12-hour period we have 1.3 million log entries, and we want to see how much disk space they consumed. We thought we could just add 'bytes' as a metric to the graph, but 'bytes' appears to be greyed out.
How could we obtain this information in another fashion? Could we write something in the dev console?
That will show the size of the entire index, so you will need to multiply by the ratio of your 12 hours of docs to the total docs.
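As a minimal sketch of what you could run in the dev console, assuming an index named my-logs-index (a placeholder, substitute your own index or data stream name):

// total size on disk and total doc count for one index (my-logs-index is a placeholder)
GET my-logs-index/_stats/store,docs

The response includes _all.total.store.size_in_bytes and _all.total.docs.count, so the 12-hour estimate is roughly that store size multiplied by (docs in your 12 hours / total docs).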
You can also simply run
GET _cat/indices?v&s=store.size:desc
That will show the size of each index on disk, as well as its document count.
You can run Discover to get the number of documents in your 12-hour window, then do the division.
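If you would rather stay in the dev console than use Discover, a count query with a range filter on the timestamp field gives the same number. This is only a sketch: the index name, the @timestamp field, and the now-12h window are assumptions, so swap in your own index and the absolute start/end times of your specific 12-hour period:

// doc count for the last 12 hours (replace now-12h/now with your own window)
GET my-logs-index/_count
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-12h", "lt": "now" }
    }
  }
}

The estimate is then store size multiplied by (12-hour count / total docs); for example, if the 12-hour count is half of the total docs, the slice is roughly half of the index size on disk.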
Remember, if you have a replica, make sure to use store.size, not pri.store.size.
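To see both values side by side, you can ask the cat API for just the columns you care about:

GET _cat/indices?v&h=index,docs.count,pri.store.size,store.size&s=store.size:desc

store.size includes the replica copies, while pri.store.size counts only the primary shards.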