Logsdb is supposed to save about 65% of disk space, but I got the results below (am I doing something wrong here?)

I tried uploading a simple Windows log dataset via direct upload, about 279 KB in file size, which after ingestion into Elasticsearch came to about 277.58 KB.

windows_2k_simple: 277.58 KB

Same data with the logsdb setting, enabled through index.mode: "logsdb".
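For anyone who wants to reproduce this, here is a minimal sketch of creating an index in logsdb mode (the index name matches mine below; index.mode is a static setting, so it has to be set when the index is created):

```
PUT /windows_2k_logs_db
{
  "settings": {
    "index": {
      "mode": "logsdb"
    }
  }
}
```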

windows_2k_logs_db: 180.97 KB

So overall it saves about 34% of disk space compared to the normal index, not 65%, but that is still a good achievement.

If anyone has tried it and got 50%-65% savings, please share your experience as well.

This is not a valid comparison; the dataset is too small.

You will start seeing a difference when you have tens or hundreds of GB.

Hi @kishorkumar

@leandrojmp is exactly correct...

The hierarchy of an index is:
Index -> Shards -> Segments

When you first start writing data to Elasticsearch, it automatically creates a number of segments to "get ready" to ingest data... at that point there is more overhead than actual data.
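You can see this overhead yourself with the _cat APIs; a quick sketch using the index names from the post above:

```
# List the segments backing each shard of the two test indices
GET _cat/segments/windows_2k_simple,windows_2k_logs_db?v&h=index,segment,size,docs.count

# Compare doc counts vs on-disk size per index
GET _cat/indices/windows_2k_simple,windows_2k_logs_db?v&h=index,docs.count,pri.store.size
```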

You will see the logsdb savings when you scale up... even at 1 GB or more, but definitely at 10 GB and beyond.

To be clear, there is no "guaranteed" savings percentage... it can be anywhere from 30-70% depending on the type and content of the logs. I see mostly 40-60% savings with the datasets I have tested.
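One way to take some of the small-index segment overhead out of a comparison is to force-merge both test indices down to a single segment before measuring (a sketch; only do this on read-only test indices, since force-merging an index that is still being written to is not recommended):

```
# Merge each test index down to one segment so fixed per-segment
# overhead does not dominate the size comparison
POST /windows_2k_simple/_forcemerge?max_num_segments=1
POST /windows_2k_logs_db/_forcemerge?max_num_segments=1

# Then compare the on-disk sizes again
GET _cat/indices/windows_2k_simple,windows_2k_logs_db?v&h=index,pri.store.size
```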

Adding that you also need a trial or Enterprise license to get the maximum compression ratio. Without one you will still see some benefit from logsdb compared to a normal index, but not in the 40-60% range, because synthetic _source is only available with a trial/Enterprise license.
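A sketch of how to check both things: GET _license is the standard license endpoint, while the source-mode setting name (index.mapping.source.mode) is the one documented in recent Elasticsearch versions, so verify it against the docs for your version:

```
# Check the active license type; synthetic _source needs trial or enterprise
GET _license

# See which source mode the logsdb index actually ended up with
# (setting name assumed from recent 8.x/9.x docs)
GET /windows_2k_logs_db/_settings?include_defaults=true&filter_path=**.mapping.source.mode
```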