Hi all,
can someone help me with compressing data in Elasticsearch?
I'm storing around 100 GB per day, with a retention of 90 days.
I need to compress the data.
I've already set index.codec: best_compression.
Any tips?
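In case it helps others reading this: as far as I know, index.codec is a static setting, so it can only be set at index creation time (or on a closed index), and existing segments keep the old codec until they are rewritten, e.g. by a _forcemerge. A sketch from the Dev Tools console (the index name here is just an example):

```json
PUT /my-logs-2017.09.22
{
  "settings": {
    "index.codec": "best_compression"
  }
}
```

Setting it in elasticsearch.yml or an index template applies it to newly created indices only; it won't shrink indices that already exist.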
The amount of disk space your data takes up depends on the data itself as well as the mappings used. Have a look at this blog post about how enrichment and mappings affect storage size.
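To illustrate the mappings point, here is a sketch (the template name and index pattern are made up) of an ES 5.x index template that trims per-field overhead, e.g. disabling norms on a text field you filter on but never score:

```json
PUT _template/logs-template
{
  "template": "pippo-*",
  "mappings": {
    "log": {
      "properties": {
        "message": {
          "type": "text",
          "norms": false
        },
        "host": {
          "type": "keyword"
        }
      }
    }
  }
}
```

Mapping identifier-like fields as keyword instead of analyzed text, and disabling features (norms, doc_values) on fields that don't need them, can noticeably reduce index size.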
Hi,
this is the JSON of an entry:
{
"_index": "pippo-2017.09.21",
"_type": "log",
"_id": "AV6jrCxJrdYxfnal_XGx",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2017-09-21T09:01:07.103Z",
"offset": 5181412256,
"level": "DEBUG",
"@version": "1",
"beat": {
"name": "filebeats-test.farm",
"hostname": "filebeats-test.farm",
"version": "5.6.1"
},
"input_type": "log",
"host": "filebeats-test.farm",
"source": "/opt/logs/jboss/server/default/logs/server.log",
"message": "2017-09-21 11:01:07,103 DEBUG [adapters.pspv.PspvHazelcastAdapter] (MULTICAST_SRV(261-0-PSPV)) FixedOddVO cache updated",
"type": "log",
"tags": [
"beats_input_codec_plain_applied"
]
},
"fields": {
"@timestamp": [
1505984467103
]
},
"sort": [
1505984467103
]
}
For my purposes I need to keep "_all" enabled, plus the timestamp, source and message fields.
How can I drop all the other fields?
I've read a lot of documentation, but I can't find a procedure...
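Since your documents appear to come in through Beats/Logstash (judging by the beats_input_codec_plain_applied tag), one option is to drop the unwanted fields before indexing. A sketch of a Logstash filter, assuming the field names from your sample document:

```
filter {
  mutate {
    # Keep @timestamp, source and message; drop the rest.
    remove_field => [ "offset", "level", "@version", "beat", "input_type", "host", "type", "tags" ]
  }
}
```

Fields removed this way never reach Elasticsearch, so they cost no storage; removing them from already-indexed documents would instead require a reindex.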
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.