Optimizing an Elasticsearch index

Are there any articles on how to optimize an Elasticsearch index in 6.1? Currently, with the default settings and 600 MB of syslog data, my index size is around 1.7 GB, which is much larger than the actual file size. I disabled the `_all` field, removed unwanted fields created by the "geoip" and "agent" parsing, and enabled `best_compression`, which brought the index size down from 1.85 GB to 1.72 GB. Is there any other setting that can reduce the index size?
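For reference, here is roughly how those two changes can be applied at index creation time in 6.x. This is a sketch: the index name `syslog-optimized` and the type name `doc` are placeholders, not taken from the original post.

```
PUT syslog-optimized
{
  "settings": {
    "index.codec": "best_compression"
  },
  "mappings": {
    "doc": {
      "_all": { "enabled": false }
    }
  }
}
```

Note that `index.codec` can only be set when the index is created (or on a closed index), and `_all` is already disabled by default for indices created in 6.0+.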

What does your mapping look like?

Are you using the default dual text/keyword mapping, or have you optimised this? How much extra data are you adding to the event through enrichment, e.g. geoip and user agent processing? Have you run a force merge on the index to reduce the number of segments to 1?
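A force merge on a read-only index can be run like this (the index name is a placeholder):

```
POST syslog-optimized/_forcemerge?max_num_segments=1
```

Merging down to a single segment can reclaim space from deleted documents and duplicated data structures, but should only be done on indices that are no longer being written to.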

Here's what my index mapping looks like:

Geo IP is definitely being added to the data.

It looks like you are using the default mapping for most fields, which means that every text field is stored both in analysed form for free-text search and as a keyword. This gives you a lot of flexibility, but takes up more space. You can read about how to address this in the documentation.
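As an illustrative sketch (the index, type, and field names here are hypothetical, not from the post above): fields that are only ever used for exact matching or aggregations can be mapped as `keyword` only, dropping the analysed `text` copy, while fields you actually search with free text keep `text` without the `.keyword` sub-field.

```
PUT syslog-optimized
{
  "mappings": {
    "doc": {
      "properties": {
        "hostname": { "type": "keyword" },
        "message":  { "type": "text" }
      }
    }
  }
}
```

Compared with the default dynamic mapping, this stores each string once instead of twice, which is typically where most of the index-size reduction comes from.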

Thanks, that helped! The index size dropped by around 20%.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.