Dynamic Mapping for Logs

Background:
We have dozens of servers in our system. All of them use log4stash, an extension plugin for log4net, to index exception logs into Elasticsearch. The exception message is logged, and log4stash parses it with a key/value filter to convert the message into key/value fields.
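For illustration, a hypothetical exception message such as "OrderService failed: endpoint=/api/orders timeout=30 retries=3" would be split by the key/value filter into a document roughly like this (the field names are made up for the example):

```json
{
  "Message": "OrderService failed: endpoint=/api/orders timeout=30 retries=3",
  "endpoint": "/api/orders",
  "timeout": "30",
  "retries": "3"
}
```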

In terms of capacity, there is a daily index holding 200 GB of data for all the servers combined.

Problem:
We soon found out that the mapping is inconsistent in this scenario. That makes sense: on different days, different exceptions are thrown, containing different unexpected fields.
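For example (the index names, type, and field below are hypothetical): if one day a parsed message yields a numeric value and another day a string for the same key, the two daily indices end up with conflicting mappings, and Kibana flags the field as a conflict across the index pattern:

```json
POST exception-logs-2018.01.01/logevent
{ "code": 500 }           // "code" is dynamically mapped as long

POST exception-logs-2018.01.02/logevent
{ "code": "ERR_TIMEOUT" } // "code" is dynamically mapped as text/keyword
```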

We do want the mapping to be dynamic, so that Elasticsearch indexes all the fields and we can add relevant filters in Kibana (a template sketch follows below). Ignoring unexpected fields, or storing the whole log message as one giant string field, would make it hard to investigate our exceptions in Kibana. This is a classic trade-off between performance and maintenance (maybe?) on one side, and flexibility on the other.
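If we keep dynamic mapping, one way to make the unexpected fields immediately filterable in Kibana is a dynamic template that maps every new string field as keyword. A minimal sketch, assuming Elasticsearch 6.x, daily indices matching exception-logs-*, and a single mapping type named logevent (both names hypothetical):

```json
PUT _template/exception-logs
{
  "index_patterns": ["exception-logs-*"],
  "mappings": {
    "logevent": {
      "dynamic_templates": [
        {
          "strings_as_keywords": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      ]
    }
  }
}
```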

Question:
We want to know whether this kind of usage is legitimate use or abuse of dynamic mapping. After all, the indices are daily, and the default limit of 1,000 fields per index (index.mapping.total_fields.limit) is far from being reached.
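For reference, that limit can be inspected and raised per index (or set in an index template); the index name below is hypothetical:

```json
GET exception-logs-2018.01.01/_settings?include_defaults=true

PUT exception-logs-2018.01.01/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```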

==============

EDIT:
I found a solution: Elasticsearch's mappings have a dynamic setting.
It is possible to declare "dynamic": false right after the type in the mapping, so new fields are no longer added to the mapping automatically.
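A minimal sketch of what that looks like in an index template, assuming Elasticsearch 6.x and the same hypothetical names as above. With "dynamic": false, fields not listed under properties are still kept in _source but are not indexed or searchable ("strict" would reject such documents instead):

```json
PUT _template/exception-logs
{
  "index_patterns": ["exception-logs-*"],
  "mappings": {
    "logevent": {
      "dynamic": false,
      "properties": {
        "Message":    { "type": "text" },
        "level":      { "type": "keyword" },
        "@timestamp": { "type": "date" }
      }
    }
  }
}
```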
