Not splitting log data fields into terms (logstash and kibana related questions)

(Risto Vaarandi) #1

hi all,
when log data are stored in elasticsearch, field values are split into terms,
which breaks terms panels in Kibana. This issue is not new and has been
discussed before, and logstash 1.3 includes a fix for it.

Before logstash 1.3 was released, I was using the config/default-mapping.json
file for creating dynamic mappings for some log data fields, for example:

"default" : {
"dynamic_templates" : [
"template_http_domain" : {
"match" : "http_domain",
"mapping" : {
"type" : "string",
"index" : "not_analyzed"

However, I had issues with this approach, because occasionally (about once
every 1-2 weeks) the content of this file was ignored and, for some reason,
older already-deleted mappings were activated. For example, one of my
November configurations used a mapping for the field 'wwwdomain' which
still gets activated from time to time, although that field name was erased
from config/default-mapping.json long ago.
My first question is: are the old mappings from config/default-mapping.json
kept somewhere in Elasticsearch? If so, is there a way to delete them?
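For reference, this is how I have been checking what mappings are actually live. As far as I understand, mappings are stored per index and are fixed when the index is created, so a field removed from config/default-mapping.json can survive inside indices created while the old file was active. (The host, port, and index names below are just illustrative for my setup.)

```shell
# Show the mappings Elasticsearch is actually using for one daily index
curl -XGET 'http://localhost:9200/logstash-2013.12.10/_mapping?pretty'

# Mappings live inside each index, so deleting the stale index
# removes its old mapping along with it:
curl -XDELETE 'http://localhost:9200/logstash-2013.11.01'
```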

My second question is related to logstash 1.3. I really like the new
feature for keeping log data fields unsplit. However, I am unfortunately
not writing into elasticsearch from logstash only, but also with rsyslog.
Does the logstash index template also influence log data which are written
into Elasticsearch with rsyslog, provided I am using the same index names
for both tools?
(If yes, that would help me to get rid of config/default-mapping.json.)
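To frame the question: my understanding is that an index template registered through the _template API is applied by Elasticsearch itself whenever a newly created index matches the template pattern, so it should not matter which client (logstash or rsyslog) writes the documents. A minimal sketch of what I have in mind (the template name and the catch-all dynamic template are illustrative, not the exact template logstash 1.3 ships):

```shell
# Register a template matching all logstash-* indices; Elasticsearch
# applies it at index creation time, regardless of the writing client.
curl -XPUT 'http://localhost:9200/_template/logstash_unsplit' -d '{
  "template" : "logstash-*",
  "mappings" : {
    "_default_" : {
      "dynamic_templates" : [
        {
          "string_fields" : {
            "match" : "*",
            "match_mapping_type" : "string",
            "mapping" : { "type" : "string", "index" : "not_analyzed" }
          }
        }
      ]
    }
  }
}'
```

Is this reading correct, i.e. would rsyslog-written documents in the same indices pick up the same mappings?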

kind regards,

You received this message because you are subscribed to the Google Groups "elasticsearch" group.
