Field name cannot contain '.'

I'm facing this issue as well. I can't even get my test cluster to start now.

Likely root cause: MapperParsingException[Field name [listing.requestAmount] cannot contain '.']

I'm going to start work on an addition to the mutate filter right now to "de-dot" fields, but those fields will have to be named.
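For fields you can name up front, the existing rename option in mutate can already handle the one-off case (a sketch, using the field name from the error above):

filter {
  mutate {
    rename => { "listing.requestAmount" => "listing_requestAmount" }
  }
}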

A de-dot, "shotgun-approach" filter will come afterwards. It will iterate through all fields in the event to catch and rename them. This will likely be an expensive operation, since it touches every field, but I expect there will be some users who don't know all of the fields which might contain dots. This solution will be for them.

@radu.stefanache We just published version 3.0.0 of logstash-filter-elapsed. This is a breaking change.

  • All dots in field names and tags have been replaced by _ (an underscore).
  • This means you may need to change some conditionals in your Logstash configuration (see the example below this list).
  • Some of your Kibana dashboards and/or queries may require changes.
  • Any other outputs used will have to be adapted to use the new field names and tags.
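
For example, a conditional that tested one of the old dotted tags would need updating along these lines (the tag name here is illustrative; check the plugin documentation for the exact names):

# before 3.0.0
if "elapsed.match" in [tags] { ... }

# after 3.0.0
if "elapsed_match" in [tags] { ... }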

You can upgrade to this version by doing:

bin/plugin update logstash-filter-elapsed

from your Logstash directory.


Hi Aaron,
We also have the problem with dots in our fields, and unfortunately we cannot change the source.
Are you still thinking about a shotgun approach (maybe in the mutate filter)? That would be very helpful to us.

Many thanks

I just came across this problem too, as some dynamic fields are being added with a dot in the field name. After attempting to use the mutate filter with no luck, I ended up using the ruby filter. I'll paste it below in case it's of use to others.

filter {
  ruby {
    code => "
      # Replace the first '.' in each top-level field name with '_'
      event.to_hash.keys.each { |k| event[ k.sub('.','_') ] = event.remove(k) if k.include?'.' }
    "
  }
}

Hi,
Thank you very much, it works.
Just one little issue when a field contains more than one dot: the Ruby code replaces only the first dot in a field name.
But that is not really a problem, because I insert the ruby filter twice.

Many thanks,
Juergen

Hi,

Ah yes, sorry, I don't really know Ruby that well. The following will replace all dots:

ruby {
  code => "
    # gsub (unlike sub) replaces every '.' in the field name
    event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
  "
}

Best regards,
Gary


Hi,

I just came across the same problem:

"type":"mapper_parsing_exception","reason":"Field name [data.0.count] cannot contain '.'"}

However, this is a HUGE show-stopper for us, since we are basically "flattening" JSON data before inserting it into Elasticsearch.

e.g.:
{
  "foo": {
    "bar": "something" 
  }
}

becomes

{ "foo.bar": "something" }

This is VALID JSON. You are essentially breaking VALID JSON input.
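
For context, the kind of flattening we do can be sketched in plain Ruby like this (the helper name is ours, not from any library):

def flatten(hash, prefix = nil)
  hash.each_with_object({}) do |(k, v), out|
    key = prefix ? "#{prefix}.#{k}" : k.to_s
    if v.is_a?(Hash)
      out.merge!(flatten(v, key))   # recurse into nested objects, joining keys with '.'
    else
      out[key] = v
    end
  end
end

flatten({ "foo" => { "bar" => "something" } })
# => { "foo.bar" => "something" }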

I am really upset about this. What the heck is the reasoning behind this?

OK, I just found the responsible commit for this:

So gathering from this information: this change won't affect existing indices / field names?


The new de_dot filter can turn dotted fields into nested fields.

I'm not sure how it affects existing field names, but you would not be able to re-index that data without changing the field names.
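
For reference, a minimal de_dot configuration looks something like this (the fields and nested options are from the plugin's documentation; verify against your installed version, and the field name is just the one from earlier in this thread):

filter {
  de_dot {
    nested => true                        # turn "a.b" into the nested field [a][b]
    fields => ["listing.requestAmount"]   # restricting to known fields keeps the cost down
  }
}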

Thanks! However, it does look like this filter might have a performance impact, given that it has to walk every field.

I reckon we will be reindexing (with field name changes) after all. Unfortunately, this also affects another cluster which writes about 10 GiB of data every day. Ouch. :wink: (We might keep a "vintage-mode" cluster / parallel software for that, though.)

However, the question still remains: why this sort of breaking change, when it might have been enough to state that mappings like those posted in this gist https://gist.github.com/jpountz/8c66817e00a322b81f85 cannot be mixed?

Would it not have been better to try and fix the underlying cause? :smile: (I cannot judge the feasibility of that though!)

@Christopher_Blasnik, because the ability was removed in Elasticsearch 2.0.

See the breaking changes section of the Elasticsearch documentation, under the heading "Field names may not contain dots".

I'm assuming this impacts all the geoip items?

Kibana 3.x represents object structures (sub-fields) with dotted notation, but they are still object structures within Elasticsearch.

The GeoIP filter (as in 1.5.x and 2.x) sends an object, not dotted fields.
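
In other words, a GeoIP-enriched event carries a nested object along these lines (abridged example):

{
  "geoip": {
    "country_name": "United States",
    "city_name": "San Francisco"
  }
}

Kibana may display these as geoip.country_name and so on, but in Elasticsearch they are sub-fields of an object, not field names containing dots.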

Ah... ok... that helps then... thank you... I was worried :slight_smile:

Our use case where "dots" may appear in field names is after the kv {} filter runs: we don't always know the field names that log sources are sending us. The Ruby code works for us, but an "official" solution would be nice to have.
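
For reference, this is roughly what we run (the kv source field is illustrative; the Ruby pass is Gary's from earlier in the thread):

filter {
  kv { source => "message" }   # field names come straight from the log line, so dots can appear
  ruby {
    code => "
      event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
    "
  }
}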

@bblank There has been some internal discussion on how to better handle dotted fields (in the Elasticsearch team itself, not the Logstash team), but the dust has not yet settled. For the foreseeable future, the official solution is to use the aforementioned de_dot filter.

This Ruby solution works "most" of the time for us, but I just found some fields which have "[ ]" in the name, and for those the "dots" are not being replaced. Any Ruby coders out there willing to help? E.g. field name: ad.key[12]="some text value"

Do you need the brackets? I would think those would be undesirable. Look into the mutate filter's gsub option; it will allow you to strip square brackets.
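
An untested sketch that might help, since renaming keys on the hash directly sidesteps Logstash's bracket-aware field-reference parsing (this assumes event.to_hash returns the live event hash, as the Ruby event in Logstash 2.x did):

ruby {
  code => "
    # Replace dots and strip '[' and ']' from top-level field names
    h = event.to_hash
    h.keys.each do |k|
      clean = k.gsub('.', '_').delete('[]')
      h[clean] = h.delete(k) if clean != k
    end
  "
}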

I used this ruby filter instead of de_dot because my dotted fields are nested under params and I don't know what they are in advance:

filter {
  ruby {
    code => "
      # De-dot only the keys nested under the top-level 'params' field
      params = event['params'] && event['params'].to_hash
      params.keys.each { |k| params[ k.gsub('.','_') ] = params.delete(k) if k.include?'.' } unless params.nil?
    "
  }
}

I am new to ELK, so I may be wrong, but gsub only works on the field contents, not the name of the field. Right?