Thanks for the reply. As you can see in my previous comment, even the elapsed filter will not work.
Does this mean that some of the community filters will need to be modified?
The problem isn't in the elapsed filter. You can use Logstash with dotted fields with no problems. You just can't send an event to Elasticsearch where any field names contain dots.
You'll need to reformat your data, whether with mutate filters (adding and removing fields as necessary), or at the source.
We've bandied about a "de-dot" filter or codec to cover instances such as these. Perhaps it's more pressing than we thought.
I will try to figure out a fix. The thing is that I am currently using the elapsed plugin, and this plugin adds the following fields to the event:
elapsed.time
elapsed.timestamp_start
Elasticsearch doesn't like this and the events will not be indexed. I presume I am not the only person using this filter.
As for the other fields, I will use mutate and ruby to create an object out of the field.
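Something along these lines is what I have in mind; a minimal sketch for the two elapsed fields above, using Logstash's [nested][field] reference syntax (the nested target names are my own choice):

```
filter {
  # Rename the literal dotted field names into a nested "elapsed" object,
  # so the documents sent to Elasticsearch contain no dots in field names.
  mutate {
    rename => {
      "elapsed.time"            => "[elapsed][time]"
      "elapsed.timestamp_start" => "[elapsed][timestamp_start]"
    }
  }
}
```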
I'm going to start work on an addition to the mutate filter right now to "de-dot" fields, but those fields will have to be named.
A de-dot, "shotgun-approach" filter will come afterwards. It will iterate through all fields in the event to catch and rename any that contain dots. This will likely be an expensive operation, since it touches every field, but I expect there will be some users who don't know in advance which of their fields might have dots. This solution will be for them.
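As a rough sketch of the shape such a pass might take (illustrative only, not the eventual implementation; it leans on the ruby filter and assumes the event API of the time, where event.to_hash exposes the live event hash):

```
filter {
  ruby {
    code => "
      # Recursively walk every field in the event and replace dots in
      # field names with underscores. Touching every field on every
      # event is what makes this approach expensive.
      dedot = lambda do |hash|
        hash.keys.each do |k|
          dedot.call(hash[k]) if hash[k].is_a?(Hash)
          hash[k.gsub('.', '_')] = hash.delete(k) if k.include?('.')
        end
      end
      dedot.call(event.to_hash)
    "
  }
}
```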
Hi Aaron,
We also have the problem with dots in our fields, and unfortunately we cannot change the source.
Are you still thinking about a shotgun approach (maybe in the mutate filter)? It would be very helpful to us.
I just came across this problem too, as some dynamic fields are being added with a dot in the field name. After attempting to use the mutate filter with no luck, I ended up using the ruby filter. I'll paste it below in case it's of use to others.
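(The pasted snippet did not survive in this copy of the thread; judging from the reply below, it was a ruby filter roughly like the following. This is a reconstruction, not the original; note that it uses sub, which replaces only the first dot in each name.)

```
filter {
  ruby {
    code => "
      # Swap the dot in each top-level field name for an underscore.
      # Uses the event API of the time (event[k] / event.remove);
      # newer Logstash versions would use event.get / event.set.
      event.to_hash.keys.each do |k|
        next unless k.include?('.')
        event[k.sub('.', '_')] = event[k]
        event.remove(k)
      end
    "
  }
}
```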
Hi,
Thank you very much, it works.
Just a little issue with more than one dot in a field: the ruby code replaces only the first dot in a field name.
But that is not really a problem, because I insert the ruby filter twice.
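For reference, the one-pass fix is to switch the call from sub to gsub: Ruby's String#sub replaces only the first occurrence, while String#gsub replaces all of them.

```
# String#sub replaces only the first match; String#gsub replaces all:
'elapsed.time.ms'.sub('.', '_')   # => "elapsed_time.ms"
'elapsed.time.ms'.gsub('.', '_')  # => "elapsed_time_ms"
```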
Thanks! However, it does look like this filter might have a performance impact, given what it does.
I reckon we will be reindexing (with field name changes) after all. Unfortunately this also affects another cluster which writes about 10 GiB of data every day. Ouch. (We might keep a "vintage"-mode cluster / parallel software for that, though.)
However, the question still remains: why make this sort of breaking change when it might have been enough to state that mappings like the ones posted in this gist https://gist.github.com/jpountz/8c66817e00a322b81f85 cannot be mixed?
Would it not have been better to try to fix the underlying cause? (I cannot judge the feasibility of that, though!)