And the new field should be an integer. Almost everything works as expected: the new field is added and the old field is deleted, but the new field always ends up as text.
This configuration worked without problems before Logstash 5.0. We want to update to the newer version, but we are stuck on this configuration problem. Does anyone have an idea how to solve this?
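The original filter is not quoted in this excerpt, but the pattern described above roughly corresponds to the following sketch (field names are hypothetical). Note that add_field is only applied at the end of a mutate block, so doing the conversion in a separate mutate is the safer variant:

filter {
  mutate {
    # sprintf-style add_field always produces a string value
    add_field => { "duration_new" => "%{duration_old}" }
  }
  mutate {
    convert      => { "duration_new" => "integer" }   # turn the copied value into an integer
    remove_field => [ "duration_old" ]                # drop the old field afterwards
  }
}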
and the second one, which we call the summary index, contains documents that look like:
"type" => "SUMMARY",
"@timestamp" => 2016-07-18T13:48:09.614Z,
"@version" => "1",
"duration" => {
"valueType" => "long",
"id" => "3",
"long" => 79 // this attribute is now inserted as String
}
If another entry with the same ID is logged, the first index will get another document, but in the second index the existing document will be updated. Normally when this happens the name is different, so another structure is created.
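Such an update into the second index is typically configured through the elasticsearch output; a minimal sketch, assuming an index name of summary and an id field (the actual output section is not shown here):

output {
  elasticsearch {
    hosts         => ["localhost:9200"]
    index         => "summary"           # assumed name of the second index
    document_id   => "%{id}"             # same ID -> the existing document is updated
    action        => "update"
    doc_as_upsert => true                # create the document if it does not exist yet
  }
}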
And exactly this functionality worked with Logstash < 5.0 using this code:
valueLong is already an integer field, so I don't know why you're doing the conversion in the first place. When you say
... but no change at all. duration.long is still a text.
are you saying that the field in Elasticsearch (and, by extension, Kibana) is a string field? In that case you have to reindex the current index or create a new index. The existing mapping won't change just because you're starting to submit documents with an integer [duration][long] field.
In my example I wanted to show you what I actually expect in Elasticsearch.
I delete all indexes before every test.
I have now tested without the conversion, but there is no change. Even though the valueLong field is an integer, the duration.long field is a string in the second index.
I still think you have an index template that is applied to your new indexes every time. As far as I know, Elasticsearch won't default to any dynamic templates, so the mappings you have with dynamic templates must come from somewhere.
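If that is the case, it should show up in how the elasticsearch output manages its template; a minimal sketch of the relevant options (index name and template path are assumptions):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "summary"                    # assumed index name
    # By default the plugin installs and keeps its own index template.
    # Either disable that entirely ...
    manage_template => false
    # ... or ship your own mapping and overwrite the installed template:
    # template           => "/etc/logstash/templates/summary.json"   # assumed path
    # template_name      => "summary"
    # template_overwrite => true
  }
}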
I do nothing in Elasticsearch; I suppose that this is Logstash's job. I just have a log file and Logstash with the configuration that I showed you before. What is strange is that I had no problems before 5.0, so I suppose that something has changed, but I don't know where.
What is also strange is that I do a mutate convert earlier for valueLong, and it works.
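A detail worth double-checking here (an assumption, since the filter itself is not quoted): a nested field has to be referenced with Logstash's bracket syntax, otherwise the convert silently skips it:

filter {
  mutate {
    convert => { "valueLong" => "integer" }        # top-level field: works as described
  }
  mutate {
    # "duration.long" would be treated as a literal field name containing a dot;
    # the nested field needs the [parent][child] reference:
    convert => { "[duration][long]" => "integer" }
  }
}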
I have now found the problem in my configuration, but not the solution. I am copying just the important part of the configuration here (the rest is exactly the same):