Hi,
I am absolutely new to ELK and haven't been able to solve my problem via Google... I have already figured out where the problem is, but I'm missing some basic knowledge to solve it...
My problem: I cannot filter on a float value (0 <= confidence < 0.5). It seems to be handled as an integer.
The reason: the field is mapped as long instead of float. I can see this in the mapping section of the Elasticsearch index.
This gives me: mapper [data.nlu.intent.confidence] cannot be changed from type [long] to [float]
Okay, so how do I solve this? And how do I make it persistent?
Currently only one application sends its data to Logstash, which forwards it all. Why is there a new index every day? Where does it come from?
Sorry for the newbie problem. I am not sure where to start...
As you already said, the field is already mapped to a long in the existing index. This is likely due to having sent a first document where the automatic mapping rules detected the field value as being numeric, but not floating point. In this case the field is mapped as a long.
Mappings for existing fields cannot be changed within an index; to fix data that is already indexed you would have to reindex into a new index that has the correct mapping. I'd recommend creating a complete mapping and storing it in an index template before sending the first document to an index.
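As a rough sketch (the template name and the `logstash-*` index pattern are assumptions — adjust them to whatever your daily indices are actually called), an index template that forces the field to float could look like this, run from Kibana Dev Tools:

```json
PUT _index_template/nlu-confidence
{
  "index_patterns": ["logstash-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "nlu": {
              "properties": {
                "intent": {
                  "properties": {
                    "confidence": { "type": "float" }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

Note the nesting: the dotted field name `data.nlu.intent.confidence` is expressed as nested `properties` objects in the mapping. This only applies to indices created after the template exists, which fits the daily-index pattern — tomorrow's index will pick it up. (On Elasticsearch versions before 7.8 the equivalent would use the legacy `_template` API instead.)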
While you can configure Logstash to emit the number in either integer or floating point format, I'd rather make sure that the correct mapping is part of an index template. Then, regardless of what the source sends, the correct data type will be applied for the field.
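For completeness, the Logstash side would be a mutate filter converting the field before output — a minimal fragment, assuming the field really sits at that path in the event:

```
filter {
  mutate {
    convert => { "[data][nlu][intent][confidence]" => "float" }
  }
}
```

But again: this only changes what Logstash sends, it does not change how Elasticsearch maps it, so the index template is the part that actually fixes your problem.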