If a field's data type is changed in the Logstash configuration, is it necessary to create a new index every time for the change to take effect?
Ex. The field "pauseTime" is extracted using a KV filter. When its data type was changed from string to number, the change was applied only to the rollover index, not to the current index, even with continuous log ingestion. This caused a mapping conflict.
However, when I converted another field (extracted using a grok pattern) from string to date, the change was reflected immediately.
Except for supported mapping parameters, you can’t change the mapping or field type of an existing field. Changing an existing field could invalidate data that’s already indexed.
If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
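Concretely, that advice maps to two requests: create the new index with the corrected mapping, then reindex into it. A minimal sketch, where the index names `pause-logs-v1`/`pause-logs-v2` and the `long` type are assumptions for illustration (adjust them to your own index and the numeric type you actually need):

```
PUT pause-logs-v2
{
  "mappings": {
    "properties": {
      "pauseTime": { "type": "long" }
    }
  }
}

POST _reindex
{
  "source": { "index": "pause-logs-v1" },
  "dest":   { "index": "pause-logs-v2" }
}
```

After the reindex completes, point Logstash (or your alias/rollover setup) at the new index so freshly ingested documents use the corrected mapping from the start.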