I am using elasticsearch for some batch processing.
The general gist of the steps involved is:
1. Fetch each document from index A.
2. Search index B, filtering by attributes of the document returned in step 1.
3. Process the data returned by step 2 to compute a metric "mymetric".
4. Update the document returned in step 1 by adding a new field "mymetric".
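The loop above can be sketched as follows. This is a minimal local simulation: plain dicts and lists stand in for the two indices, and the index names and field names (`user`, `value`) are made up for illustration. In real code, step 1 would be a scroll/search over index A, step 2 an `es.search` on index B, and step 4 an `es.update` call via the official client.

```python
# Local sketch of the batch loop; dicts/lists stand in for real indices.
# Hypothetical data: index A holds user docs, index B holds per-user events.
index_a = {
    "doc1": {"user": "alice"},
    "doc2": {"user": "bob"},
}
index_b = [
    {"user": "alice", "value": 10},
    {"user": "alice", "value": 20},
    {"user": "bob", "value": 5},
]

def compute_mymetric(hits):
    """Step 3: reduce the hits from index B to a single number."""
    return sum(h["value"] for h in hits)

for doc_id, doc in index_a.items():                          # step 1: fetch from A
    hits = [h for h in index_b if h["user"] == doc["user"]]  # step 2: filter B
    doc["mymetric"] = compute_mymetric(hits)                 # steps 3-4: compute + update

print(index_a["doc1"]["mymetric"])  # 30
```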
Now the program works correctly, and I can see that updated documents have the new field. The problem is that the field mappings do not get updated (I think this is to be expected), so Kibana does not pick up the field.
Do I need to update the mappings each time I decide to add a new field to an existing document or is there another way to somewhat automate this?
After I add a new field to an existing document, it does not show up in Kibana (Discover, the index pattern, or even when browsing the individual document), but it can be seen when the document is fetched directly from Elasticsearch.
I guess this is more of a Kibana issue, but my original question was: what should the workflow be when updating an Elasticsearch document with a new field? Update the mapping first (and the template, if there is one) and then add the field? Can we perform a blind mapping update, or should we first verify that the mapping does not already exist?
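For what it's worth, if you do want to declare the field up front, a single explicit mapping update before the batch run is enough; putting the same field mapping again with the same type is a no-op, while a conflicting type is rejected, so no existence check is needed. A sketch in Kibana Dev Tools syntax, assuming a hypothetical index named `index-a` (the exact request shape varies by Elasticsearch version; older versions include a mapping type in the path):

```
PUT index-a/_mapping
{
  "properties": {
    "mymetric": { "type": "double" }
  }
}
```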
Ok, so I think having null values was what tripped up Kibana. After adding real values, I can see the doc fine in Kibana.
On update, Elasticsearch automatically updates the mapping of the index (dynamic mapping), and Kibana picks up the new mapping once the index pattern's field list is refreshed. A manual mapping update is not necessary. Just make sure all values for the attribute are of the same type, or future indexing will fail with a mapping error.
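You can verify the dynamically created mapping after the first update. A sketch in Kibana Dev Tools syntax, again assuming a hypothetical index `index-a` and document id `doc1` (the `_update` path shown is the 7.x form):

```
POST index-a/_update/doc1
{
  "doc": { "mymetric": 42.0 }
}

GET index-a/_mapping
```

The `GET _mapping` response should now list `mymetric` with the type inferred from the first value indexed; documents that later send a value of an incompatible type for that field will be rejected.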