I have a Logstash index and ran delete_by_query to remove some of the documents, so no documents contain certain fields anymore. Then I deleted the index pattern to make sure the field list would be clean.
The strange behavior is that the Kibana index pattern still shows the fields that were already removed from the index.
Is this expected?
I'm using 7.6.2
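For context, the cleanup I ran was roughly equivalent to this (a minimal sketch using Python's requests; the host, the index name, and old_field are placeholders, not my actual values):

```python
import requests

# Delete every document that still carries the obsolete field.
# "logstash-2020.04.01" and "old_field" are placeholders.
resp = requests.post(
    "http://localhost:9200/logstash-2020.04.01/_delete_by_query",
    json={"query": {"exists": {"field": "old_field"}}},
)
print(resp.json())
```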
Kibana index patterns cache the fields in your indices. If you go to the index pattern management page, there is a reload button for the index pattern that refreshes its fields.
From the docs page "Refresh the index fields list": "You can refresh the index fields list to pick up any newly added fields. Doing so also resets the Kibana popularity counters for the fields. The popularity counters are used in Discover to sort fields in lists."
Kibana index patterns use the field caps API to fetch the list of fields from Elasticsearch. It seems you have to remove the fields from the mapping to prevent them from showing up in the index pattern, which would mean re-indexing the existing data.
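If you want to check what the index pattern will pick up, you can call the field caps API yourself; and actually removing a field means reindexing into an index whose mapping omits it. A rough sketch, with placeholder host, index, and field names:

```python
import requests

ES = "http://localhost:9200"

# Inspect what the index pattern will see: field caps reports every
# field present in the mappings of the matching indices.
caps = requests.get(f"{ES}/logstash-*/_field_caps", params={"fields": "*"}).json()
print(sorted(caps["fields"]))

# Removing a field for real: reindex into a destination whose mapping
# omits it, stripping the field from each document along the way.
requests.post(f"{ES}/_reindex", json={
    "source": {"index": "logstash-old"},
    "dest": {"index": "logstash-new"},
    "script": {"source": "ctx._source.remove('old_field')"},
})
```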
I can think of one workaround. Go to Management > Saved Objects, select your index pattern, and export it. This will give you a JSON file of the index pattern. Remove the fields you don't want to show up anymore, being careful not to break the JSON syntax (no trailing commas and the like). Then re-import the file (this will overwrite the existing index pattern). The fields should now be gone from all UIs (Visualize, Discover). If you later refresh the index pattern, you will have to do this again.
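For orientation, if I recall correctly the 7.x export is NDJSON (one JSON object per line), and the index pattern stores its field list as a JSON string inside attributes.fields, roughly like this heavily abbreviated sketch (real exports carry more attributes per field):

```json
{"type":"index-pattern","id":"my-index-pattern-id","attributes":{"title":"logstash-*","fields":"[{\"name\":\"@timestamp\",\"type\":\"date\"},{\"name\":\"old_field\",\"type\":\"string\"}]"}}
```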
I'm eager to try it, but it's too complicated: the export comes out as a single line, so I'm afraid that even if I manage to modify it, the result won't be what I expect, given the complexity.
If a future release could change the export to human-readable JSON with proper indentation, I'd be happy to modify it...
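In the meantime, the one-line export can be round-tripped through a small script: parse the embedded field list, pretty-print or edit it, and re-embed it. A sketch in Python, assuming the export file is named export.ndjson and the unwanted field is old_field (both placeholders):

```python
import json

# Load the NDJSON export: one saved object per line.
with open("export.ndjson") as f:
    objects = [json.loads(line) for line in f if line.strip()]

for obj in objects:
    if obj.get("type") == "index-pattern":
        fields = json.loads(obj["attributes"]["fields"])  # embedded JSON string
        print(json.dumps(fields, indent=2))               # readable view of the field list
        fields = [fld for fld in fields if fld["name"] != "old_field"]
        obj["attributes"]["fields"] = json.dumps(fields)  # re-embed as one line

# Write the cleaned file back out, one object per line, ready to re-import.
with open("export-cleaned.ndjson", "w") as f:
    for obj in objects:
        f.write(json.dumps(obj) + "\n")
```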