We upgraded from 2.X to 5.4.1 last night. Almost everything is working, but a few of our visualizations are not. They show the error message "Saved "field" parameter is now invalid. Please select a new field."
I looked through a few of the other threads, but my issue looks to be slightly different.
The common symptom in those other threads is that there is no data in the particular field, so when Kibana runs the field_stats command it doesn't know how to handle it. Once there is data in the index, refreshing the index pattern fixes the issue. In my case, however, there is data, and refreshing the index pattern doesn't solve anything.
Here is what the field looks like in Kibana. Notice that the type is a number, but searchable and aggregatable are disabled. This stops me from being able to take averages of this field in visualizations.
No errors are displayed there, so I tried running the _field_caps command (_field_stats says it is deprecated in 5.4):
GET /crm_search-*/_field_caps?fields=duration
And I got very interesting results. The indices are daily.
My index is actually an alias for two separate indices, each with very similar data. However, one index has duration mapped as a double while the other has it as a long. This isn't noticeable in Kibana, since it just displays everything as a number.
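For anyone else hitting this, a _field_caps response for a conflicting field looks roughly like the sketch below, with one entry per mapped type and the indices that use it. The index names and types here are placeholders standing in for my real daily indices, not my actual output:

```
{
  "fields": {
    "duration": {
      "double": {
        "searchable": true,
        "aggregatable": true,
        "indices": ["crm_search-2017.06.11"]
      },
      "long": {
        "searchable": true,
        "aggregatable": true,
        "indices": ["crm_search-2017.06.12"]
      }
    }
  }
}
```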
This index alias and pattern have been around for a while, so I assume that 2.X didn't care about the slightly different data types.
Just for fun, I tried running the deprecated field_stats command. field_stats says I have a conflict between an integer and a float, but field_caps says the conflict is between a double and a float.
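The call was just the deprecated endpoint against the same index pattern, something along the lines of:
GET /crm_search-*/_field_stats?fields=duration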
Digging through the actual mappings confirms that one is a double while the other is a float. I'm not sure where field_stats got integer from.
If I take the query that Kibana wants to run and run it through Dev Tools, everything works fine.
So Elasticsearch itself doesn't have a problem taking an average when the field is mapped with different numeric types across indices, but Kibana doesn't seem to know what to do with it.
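For anyone who wants to try the same check, a stripped-down version of the kind of average request Kibana builds looks something like this (the real request from the visualization has more wrapped around it; only the index pattern and field name here come from my setup):

```
GET /crm_search-*/_search
{
  "size": 0,
  "aggs": {
    "avg_duration": {
      "avg": { "field": "duration" }
    }
  }
}
```

Elasticsearch returns a single average without complaining, which matches what I saw in Dev Tools.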
Has anyone come across anything similar? Have you found any workarounds besides re-indexing with an updated template? I have around 1 billion documents that would have to go through that, so I would like to avoid reindexing if I can.
Thank you so much for the detail; this is extremely helpful. We have indeed seen an influx of those "field is a required parameter" errors in version 5.4, but we have so far been unable to track down exactly how to replicate it or what the causing factor was. Thanks to your detailed investigation and post, I suspect this is the problem.
No worries. I was able to come up with a sort-of workaround:
1. I removed the existing aliases linking the two indices together.
2. I refreshed the index pattern in Kibana. With the two indices separated, field_stats and field_caps each saw just one data type for duration, so the field is now searchable and aggregatable.
3. I added the aliases back (a sketch of the alias calls is below).
4. Do NOT refresh the index pattern after this.
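The alias juggling is just the standard _aliases API; the index and alias names below are made up to match my crm_search-* pattern, so swap in your own:

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "crm_search-2017.06.11", "alias": "crm_search" } },
    { "remove": { "index": "crm_search-2017.06.12", "alias": "crm_search" } }
  ]
}
```

Then, after the index pattern refresh, the same call with "add" actions puts the aliases back:

```
POST /_aliases
{
  "actions": [
    { "add": { "index": "crm_search-2017.06.11", "alias": "crm_search" } },
    { "add": { "index": "crm_search-2017.06.12", "alias": "crm_search" } }
  ]
}
```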
So now it works like I would expect it to. However, if I ever have to refresh the index pattern again it will break, so this is really only a temporary fix.
I imagine this could work elsewhere too. But if your conflict is between indices matched directly by the same index pattern (for example, duration is a double in logstash-2017.06.12 but a long in logstash-2017.06.11), then you will have to force the type with a template and then reindex; see the sketch below.
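If it comes to that, the rough shape on 5.x would be something like this. The template name, the choice of double, and the destination index name are all placeholders, and the template only applies to indices created after it exists (so the -fixed index has to match the logstash-* pattern):

```
PUT /_template/force_duration_double
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "duration": { "type": "double" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "logstash-2017.06.11" },
  "dest":   { "index": "logstash-2017.06.11-fixed" }
}
```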