We created a few visualizations based on certain fields. However, after refreshing the page, Kibana reports the following error:
There is a problem with that saved object
A field associated with this object no longer exists in the index pattern.
If you know what this error means, go ahead and fix it - otherwise click the delete button above.
So I went to Settings and refreshed the field list, and those fields came back. But whenever I refresh the visualization pages, they report the same errors and the fields are no longer indexed.
Is there some sort of external process that may be wiping out your index pattern? Nothing in Kibana should modify the field list, besides clicking that refresh button.
I don't think there is any external process wiping out my index pattern. Initially I thought it was because I wasn't parsing my logs correctly, so I changed my Logstash configuration (mutate, kv filter), but still no luck.
Hmmm, are you using a date-pattern-based index pattern (like [logstash-]YYYY.MM.DD) or one with a wildcard (like logstash-*)? I believe the problem in the issue you linked to should only affect the former.
If you refresh the field list and then refresh the page (without leaving the index pattern management page) do the fields still show up in the list? I just want to make sure the field list is actually getting persisted in elasticsearch.
If the field list is getting persisted, you could set the Elasticsearch slow log threshold to 0 and try to see at what point the index pattern document is getting modified in .kibana. The ID of the document should be the same as the index pattern name.
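If you want to try that, something like the following should do it (just a sketch, and it assumes Elasticsearch is reachable directly at localhost:9200, so swap in your own host). It drops the indexing slow log threshold on the .kibana index to zero so that every write to it gets logged:

    curl -XPUT 'http://localhost:9200/.kibana/_settings' -d '{
      "index.indexing.slowlog.threshold.index.warn": "0s"
    }'

Then watch the slow log for writes that touch the index pattern document.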
It's also always worth asking, are you using a proxy or load balancer that could be returning cached results or switching between clusters?
Ah interesting. If you open your browser's dev tools before refreshing the field list, do you see any errors in the console or any failed requests in the network tab?
That shouldn't be a problem, ES 2.3.5 and Kibana 4.5.4 are compatible.
It's odd that the field list is ever populated in the first place if you're getting a 404 there. You should see two or three requests in the network tab of your dev tools when you click the refresh fields button. Could you share the full request/response info? Or, if there's no sensitive info you need to omit, you could just capture a HAR.
After clicking the refresh field list button, I also got a 413 error (Request Entity Too Large). I think it's also worth mentioning that I got around 3900 fields after the refresh. That doesn't look normal.
It looks like you're hitting Kibana and/or Elasticsearch's max payload size. These are configurable via Kibana's maxPayloadBytes and Elasticsearch's http.max_content_length.
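For reference, those settings live in kibana.yml and elasticsearch.yml respectively. The values below are just illustrative, not recommendations (and if I remember right the kibana.yml key is server.maxPayloadBytes):

    # kibana.yml - raise Kibana's max request payload size (bytes)
    server.maxPayloadBytes: 10485760

    # elasticsearch.yml - raise Elasticsearch's max HTTP request size
    http.max_content_length: 200mb

Both require restarting the respective service to take effect.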
But it sounds like you're saying this large number of fields is unexpected? I notice your index pattern is just *. Are you perhaps casting too wide a net and including indices that you don't intend to? If you look at the response to the GET request for <kibana-url>/elasticsearch/*/_mapping/field/* (before the one that fails) you can see all of the fields from all of the indices that are getting pulled back.
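If you'd rather poke at it outside the browser, you can hit the field mapping API directly with something like this (a sketch only; swap in your own Elasticsearch host, or use the Kibana proxy URL above if you can't reach Elasticsearch directly):

    # List every field mapping across every index matched by *
    curl 'http://localhost:9200/*/_mapping/field/*?pretty'

The size of that response gives you a good idea of how many indices and fields the * pattern is really pulling in.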
Should I put https://kibana.logit.io/app/ as our Kibana URL since we are hosting it on logit.io? Sorry for the dumb question; I'm not really familiar with ELK.
The only way to reduce the number of fields is to create a narrower index pattern. The * index pattern captures every index that exists in Elasticsearch. If you just wanted to look at logstash data, for instance, you could create a pattern like logstash-*. Just * likely includes things you don't want to see, like Kibana's own internal index .kibana.
For some reason, all of our logs that are parsed through Logstash will only show up if I change the index pattern to *-*. If I change the index pattern to logstash-*, our logs don't show up in Kibana, and therefore I can't pick up the fields. Are there any settings I need to change to make sure the logs go to *-*? I tried setting it as the default index pattern and still no luck.
Also, I think we have a lot of garbage fields in *-*, and that's why it gets to 4000 fields and explodes.
I would investigate your Logstash config to see what indices it's using. It sounds to me like the best way to fix this is to do some data cleanup and ensure that future data gets indexed into consistently named indices.
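For example, if everything should end up in daily logstash-* indices, the elasticsearch output in your Logstash config would look roughly like this (a sketch only; the hosts value is a placeholder for your own Elasticsearch endpoint):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        # One index per day, all matched by the logstash-* pattern in Kibana
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }

If some of your outputs are writing to differently named indices, that would explain why only *-* matches everything.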