You could also look at all the indices that exist in your cluster to get an idea of where your logs are going. The cat APIs are good for this sort of thing.
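For example, a quick sketch with Python and the requests library (the cluster address is just a placeholder; you could equally run GET _cat/indices?v straight from Dev Tools or curl):

```python
import requests

# Placeholder cluster address -- adjust host, port, and auth for your setup.
ES = "http://localhost:9200"

# List every index with its health, doc count, and size, sorted by name,
# so you can see exactly which indices your logs are landing in.
resp = requests.get(f"{ES}/_cat/indices", params={"v": "true", "s": "index"})
print(resp.text)
```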
Just a quick question about all those garbage fields. What is the best way to clean those up?
My question is: since those fields are picked up dynamically by Kibana from the logs, even if I modify my Logstash config/log formats, Kibana will still pick up the garbage fields, right? All the old logs are still there. So how would I actually clean them up?
Also, is it a good idea to change index.mapping.total_fields.limit to a big number just to get it to work for now?
The reindex API could help you clean up your old data. You can write a script that cleans up the documents as they're re-indexed.
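Something along these lines, for example (a rough sketch with Python and requests; the index names and field names are placeholders, and older ES versions use "inline" rather than "source" for the script body):

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Copy the old documents into a new index, dropping the unwanted
# dynamically-mapped fields along the way. The index patterns and
# field names below are placeholders -- substitute your own.
body = {
    "source": {"index": "logstash-2017.10.*"},
    "dest": {"index": "logstash-clean-2017.10"},
    "script": {
        "lang": "painless",
        "source": (
            "ctx._source.remove('garbage_field_1');"
            "ctx._source.remove('garbage_field_2');"
        ),
    },
}
resp = requests.post(f"{ES}/_reindex", json=body)
print(resp.json())
```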
Before re-indexing, you should create an index template to define your mappings ahead of time.
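For instance (a sketch using the composable template API available in ES 7.8+; on older versions the endpoint is PUT _template/&lt;name&gt; and the layout differs slightly, and all names and fields here are placeholders):

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# A template applied to any new index matching "logstash-clean-*".
# Setting "dynamic" to false (or "strict") stops unknown fields from
# being added to the mapping, so the garbage fields can't come back.
template = {
    "index_patterns": ["logstash-clean-*"],
    "template": {
        "settings": {"number_of_shards": 1},
        "mappings": {
            "dynamic": False,
            "properties": {
                "@timestamp": {"type": "date"},
                "message": {"type": "text"},
                "level": {"type": "keyword"},
            },
        },
    },
}
resp = requests.put(f"{ES}/_index_template/logstash-clean", json=template)
print(resp.json())
```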
I don't think index.mapping.total_fields.limit will help here. Kibana stores your list of fields as a blob of JSON under a single field in a document in the .kibana index. From the errors I've seen, it looks like you're hitting the maximum HTTP payload size of either ES or Kibana, which you can configure like I described here.