We are using Kibana to consolidate log files from multiple application nodes and merge them into a single stream. In this use case we don't need any aggregation; we only want to search for error messages and export fields of the event such as the request body, response body, stack trace, etc. Unfortunately these fields can become very long.
I noticed that the export in Discover fails if the selected fields are too long. If I select smaller fields, the export works.
Next week I can reproduce the error and check whether there are any more specific error messages in the logs, but maybe you have a hint, in case I just need to tweak some parameters in Kibana or Elasticsearch.
And one additional question: is there a way to export from the Discover module without saving the query first?
When Kibana generates the CSV, it stores it in Elasticsearch for later download. Elasticsearch enforces a maximum size on any single upload, and I think your failures might be happening because the generated CSV is larger than Elasticsearch will accept.
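If that is the cause, two settings are worth checking. This is a minimal sketch assuming a default setup; the values below are illustrative, not recommendations, and which limit bites first can vary by Kibana/Elasticsearch version:

```yaml
# kibana.yml -- cap on the size of a generated CSV report
# (default is 10485760 bytes, i.e. 10 MB)
xpack.reporting.csv.maxSizeBytes: 52428800   # e.g. 50 MB

# elasticsearch.yml -- cap on the size of any single HTTP request body,
# which the stored report must also fit within (default 100mb)
http.max_content_length: 200mb
```

Note that raising these limits increases memory pressure on both Kibana and Elasticsearch while the report is generated and stored, so narrowing the exported columns or the time range may be the safer fix.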