I recently upgraded Kibana from 7.10.2 to 7.15.0, and since the upgrade I have started noticing the error below in kibana.log while a CSV export is running, along with missing data in the exported CSV file. The error message is repeated sporadically throughout kibana.log until the CSV export completes.
I had no issues exporting/generating CSVs in 7.10.2.
The issues that are occurring (probably related):
The CSV file does not contain all of the expected data
Error message / Kibana yellow status
The CSV export does not contain the data that is shown in Discover. For example, for the time span 2020-01-01 00:00:00 to 2021-01-01 00:00:00, the exported CSV file sometimes only contains data from 2021-01-01 00:00:00 back to 2020-03-11 00:00:00 (descending order); in other words, the CSV does not include all of the events within the selected time span.
The error message below appears in kibana.log while the CSV export is running.
TypeError: Cannot read property 'convert' of undefined
    at /usr/share/kibana/x-pack/plugins/reporting/server/export_types/csv_searchsource/generate_csv/generate_csv.js:177:42
    at Array.map (<anonymous>)
    at CsvGenerator.generateRows (/usr/share/kibana/x-pack/plugins/reporting/server/export_types/csv_searchsource/generate_csv/generate_csv.js:239:11)
    at CsvGenerator.generateData (/usr/share/kibana/x-pack/plugins/reporting/server/export_types/csv_searchsource/generate_csv/generate_csv.js:355:14)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at runTask (/usr/share/kibana/x-pack/plugins/reporting/server/export_types/csv_searchsource/execute_job.js:46:12)
In addition to this error message, the Kibana node that is processing the CSV export goes into "YELLOW status" and several plugins go yellow, two of them being:
plugin:reporting services are degraded
plugin:encryptedSavedObjects [security]: [taskManager]: Task Manager is unhealthy
My assumption is that this happens because the CSV export generates a lot of error messages and some health threshold is surpassed.
I am using the following settings in kibana.yml and elasticsearch.yml that affect reporting:
xpack.reporting.csv.maxSizeBytes: 209715200
elasticsearch.requestTimeout: 60000
xpack.reporting.queue.timeout: 3000000
xpack.reporting.csv.scroll.size: 1500
xpack.reporting.csv.scroll.duration: 1m
What I have tested
Checked all of the elasticsearch.log files within the cluster: no indication of errors while the CSV export is running
Generated .har files to identify whether the error message above occurs at the same point in time as a request/fetch is sent. I can only see that the fetches take a few ms longer once the export has been running for about 3-4 minutes, but I don't think this is an issue.
I performed light user activity during the export to rule out any timeout issues
I used curl -X GET "localhost:5601/api/task_manager/_health" -H 'kbn-xsrf: true' to get the status of the Task Manager before, during, and after the CSV export and compared the output, but I am not really sure what I should be looking for.
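To make those before/during/after snapshots easier to compare, a small script like the one below can diff only a handful of fields instead of the whole health document. This is just a sketch: the field paths (status, runtime drift, overdue workload) are assumptions based on the general shape of the 7.x Task Manager health response, so verify them against your actual _health output before relying on it.

```python
import json

# Fields worth watching across snapshots; these paths are assumptions
# based on the 7.x Task Manager health API response shape
WATCHED = [
    ("status",),
    ("stats", "runtime", "value", "drift", "p99"),
    ("stats", "workload", "value", "overdue"),
]

def pick(doc, path):
    """Walk a nested dict along `path`, returning None if a key is missing."""
    for key in path:
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

def summarize(doc):
    """Flatten the watched fields into a single-level dict."""
    return {".".join(p): pick(doc, p) for p in WATCHED}

def compare(before_file, during_file):
    """Return only the watched fields that changed between two snapshots."""
    with open(before_file) as f:
        before = summarize(json.load(f))
    with open(during_file) as f:
        during = summarize(json.load(f))
    return {k: (before[k], during[k]) for k in before if before[k] != during[k]}
```

For example, compare("tm_health_before.json", "tm_health_during.json") would print an empty dict if nothing watched changed, which at least tells you where not to look.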
I am not getting any errors in the Reporting view (/management/insightsAndAlerting/reporting) indicating that the CSV has maxed out or anything like that
Ran repeated exports of the same search; the number of rows exported to the CSV file varied every time
Yet to try
- Enable Kibana debug logging and see if it reveals anything else
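For the debug-logging item, a minimal kibana.yml sketch for the logging system introduced in 7.x. Raising the root level to debug definitely works but is very noisy; the narrower plugins.reporting logger name is an assumption on my part, so check it against your version's logging docs:

```yaml
# kibana.yml - hypothetical sketch; verify logger names for your version
logging:
  root:
    level: info
  loggers:
    # assumed logger name for the reporting plugin
    - name: plugins.reporting
      level: debug
```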
Does anyone know why I am getting this error, or has anyone experienced something similar?