Hi, I am hitting an error in our infrastructure when exporting more than 1,000 saved objects (visualizations in particular), either through the API or from inside the dev tools. However, exporting the same objects via a list of IDs with the REST API works.
The error is as follows
{"statusCode":400,"error":"Bad Request","message":"all shards failed: search_phase_execution_exception: [parse_exception] Reason: failed to parse date field [-9223372036854776000] with format [strict_date_optional_time||epoch_millis]: [failed to parse date field [-9223372036854776000] with format [strict_date_optional_time||epoch_millis]]"}
which looks more like an Elasticsearch error and is rather cryptic to me, especially because every visualization is exportable via its ID.
Decreasing savedObjects.maxImportExportSize leads to the expected error about reaching the maximum export size.
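(For reference, the per-ID export that works looks roughly like this; a minimal sketch assuming a local Kibana without auth, the Python requests library, and hypothetical visualization IDs.)

```python
# Sketch: export specific saved objects by ID via the saved objects export API.
# Assumptions: Kibana on localhost:5601, no authentication, `requests` installed.
import requests

KIBANA = "http://localhost:5601"

resp = requests.post(
    f"{KIBANA}/api/saved_objects/_export",
    headers={"kbn-xsrf": "true"},  # required for write methods on the Kibana API
    json={
        "objects": [
            # hypothetical IDs, for illustration only
            {"type": "visualization", "id": "my-viz-1"},
            {"type": "visualization", "id": "my-viz-2"},
        ],
        "includeReferencesDeep": True,
    },
)
resp.raise_for_status()

# The response body is NDJSON, one saved object per line.
with open("export.ndjson", "wb") as f:
    f.write(resp.content)
```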
Hi @jsanz, the cluster is running 7.17.6 (Elasticsearch and Kibana), so the fix should be included.
That fix also seems to address exporting more than 10,000 saved objects, whereas here it breaks at only 1,000.
Hi, sorry, my bad: I missed a zero in your first post.
I tested on 7.17 and I could repeatedly export and import my saved objects up to 40K. That is: start fresh; import the saved objects from Filebeat, Packetbeat, and Metricbeat to get 1.2K objects; then repeatedly export all of them and import them again with the "Create new objects with random IDs" option in the importer, doubling my objects each time.
By default you cannot export more than 10K objects, so I upped my settings like this:
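(Along these lines; the exact value just needs to exceed the number of objects you export, 40K in my test.)

```yaml
# kibana.yml: raise the ceiling on saved objects import/export
savedObjects.maxImportExportSize: 40000
```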
The last export of 20K docs took a while in both directions, export and import, but it eventually finished, so I ended up with my (bloated) Kibana holding 41,160 saved objects.
Do you see anything in your logs apart from that line you shared? Did you export ALL the saved objects by their identifiers? It seems more related to a specific saved object crashing the process than to the bulk operation itself.
No, that is the problem: I am not seeing anything apart from the error above.
But it seems unrelated to any restriction on the number of bytes, because it happens exactly at the 1,000 mark.
I tested downloading all the objects with the code below and it works.
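(A sketch along those lines, not the exact script; it assumes a local Kibana without auth and the Python requests library, and exports everything by type.)

```python
# Sketch: export ALL saved objects of the given types via the export API.
# Assumptions: Kibana on localhost:5601, no authentication, `requests` installed.
import requests

KIBANA = "http://localhost:5601"

resp = requests.post(
    f"{KIBANA}/api/saved_objects/_export",
    headers={"kbn-xsrf": "true"},
    json={"type": ["index-pattern", "search", "visualization", "dashboard"]},
    stream=True,
)
resp.raise_for_status()

# Stream the NDJSON response to disk so large exports don't sit in memory.
with open("all_objects.ndjson", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)
```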
Aren't you using the Kibana API for saved objects?
I see you are querying /api/kibana/management/saved_objects/_find instead of /api/saved_objects/_find. Have you noticed the same issue in both APIs? The one you are using is meant to be used only by the Kibana application, not by third-party integrations.
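(For reference, paging through the public _find endpoint looks roughly like this; a sketch assuming a local Kibana without auth and the Python requests library.)

```python
# Sketch: page through the public saved objects find API.
# Assumptions: Kibana on localhost:5601, no authentication, `requests` installed.
import requests

KIBANA = "http://localhost:5601"

page = 1
while True:
    resp = requests.get(
        f"{KIBANA}/api/saved_objects/_find",
        params={"type": "visualization", "per_page": 100, "page": page},
    )
    resp.raise_for_status()
    body = resp.json()
    for obj in body["saved_objects"]:
        print(obj["type"], obj["id"])
    # Stop once we have paged past the reported total.
    if page * body["per_page"] >= body["total"]:
        break
    page += 1
```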
Yes, I do. I used that API only to confirm the error, and that it only happens when querying the all-objects API, not when querying the API with individual IDs.
The error from the UI and from the API is the same, however.
At this point I'm afraid I can only suggest you open an issue in the Kibana tracker. Please add a link to this discussion, but without a reproducible scenario for the developers, I'm unsure how much we can help.