Error when exporting more than 1000 saved objects

Hi, I am hitting an error in our infrastructure when exporting more than 1,000 saved objects (visualizations in particular), either through the API or from inside the dev tools. However, exporting the same objects via a list of IDs with the REST API works.

The error is as follows

{"statusCode":400,"error":"Bad Request","message":"all shards failed: search_phase_execution_exception: [parse_exception] Reason: failed to parse date field [-9223372036854776000] with format [strict_date_optional_time||epoch_millis]: [failed to parse date field [-9223372036854776000] with format [strict_date_optional_time||epoch_millis]]"}

which looks more like an Elasticsearch error and is rather cryptic to me, especially because every visualization is exportable via its ID.

Decreasing the savedObjects.maxImportExportSize leads to the expected error of reaching the max export size.
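
For reference, these are roughly the two requests I am comparing (the URL and version header are from my setup, and the ID is just a placeholder):

# type-based export: this is the call that fails once more than 1,000 objects are involved
curl -s "$URL"'/api/saved_objects/_export' -H 'kbn-version: 7.17.6' -H 'content-type: application/json' --data-raw '{"type":"visualization","includeReferencesDeep":false}'

# ID-based export of the same objects works fine
curl -s "$URL"'/api/saved_objects/_export' -H 'kbn-version: 7.17.6' -H 'content-type: application/json' --data-raw '{"objects":[{"type":"visualization","id":"<some-id>"}],"includeReferencesDeep":false}'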

Thanks for any help :slight_smile:

What version of the stack are you running? As per the comments here, the API was improved in PR #89915 and the UI in PR #118335.

Hi @jsanz, the cluster is running 7.17.6 (both Elasticsearch and Kibana), so the fix should be included.
Also, those fixes seem to be about exporting more than 10,000 saved objects, but here it's only 1,000.

Hi, sorry, my bad, I missed a zero in your first post.

I tested on 7.17 and could repeatedly export and import my saved objects up to 40K. That is, I started fresh, imported the saved objects from Filebeat, Packetbeat, and Metricbeat to get 1.2K objects, and then repeatedly exported all of them and imported them again, doubling my objects each time with the Create new objects with random IDs option in the importer.
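
The same round trip can also be scripted against the API if that is easier to repeat. A rough sketch, assuming your Kibana base URL is in $URL and the limits below are raised:

# export every visualization and dashboard to an ndjson file
curl -s "$URL"'/api/saved_objects/_export' -H 'kbn-xsrf: true' -H 'content-type: application/json' --data-raw '{"type":["visualization","dashboard"],"includeReferencesDeep":false}' -o export.ndjson

# re-import the file as new copies, roughly the API equivalent of the "Create new objects with random IDs" option
curl -s "$URL"'/api/saved_objects/_import?createNewCopies=true' -H 'kbn-xsrf: true' --form file=@export.ndjson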

By default you cannot export more than 10K, so I upped my settings like this:

savedObjects.maxImportExportSize: 100000
savedObjects.maxImportPayloadBytes: 104857600
server.maxPayloadBytes: 104857600

The last export of 20K objects took a bit of time in both directions, export and import, but it eventually finished, so I ended up with my (bloated) Kibana holding 41,160 saved objects.


Do you see anything in your logs apart from that line you shared? Did you export ALL the saved objects by their identifiers? It seems more related to a specific saved object crashing the process than to the bulk operation itself.
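
If you want to narrow it down, one rough way (just a sketch, assuming $URL points at your Kibana and $ids holds the visualization IDs, one per line) is to export each object on its own and log the ones that fail:

while read -r id; do
    # export a single visualization and capture only the HTTP status code
    status=$(curl -s -o /dev/null -w '%{http_code}' "$URL"'/api/saved_objects/_export' -H 'kbn-xsrf: true' -H 'content-type: application/json' --data-raw '{"objects":[{"type":"visualization","id":"'"$id"'"}],"includeReferencesDeep":false}')
    [ "$status" != "200" ] && echo "export failed for $id (HTTP $status)"
done <<EOF
$ids
EOF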

Hi, sorry for the delayed response.

No, that is the problem: I am not seeing anything in the logs apart from the error above.
But it seems unrelated to any restriction on the number of bytes, because it happens exactly at the 1,000 mark.

I tested downloading all the objects with the code below and it works.

# collect the IDs of every visualization via the management _find endpoint
ids=$(curl -s "$URL"'/api/kibana/management/saved_objects/_find?perPage=2000&page=1&fields=id&type=visualization' -H 'content-type: application/json' -H 'kbn-version: 7.17.6' --compressed | ./jq -r '.saved_objects[].id')
echo "visualization count is $(echo $ids | wc -w)"

# build the export request body by appending one {id, type} entry per visualization
obj='{"objects": [],"includeReferencesDeep":false}'
while read -r id; do
    obj=$(echo $obj | ./jq '.objects[.objects| length] |= . + {"id":"'"$id"'","type":"visualization"}')
done <<EOF
$ids
EOF

# export all visualizations by explicit ID
curl -s "$URL"'/api/saved_objects/_export' -H 'kbn-version: 7.17.6' -H 'accept: */*' -H 'content-type: application/json' --data-raw "$obj" --compressed

Aren't you using the Kibana API for saved objects?

I see you are querying /api/kibana/management/saved_objects/_find instead of /api/saved_objects/_find. Have you noticed the same issue with both APIs? The one you are using is meant to be used only by the Kibana application, not by third-party integrations.
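
For scripting, the documented endpoint takes similar parameters, something like this (a sketch, adjust the URL and auth for your cluster):

# list visualization IDs through the public saved objects _find API
curl -s "$URL"'/api/saved_objects/_find?type=visualization&per_page=2000' | jq -r '.saved_objects[].id'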

Yes, I do. I used that API only to confirm the error, and that it only happens when querying for all objects, not when querying with individual IDs.

The error from the UI and from the API is the same, however.

At this point I'm afraid I can only suggest that you open an issue in the Kibana tracker. Please add a link to this discussion, but without a reproducible scenario for the developers I'm not sure how much more we can help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.