I'm trying to restore an export but I can't get it right. The export was done via the Kibana web UI, and the result is a single .json file with all the info about dashboards, visualizations, etc.
I'm trying to use the saved objects API to restore it, but I don't know how to use it. I tried things like
opening the Dev Tools console in Kibana and executing this in there:
POST api/saved_objects/_bulk_create
(copy/paste of the content of the previously exported json)
But I keep getting these errors:
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "request body is required"
      }
    ],
    "type": "parse_exception",
    "reason": "request body is required"
  },
  "status": 400
}
What am I doing wrong? Could you please show me an example of how to do this right?
Thank you.
edit: I just saw this on the front page of the API documentation: "You cannot access these endpoints via the Console in Kibana." So I guess my mistake is trying to use the Dev Tools console. Could you tell me how to do this with any other tool, please?
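(For reference, these endpoints can be reached with any plain HTTP client outside the Console, as long as the kbn-xsrf header is set. Below is a minimal, untested sketch using Python and the requests library; the Kibana URL, the lack of authentication, and the example payload are assumptions to adjust to your setup.)

# Minimal sketch: call the saved objects bulk_create API from outside
# the Kibana Console (the Console blocks these endpoints).
# Assumptions: Kibana on localhost:5601, no authentication.
import requests

KIBANA_URL = "http://localhost:5601"  # adjust to your Kibana instance

# Payload in the shape bulk_create expects: type, id, attributes.
payload = [
    {
        "type": "index-pattern",
        "id": "my-pattern",
        "attributes": {"title": "my-pattern-*"},
    }
]

resp = requests.post(
    f"{KIBANA_URL}/api/saved_objects/_bulk_create",
    json=payload,
    headers={"kbn-xsrf": "true"},  # any value works; the header just has to be present
)
resp.raise_for_status()
print(resp.json())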
I'm now able to do this via Ansible and the uri module.
---
- name: import info for the demo
  hosts: kibana
  tasks:
    - name: import info for the demo
      uri:
        url: http://localhost:5601/api/saved_objects/_bulk_create
        method: POST
        body: "{{ lookup('file','../files/test.json') }}"
        body_format: json
        headers:
          kbn-xsrf: true
But it only works when the content of the file is the example given in the Elastic KB article:
[
  {
    "type": "index-pattern",
    "id": "my-pattern",
    "attributes": {
      "title": "my-pattern-*"
    }
  },
  {
    "type": "dashboard",
    "id": "my-dashboard",
    "attributes": {
      "title": "Look at my dashboard"
    }
  }
]
If I use the export.json created by the "Export" button on the Kibana web UI, I get this error (only showing an extract, as the real output is way too big to copy/paste here):
fatal: [elasticstack01.essi.lab]: FAILED! => {"cache_control": "no-cache", "changed": false,
"connection": "close", "content": "{"statusCode":400,"error":"Bad Request","message":"\"value\" at position 0 fails because [child \"type\" fails because [\"type\" is required], child \"attributes\" fails because [\"attributes\" is required], \"_type\" is not allowed, \"_id\" is not allowed, \"_meta\" is not allowed, \"_source\" is not allowed]. \"value\" at position 1 fails because [child \"type\" fails because [\"type\" is required], child \"attributes\" fails because [\"attributes\" is required], \"_type\" is not allowed, \"_id\" is not allowed, \"_meta\" is not allowed, \"_source\" is not allowed]. \"value\" at position 2 fails because [child \"type\" fails because [\"type\" is required], child \"attributes\" fails because [\"attributes\" is required], \"_type\" is not allowed, \"_id\" is not allowed, \"_meta\" is not allowed,
The reason you are getting these errors is that there currently isn't a server-side API to import the file directly. At this time the import logic lives in the web UI: it reads the file, transforms the objects, and calls the create API one object at a time.
As a workaround to get your file working with the bulk_create API, you will have to replicate that front-end logic and transform the exported objects into the shape the bulk_create API expects.
We are currently working on a server-side import and export API that will accept files directly, but in the meantime this is the best workaround I can think of if the web UI import isn't suitable.
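For what it's worth, that transformation is small. Here is a rough sketch of it in Python (untested), assuming the export is a plain JSON array whose objects carry _type, _id and _source, as the error message above suggests; the file name export.json, the Kibana URL and the lack of authentication are placeholders to adjust:

# Rough sketch: convert a Kibana web UI export (objects with _type/_id/_source)
# into the shape the saved objects bulk_create API expects (type/id/attributes),
# then POST the result.
import json
import requests

KIBANA_URL = "http://localhost:5601"  # adjust to your Kibana instance

with open("export.json") as f:
    exported = json.load(f)

bulk = [
    {
        "type": obj["_type"],
        "id": obj["_id"],
        "attributes": obj["_source"],
        # _meta and other underscore-prefixed fields are dropped;
        # bulk_create rejects them.
    }
    for obj in exported
]

resp = requests.post(
    f"{KIBANA_URL}/api/saved_objects/_bulk_create",
    json=bulk,
    headers={"kbn-xsrf": "true"},
)
resp.raise_for_status()
print(resp.json())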
Ohh, ok. This is great. I did edit the file to strip the preceding "_", but I didn't change "_source" to "attributes" (I didn't know I could do that and have it still work), so I was about to give up already.