Export Saved Objects via REST API


(David McClain) #1

Hi,

I'm trying to figure out the best way to programmatically export all of the saved objects - Saved Searches, Visualizations, and Dashboards - as individual documents.

I want to be able to create an SDLC around the Dashboard development process and I need to be able to capture anything/everything that a final Dashboard is dependent on, and then store that into a repository for versioning.

So far, the only way I've found (at least per the documentation) to export these items is through the Kibana GUI. And even then, doing an 'Export All' creates a single document that contains everything.

I would like to call the REST API to get a listing of the Saved Objects, and then use that listing to export each one individually into its own file. Is this possible?

As a secondary question: how can I make it so that a newly created index, with no data, can successfully use any saved objects regardless of the fields they reference? This would be for when I know what fields my data will contain (because we've built dashboards against the data), but I'm building out a new environment and don't want to wait for the data to come in and create those fields before I can import the Saved Objects that rely on those fields.

Edit: Moved to Kibana category

Thanks


(Felix Stürmer) #2

Hi @David_McClain,

I have been in the same situation before and this is how I made it work:

  • I accessed the Kibana index (.kibana by default) directly through the Elasticsearch API, because the internal Kibana import/export API is not documented and I wanted to be able to import the saved objects before Kibana starts.
  • In order to store the state in version control, a custom script would...
    • dump the mappings of the Kibana index (GET ${ES_HOST}/${KIBANA_INDEX}/_mapping)
    • dump the kibana settings (GET ${ES_HOST}/${KIBANA_INDEX}/config/${KIBANA_VERSION}/_source)
    • list the dashboards, visualizations, searches and index-patterns: (GET ${ES_HOST}/${KIBANA_INDEX}/dashboard,visualization,search,index-pattern/_search + some json parsing)
    • dump the dashboards, visualizations, searches and index-patterns (GET ${ES_HOST}/${KIBANA_INDEX}/${OBJECT_TYPE}/${OBJECT_ID}/_source)
  • When provisioning a new cluster and Kibana a script would...
    • load the mappings into the Kibana index (PUT ${ES_HOST}/${KIBANA_INDEX})
    • load the settings and saved objects (PUT ${ES_HOST}/${KIBANA_INDEX}/${OBJECT_TYPE}/${OBJECT_ID})
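The dump/restore steps above could be sketched roughly as follows. This is a minimal illustration, not the poster's actual script: the host, index name, dump directory, and the `size=1000` page limit are all assumptions you would adjust for your environment (and for the doc-type layout of your Elasticsearch/Kibana version).

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical environment -- adjust for your cluster.
ES_HOST = "http://localhost:9200"
KIBANA_INDEX = ".kibana"
OBJECT_TYPES = "dashboard,visualization,search,index-pattern"
DUMP_DIR = Path("kibana-dump")

def object_url(obj_type, obj_id):
    """Build the per-object document URL used for both dump and restore."""
    return f"{ES_HOST}/{KIBANA_INDEX}/{obj_type}/{obj_id}"

def fetch_json(url):
    """GET a URL and decode the JSON response."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def parse_hits(search_response):
    """Extract (type, id, source) triples from an ES _search response."""
    return [(h["_type"], h["_id"], h["_source"])
            for h in search_response["hits"]["hits"]]

def dump_saved_objects():
    """Write each saved object to its own file, grouped by type."""
    listing = fetch_json(
        f"{ES_HOST}/{KIBANA_INDEX}/{OBJECT_TYPES}/_search?size=1000")
    for obj_type, obj_id, source in parse_hits(listing):
        type_dir = DUMP_DIR / obj_type
        type_dir.mkdir(parents=True, exist_ok=True)
        (type_dir / f"{obj_id}.json").write_text(json.dumps(source, indent=2))

def restore_saved_objects():
    """PUT each dumped document back into the Kibana index."""
    for path in sorted(DUMP_DIR.glob("*/*.json")):
        req = urllib.request.Request(
            object_url(path.parent.name, path.stem),
            data=path.read_bytes(),
            headers={"Content-Type": "application/json"},
            method="PUT")
        urllib.request.urlopen(req)
```

Because each object lands in its own file under a per-type directory, the dump directory can be committed to version control as-is, which is the SDLC goal from the original question.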

If you don't want to modify the settings or saved objects, an export tool like elasticdump might be useful. To selectively export only specific saved objects and their dependencies, you currently have to write custom code.

This setup would allow for provisioning of the Kibana settings and saved objects before Kibana has been started and before the environment has been populated with data.


(David McClain) #3

Awesome. Thanks for your reply.

Followup question:
If I load a bunch of data into an index and then build dashboards against that data, where the dashboards depend on certain fields existing:

If I then export the mapping (I'm assuming the mapping is updated each time a new field is added) and import it into an empty Elasticsearch, that effectively brings all the fields and their definitions over to the new Elasticsearch, and I should be able to import all the saved objects and use the Dashboard even though there is no data there, right?

This would be an attempt to solve the problem of not being able to import a Dashboard until the field it requires exists.


(Felix Stürmer) #4

Kibana stores the information about the fields of each index pattern in documents of type index-pattern in the .kibana index. There is no hard requirement that those fields actually exist in the corresponding Elasticsearch data indices (as long as you don't click "refresh field list" in Kibana's management section).

Just to avoid misunderstandings, the mappings I was talking about are the mappings of the .kibana index itself, which define the schema of Kibana's saved objects. These mappings need to be imported into the new .kibana index before any saved object is imported, because Elasticsearch might otherwise automatically derive the wrong mappings.
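That ordering constraint could look like this in a provisioning script. Again a hedged sketch, not the author's code: the host is hypothetical, and `extract_mappings` assumes the shape of a `GET ${ES_HOST}/${KIBANA_INDEX}/_mapping` dump, which is keyed by index name.

```python
import json
import urllib.request

ES_HOST = "http://localhost:9200"  # hypothetical host

def extract_mappings(mapping_dump, index=".kibana"):
    """A GET /_mapping dump is keyed by index name; unwrap it so the
    mappings can be supplied at index-creation time."""
    return mapping_dump[index]["mappings"]

def put_json(url, body):
    """PUT a JSON body to Elasticsearch and return the decoded response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def provision(mapping_dump, saved_objects):
    """Create the Kibana index with explicit mappings *before* importing
    any saved object, so Elasticsearch never auto-derives a schema."""
    put_json(f"{ES_HOST}/.kibana",
             {"mappings": extract_mappings(mapping_dump)})
    for obj_type, obj_id, source in saved_objects:
        put_json(f"{ES_HOST}/.kibana/{obj_type}/{obj_id}", source)
```

The key design point is simply sequencing: the index creation with dumped mappings must complete before the first saved-object PUT, otherwise dynamic mapping would infer the schema from whichever document arrives first.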

Edited for typos


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.