Bringing up Dockerized Kibana for distribution including all saved objects

Would appreciate any pointers on this...

Currently we pull the kibana:version image and use docker-compose to create a dev/test environment. Then, once both Elasticsearch and Kibana are running, we run another script to create index templates on the Elasticsearch instance, and then do the same for lots of Kibana saved objects: visualizations, dashboards, index patterns, defaults, etc.
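
For reference, a minimal sketch of such a provisioning script - the localhost ports, template name, and file names are assumptions, not our actual setup:

```shell
#!/usr/bin/env bash
# Provision a freshly started Elasticsearch + Kibana pair.
# Assumes both are reachable on their default localhost ports and that
# index-template.json / saved-objects.ndjson are placeholders for your files.
set -euo pipefail

ES_URL="http://localhost:9200"
KIBANA_URL="http://localhost:5601"

# 1. Create an index template on Elasticsearch.
curl -fsS -X PUT "$ES_URL/_template/my-logs" \
  -H 'Content-Type: application/json' \
  -d @index-template.json

# 2. Import saved objects (dashboards, visualizations, index patterns)
#    through Kibana's saved objects API; the kbn-xsrf header is required.
curl -fsS -X POST "$KIBANA_URL/api/saved_objects/_import?overwrite=true" \
  -H 'kbn-xsrf: true' \
  --form file=@saved-objects.ndjson
```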

However, this is somewhat time-consuming, and if there's an issue with someone's local environment they might not be able to complete all the build steps. We're also going to have to consider versioning issues pretty soon, so having a container with all the settings preloaded (built and tagged by Jenkins) seems ideal.

For Elasticsearch itself it's possible to bake the index templates into our own container image by starting a container, applying the settings via the REST API, and then shutting it down and packaging it.
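
A rough outline of that "start, configure, repackage" approach (image tags, container names, and the version are examples):

```shell
# Start a throwaway single-node Elasticsearch to seed (version tag is an example).
docker run -d --name es-seed -p 9200:9200 \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.6.0

# Wait until the node answers, then apply the templates.
until curl -fsS http://localhost:9200 >/dev/null; do sleep 2; done
curl -fsS -X PUT http://localhost:9200/_template/my-logs \
  -H 'Content-Type: application/json' -d @index-template.json

# Flush and stop cleanly so the on-disk data is consistent.
curl -fsS -X POST http://localhost:9200/_flush
docker stop es-seed

# Caveat: if the base image declares the data path as a VOLUME,
# docker commit will NOT capture it - copying the data directory in a
# multi-stage build sidesteps that.
docker commit es-seed my-registry/elasticsearch-preloaded:7.6.0
```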

Has anyone at all managed to achieve a similar result with Kibana, given that (as far as I can tell) its settings actually end up in the .kibana_1 index in Elasticsearch?

  • Can you manually create this index and POST the various .json files into it using the Elasticsearch API rather than the Kibana REST endpoint?
  • Has anyone managed to do something clever with Docker Build to bring up an Elasticsearch container, install Kibana and use that as an endpoint target, then strip everything out apart from the Elasticsearch innards? (I tried but Kibana won't start)
  • Is doing a two part build process feasible? i.e. start elastic, start kibana, post the settings and just keep the Elasticsearch image? Seems... arduous...
  • Can I take the .kibana_1 index from instance (a) and import it into the Elasticsearch docker container build process somehow?

Would appreciate any pointers on this, or other ideas - maybe someone's already managed to get this to work?

Thanks

Robert Sweetman

I've made some actual progress on this.

Will post the code in Github when I'm happy with it...

Hi @robertiansweetman,

It seems like you are already pretty far down this road; I'll add a bit of context in case it helps someone. Looking forward to your solution.

  • Can you manually create this index and POST the various .json files into it using the Elasticsearch API rather than the Kibana REST endpoint?

Yes, that's possible - the Kibana container is not stateful; it stores everything in the connected Elasticsearch instance. One thing to keep in mind if you are doing this: the format of the saved objects can change even between minor versions. When you use the REST API, the Kibana server handles this and migrates the saved objects on the fly - if you load the index directly, bypassing Kibana, of course it won't do this. So if you are changing the version, always make sure your objects have the correct format (e.g. by upgrading the container, restarting it, re-exporting the saved objects, and putting them back into the image).
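
That export/import round trip, so Kibana runs its migrations for you, looks roughly like this (host/port and the object types are assumptions):

```shell
# Export saved objects from a running, already-upgraded Kibana...
curl -fsS -X POST http://localhost:5601/api/saved_objects/_export \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"type": ["dashboard", "visualization", "index-pattern"]}' \
  -o saved-objects.ndjson

# ...and import the (now migrated) objects into the image you are building,
# letting the Kibana server validate them on the way in.
curl -fsS -X POST 'http://localhost:5601/api/saved_objects/_import?overwrite=true' \
  -H 'kbn-xsrf: true' \
  --form file=@saved-objects.ndjson
```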

  • Has anyone managed to do something clever with Docker Build to bring up an Elasticsearch container, install Kibana and use that as an endpoint target, then strip everything out apart from the Elasticsearch innards? (I tried but Kibana won't start)

I'm not aware of pre-built solutions on this level, maybe someone else can chime in.

  • Is doing a two part build process feasible? i.e. start elastic, start kibana, post the settings and just keep the Elasticsearch image? Seems... arduous...

It's more complicated, but you gain robustness when upgrading (see the first point).

  • Can I take the .kibana_1 index from instance (a) and import it into the Elasticsearch docker container build process somehow?

Sounds like the Elasticsearch snapshot APIs might be useful here. Still, the same migration problem applies (and possibly other issues in the future) - keeping Kibana in the loop (by using the import/export API) means it can fix problems for you.
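
A snapshot/restore sketch for just the Kibana index (repository name, path, and snapshot name are examples; the location must be listed under path.repo in elasticsearch.yml):

```shell
# Register a filesystem snapshot repository.
curl -fsS -X PUT http://localhost:9200/_snapshot/build_repo \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/snapshots"}}'

# Snapshot only the Kibana system index...
curl -fsS -X PUT 'http://localhost:9200/_snapshot/build_repo/kibana_seed?wait_for_completion=true' \
  -H 'Content-Type: application/json' \
  -d '{"indices": ".kibana*"}'

# ...and restore it into the target cluster later.
curl -fsS -X POST http://localhost:9200/_snapshot/build_repo/kibana_seed/_restore
```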

Thanks @flash1293

I really appreciate your input - it validates some of the points my colleague raised too.

I've got Docker Build to bring up an Elasticsearch container, install Kibana inside it and use its REST endpoint to load the settings, then used the `COPY --from=firstcontainername` feature of multi-stage builds to copy the entire /usr/share/elasticsearch/data contents into a second "untouched" Elasticsearch container.
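
The second half of that looks something like the Dockerfile below - stage name, image tags, and the `--chown` value (uid 1000, gid 0 is what the official image uses for the elasticsearch user) are assumptions, and the seed image stands in for whatever the first build phase produced:

```dockerfile
# Stage 1: the image that had templates and saved objects loaded into it
# earlier in the pipeline (name is an example).
FROM my-registry/elasticsearch-seeded:7.6.0 AS seed

# Stage 2: a clean, untouched Elasticsearch image...
FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.0

# ...that only inherits the seeded data directory from stage 1.
COPY --from=seed --chown=1000:0 \
    /usr/share/elasticsearch/data /usr/share/elasticsearch/data
```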

Appreciate that using the Elasticsearch snapshot APIs would certainly improve the robustness of this, as you've pointed out. Looks like it's on me to post what I've done so far! :wink:

@flash1293 take a look.

I'll probably leave this here for now :slight_smile:
