Production level Elasticsearch cluster using Docker


I know that the industry is dockerising everything - and they would dockerise Docker if they could - but the ease of commissioning a new environment with Docker, especially in a DR scenario, is very appealing.

So my questions:

  1. What is the official current stance on running a badass production cluster on Docker? Is it good practice? Are there known issues?

  2. Have you had any experience with this, or do you know of colleagues who have?


We are using the Elasticsearch, Logstash and Kibana (ELK) stack with Docker in production to monitor several application servers. They are not badass at all, but we have ~2 GB of log data + ~1 GB of Topbeat and Packetbeat statistics per day.

Everything has been working well so far. We have had no Docker-specific problems apart from the initial configuration, which wasn't too painful, I should say.

You can check out my simple ready-to-go Docker configuration for ELK stack:
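For a rough idea of what such a setup looks like, here is a minimal docker-compose sketch (the image tags, ports, and logstash.conf path are illustrative assumptions, not my actual configuration):

```yaml
# docker-compose.yml (sketch) - single-node ELK for log monitoring.
# Image tags and ports below are assumptions; pin the versions you run.
version: "2"
services:
  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"
  logstash:
    image: logstash:2.4
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    volumes:
      # Mount your pipeline config from the host (path is an assumption).
      - ./logstash.conf:/etc/logstash/conf.d/logstash.conf
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:4.6
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

With compose v2 networking, Logstash and Kibana can reach Elasticsearch by the service name `elasticsearch`, which is what the official Kibana image expects by default.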

You can already Dockerize Docker ^^

If you are comfortable with Docker (and Swarm), then by all means use Docker; there is no restriction.

The only thing you must pay attention to is how you configure your cluster if you care about HA:

  • X data nodes
  • At least 3 master-eligible nodes (1 active, 2 passive)
  • X client nodes for the HTTP API (non-master, non-data nodes)
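The three node roles above map to a handful of elasticsearch.yml settings. A sketch for Elasticsearch 2.x (the discovery setting names changed in later major versions, so adjust for your release):

```yaml
# elasticsearch.yml fragments, one per node type (Elasticsearch 2.x).

# Data node: holds shards, never elected master.
node.master: false
node.data: true

# Master-eligible node: run at least 3 of these so a quorum of 2
# survives the loss of one; quorum avoids split-brain.
node.master: true
node.data: false
discovery.zen.minimum_master_nodes: 2

# Client/HTTP node: coordinates search and indexing requests only.
node.master: false
node.data: false
```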

I don't see a single best approach; you can:

  • Have 1 base image and 3 derived images, one per configuration case (each with its own elasticsearch.yml file)
  • Use only 1 image, embed the different elasticsearch.yml files, and use a docker-compose environment variable to select the node type you want
  • Use only 1 image, and mount the appropriate elasticsearch.yml for each use case
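The second option can be sketched as an entrypoint script baked into the image. The `NODE_TYPE` variable and template paths are assumed names (you would set `NODE_TYPE` per service in the `environment:` section of docker-compose):

```shell
#!/bin/sh
# Entrypoint sketch for "one image, env var selects node type".
# NODE_TYPE is a hypothetical variable set by docker-compose;
# it defaults to a data node when unset.
NODE_TYPE="${NODE_TYPE:-data}"
case "$NODE_TYPE" in
  master|data|client) ;;
  *) echo "unknown NODE_TYPE: $NODE_TYPE" >&2; exit 1 ;;
esac
# Select the embedded template matching the requested role
# (paths are illustrative).
CONF="/usr/share/elasticsearch/templates/elasticsearch-${NODE_TYPE}.yml"
echo "selected config: $CONF"
# Inside the real image you would then copy it into place and start ES:
# cp "$CONF" /usr/share/elasticsearch/config/elasticsearch.yml
# exec bin/elasticsearch
```

The third option needs no entrypoint logic at all: each compose service mounts its own elasticsearch.yml over the default one with a `volumes:` entry.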