Multiple Logstash instances on one Elasticsearch cluster

We are developing a distributed, Docker-based server application and run several Docker containers on each host. The idea is to put a Logstash instance (as a Docker container) on each host, which publishes to a single Elasticsearch cluster. A Kibana frontend should sit on top of the Elasticsearch cluster. Since I am new to Logstash and Elasticsearch, I have some questions:
1. Is this setup possible?
2. Is this setup "good practice"?
3. Can I monitor my logs in the Kibana frontend in the same way as with a single Logstash instance, or do I have to partition the data in some way?
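For context, here is roughly what the pipeline configuration of each per-host Logstash instance might look like. This is only a sketch: the gelf input assumes the containers log via Docker's gelf log driver (any input would work the same way), the Elasticsearch hostnames are placeholders, and `docker_host` is a made-up field name used to tell the instances apart in Kibana.

```
input {
  # One option: receive logs from local containers started with Docker's gelf log driver
  gelf {
    port => 12201
  }
}

filter {
  # Tag every event with the host it came from, so Kibana can filter per host
  mutate {
    add_field => { "docker_host" => "processing-host-01" }
  }
}

output {
  # Every instance writes to the same cluster and the same daily indices
  elasticsearch {
    hosts => ["es-node-1:9200", "es-node-2:9200", "es-node-3:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```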

Thanks in advance for any reply.

  1. Yes
  2. It's sane; if it works for you, then go for it!
  3. Can you elaborate more?

We want to keep the setup as simple as possible, with HA and scaling in mind.
Our scenario is:

  1. Each processing host runs the same services (containers). These services are stateless; if one host fails, it does not matter: the next request will be routed to another instance.
  2. There are database hosts which are (of course) not stateless, but which scale well (Cassandra, Elasticsearch).
  3. So in my opinion we have two choices: use a central Logstash instance (and make it highly available) or deploy one instance per host.

The second approach seems much simpler to me:

  1. If the host goes down, the Logstash instance on it is dead too, but running Logstash on a different machine would not make any difference: either way, there are no more logs from this server.
  2. We don't have to deal with HA issues for Logstash itself.
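To make the per-host option concrete, here is a minimal sketch of how the Logstash container could be added on each processing host; the image tag, paths and ports are assumptions and would need to be adjusted to your setup.

```yaml
# docker-compose.yml fragment: one logstash container per processing host
logstash:
  image: logstash:2.4              # placeholder tag, use whatever version you run
  command: logstash -f /etc/logstash/conf.d/
  volumes:
    - ./logstash/conf.d:/etc/logstash/conf.d:ro   # the pipeline config sketched above
  ports:
    - "12201:12201/udp"            # gelf input from the local containers
  restart: always                  # restart logstash if only the container dies
```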

If Logstash itself could bring the machine down, that would of course be a problem...
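If that is a concern, the container's resources can be capped so a misbehaving Logstash cannot starve the host. A sketch, building on the compose fragment above; the exact values are guesses, and the heap variable differs between Logstash versions.

```yaml
# hypothetical additions to the logstash service above
logstash:
  environment:
    - LS_HEAP_SIZE=512m            # caps the JVM heap (newer versions use LS_JAVA_OPTS instead)
  mem_limit: 1g                    # hard memory cap for the whole container
```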

Ah, OK.
Then yes, that makes sense. Logstash won't bring down a machine :slight_smile: