We are developing a distributed, Docker-based server application. We run several Docker containers on each host. The idea is to put a Logstash instance (as a Docker container) on each host, publishing to a single Elasticsearch cluster, with a Kibana frontend sitting on top of the Elasticsearch cluster. Since I am new to Logstash and Elasticsearch, I have some questions:
1.) Is this setup possible?
2.) Is this setup "good practice"?
3.) Can I monitor my logs in the Kibana frontend in the same way as with a single Logstash instance, or do I have to partition the data in some way?
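To make this concrete, here is a rough sketch of the per-host Logstash pipeline I have in mind (the log paths, cluster addresses and index name are only placeholders, not our real setup):

```
input {
  # assumption: the service containers write JSON log lines to files that
  # are volume-mounted into the Logstash container
  file {
    path => "/var/log/app/*.log"
    codec => "json"
  }
}

output {
  # every per-host instance points at the same Elasticsearch cluster,
  # so Kibana would see one shared set of logstash-* indices
  elasticsearch {
    hosts => ["es-node-1:9200", "es-node-2:9200"]   # placeholder addresses
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```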
We want to keep the setup as simple as possible - with HA and scaling in mind.
Our scenario is:
Each processing host runs the same services (containers). These services are stateless; if one host fails, it doesn't matter: the next request will be routed to another instance.
There are database hosts which are (of course) not stateless, but they scale well (Cassandra, Elasticsearch).
So in my opinion we have two choices: use a central Logstash instance (and make it highly available) or deploy one instance per host.
The second approach seems much simpler to me:
If the host goes down, the Logstash instance on it is dead too, but running it on a different machine would not make any difference; the result is the same: no logs from this server.
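For the second approach, I picture something like this on every processing host (a sketch only; the image tags, volume names and pipeline path are assumptions, not our actual compose file):

```yaml
# docker-compose sketch for one processing host
version: "2"
services:
  app:
    image: our-registry/processing-service:latest   # hypothetical service image
    volumes:
      - app-logs:/var/log/app           # service writes its JSON logs here
  logstash:
    image: logstash:5                   # official Logstash image, tag is an assumption
    command: logstash -f /etc/logstash/conf.d/pipeline.conf
    volumes:
      - app-logs:/var/log/app:ro        # read the service logs
      - ./pipeline.conf:/etc/logstash/conf.d/pipeline.conf:ro
volumes:
  app-logs:
```

Each host would run the same compose file; only the Elasticsearch cluster is shared.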