Our server setup looks like this:
We've got an independent Ubuntu PC in our IT main room where I display the data (latest errors and so on...) and where Logstash, Elasticsearch and Kibana are running. I'll call this the "local PC" from now on.
Then we've got 3-4 servers running many Docker containers (APIs, frontend webpages, databases...).
So what we're currently doing is running a cron job on the local PC that fetches all the log files every 5 minutes (SSH into the servers, `docker cp` the *.log files out of the containers, then `scp` them to the local PC in a second step).
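For context, the current fetch job is roughly like this (a sketch - host names, container names and paths are placeholders, not our real ones):

```sh
# /etc/cron.d/fetch-logs - runs every 5 minutes on the local PC
# */5 * * * * loguser /usr/local/bin/fetch-logs.sh

#!/bin/sh
for host in server1 server2 server3; do
  # first copy the logs out of the container onto the remote host itself...
  ssh "$host" 'docker cp my_container:/var/log/apache2/. /tmp/docker-logs/'
  # ...then pull them down to the local PC, where Logstash watches this directory
  scp -r "$host:/tmp/docker-logs/" "/var/logstash/input/$host/"
done
```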
From there, Logstash pipelines watch the downloaded log files and put the data into Elasticsearch.
Which works fine - if you only have 1 or 2 pipelines and the log files are fairly small. :-/
But that isn't our case. We also sometimes run 2 Docker containers for the same project, which we switch between for zero-downtime deployments - and the local PC doesn't know which one is actually live, so it downloads the old Apache logs from the inactive container again and again. And well, yes - this setup is **** : Logstash crashes with Java heap errors a lot, and this weekend even Elasticsearch gave up. We're using logrotate on the Apache logs, but even then they grow beyond 200 MB.
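(Right now the rotation is time-based; I assume a size-based rule would cap the files earlier. A sketch of what I mean, with placeholder paths and values:)

```
# hypothetical /etc/logrotate.d/apache2 - rotate by size instead of daily
/var/log/apache2/*.log {
    size 50M          # rotate as soon as a file exceeds 50 MB
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
}
```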
So what I'm thinking about is:
A) Installing Filebeat inside the Docker containers, so they send the data directly to the local machine. I hope it works like this and can easily be set up in the deployment pipeline where the Docker containers are built.
B) Mounting the log directories of the containers to the host machine and installing Filebeat there, which then ships the files to our local PC.
C) Doing it whatever way you suggest (as long as we don't need to change our system landscape for it).
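For option B, this is roughly what I have in mind - a sketch only, the mount path, hostname and port are made up for illustration:

```yaml
# filebeat.yml on the host machine
filebeat.inputs:
  - type: log
    paths:
      # container log dirs mounted to the host, e.g. via
      # docker run -v /var/docker-logs/myapp:/var/log/apache2 ...
      - /var/docker-logs/*/apache2/*.log

output.logstash:
  # the "local PC" running Logstash
  hosts: ["local-pc.internal:5044"]
```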
However: if this works, is it possible to separate the different Beats inputs in the Logstash configs (like by index)? An Apache log format needs to be parsed differently than a Symfony Monolog entry or a MySQL slow-query log. I didn't really see it in the docs - only by using different ports.
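What I'm imagining is something like this - assuming each Filebeat config can attach a custom field (I'm calling it `fields.log_type` here, that name is my own) that Logstash can then branch on:

```
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "apache" {
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  } else if [fields][log_type] == "monolog" {
    # Monolog line format: [timestamp] channel.LEVEL: message
    grok { match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA:channel}\.%{DATA:level}: %{GREEDYDATA:msg}" } }
  }
}
```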
As I said, all in all we've got around 20 different containers for around 10 separate websites - so I would like to use separate Elasticsearch indices as well. Or is it possible to separate them afterwards in Kibana (like the current "path" field), so I can at least group them into Apache, Monolog and slow-query logs and then filter by "path"?
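For the index separation, I'm hoping the Elasticsearch output can interpolate such a field into the index name - again assuming the hypothetical `fields.log_type` from the Filebeat side:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per log type and day, e.g. apache-2017.05.30
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}
```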
Those are my questions so far.