Multiple Docker containers for multiple Logstash pipelines

Currently I run 20 pipelines on one system:
some pull data a few times a day,
some just once a day,
some every few minutes.

I was thinking of running multiple Docker containers on a single system, with the slower pipelines grouped into one container and each very active pipeline running in its own.

The system has 24 CPU cores and 96 GB of RAM.

Is it possible to do this by assigning each container X CPUs and Y memory?

You can adjust things like heap space for Logstash, but CPU is a little harder. You may want to look at what Docker can provide in that respect.
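The official Logstash Docker image passes JVM options through the `LS_JAVA_OPTS` environment variable, so heap can be set per container at run time. A minimal sketch, assuming an image called `my-logstash` built from the official Dockerfile (the name and heap sizes are illustrative):

```shell
# Run one Logstash container with a fixed 4 GB heap.
# Keeping -Xms equal to -Xmx avoids heap resizing at runtime;
# the heap should stay well below any container memory limit.
docker run -d --name busy-pipeline \
  -e "LS_JAVA_OPTS=-Xms4g -Xmx4g" \
  my-logstash
```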

The busiest pipeline ingests about 3 million records in 24 hours, only about 1.2 GB of data.
Should one CPU or one core per pipeline work?
I do have multiple systems where I can do this.
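For a rough sense of scale, those numbers work out to a very modest sustained rate, which a quick back-of-the-envelope calculation shows:

```shell
# Average throughput of the busiest pipeline: 3 million records per day.
records_per_day=3000000
seconds_per_day=86400
echo "$(( records_per_day / seconds_per_day )) records/second"  # ~34 records/second
```

At roughly 34 records/second (and about 14 KB/s of data), a single core is likely plenty unless the filters are unusually heavy; actual load depends on grok patterns, lookups, and output latency.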

I can put all the smaller pipelines together with 2-4 CPUs and that should do the job.
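Under that plan, the split might look like one dedicated container for the busy pipeline and one shared container for the rest, using Docker's `--cpus` and `--memory` flags. A sketch (container names, limits, heap sizes, and the mounted `pipelines.yml` path are assumptions, not recommendations):

```shell
# Busy pipeline: its own container, 2 CPUs and 8 GB of RAM.
docker run -d --name ls-busy \
  --cpus=2 --memory=8g \
  -e "LS_JAVA_OPTS=-Xms4g -Xmx4g" \
  my-logstash

# All smaller pipelines together: 4 CPUs and 16 GB of RAM,
# with a pipelines.yml that defines each pipeline inside this one instance.
docker run -d --name ls-small \
  --cpus=4 --memory=16g \
  -e "LS_JAVA_OPTS=-Xms8g -Xmx8g" \
  -v /opt/logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml \
  my-logstash
```

Running the small pipelines via `pipelines.yml` in one Logstash instance keeps JVM overhead down compared to one container per pipeline.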

When you start thinking of multiple containers, you start thinking about orchestration. What are you using to start the containers?
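If plain `docker run` gets unwieldy, Docker Compose is a common next step: the containers and their resource limits live in one file. A sketch, with illustrative service names and limits:

```shell
# Declare both Logstash containers and their limits in a Compose file,
# then start everything with one command.
cat > docker-compose.yml <<'EOF'
services:
  ls-busy:
    image: my-logstash
    cpus: 2
    mem_limit: 8g
  ls-small:
    image: my-logstash
    cpus: 4
    mem_limit: 16g
EOF
docker compose up -d
```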

Docker does have its own resource usage constraint system, which uses the Linux Control Groups subsystem.
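Concretely, flags like `--cpus` and `--memory` are enforced through cgroups, and the effective limits can be read back with `docker inspect`. A sketch (the image name is illustrative):

```shell
# Start a container with 1.5 CPUs and 4 GB of memory.
docker run -d --name limited --cpus=1.5 --memory=4g my-logstash

# Verify the limits Docker recorded:
# --cpus=1.5 is stored as 1500000000 NanoCpus, 4g as 4294967296 bytes.
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited
```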

If you want to look at monitoring, and you're using Prometheus, you could look at the likes of cAdvisor (https://github.com/google/cadvisor).
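cAdvisor itself runs as a container that reads the host's cgroup data. The invocation below follows the pattern in its README (the image tag and port are assumptions and may differ between releases):

```shell
# Run cAdvisor with read-only mounts of the host paths it needs
# to collect per-container CPU and memory metrics.
docker run -d --name=cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  gcr.io/cadvisor/cadvisor
```

It then serves a web UI on port 8080 and exposes a `/metrics` endpoint that Prometheus can scrape.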

I am in the testing phase at this time.
I used docker build to create my image from the Dockerfile provided by Elastic,
and docker run to start it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.