Hello, I have a question about pipeline-to-pipeline communication and how back pressure is handled.
My setup: our microservices send API-call logs over TCP (with a JSON codec) to Logstash. We have a single input pipeline that applies grok patterns to the data and then forwards events to other pipelines for further processing. For example, whenever a GET API call is made, its log arrives at the input-tcp pipeline, which checks the type of the call and, based on that type, routes the event to one of several downstream pipelines.
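For reference, here is a minimal sketch of the kind of distributor setup I mean, using Logstash's pipeline-to-pipeline feature (pipeline IDs, the port, and the grok pattern are illustrative, not my real config):

```
# pipelines.yml — distributor pattern (names are examples only)
- pipeline.id: input-tcp
  config.string: |
    input { tcp { port => 5044 codec => json } }
    filter {
      grok { match => { "message" => "%{WORD:http_method}" } }
    }
    output {
      if [http_method] == "GET" {
        pipeline { send_to => ["get-processing"] }
      } else {
        pipeline { send_to => ["other-processing"] }
      }
    }
- pipeline.id: get-processing
  config.string: |
    input { pipeline { address => "get-processing" } }
    output { elasticsearch { hosts => ["localhost:9200"] } }
```

In my real setup there are about 10 such downstream pipelines, each receiving events from the input-tcp pipeline's `pipeline` output.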
My question: if the input pipeline takes a batch of 500 events and sends it to the downstream pipelines, and those pipelines take a total of about 10 seconds to output that batch to Elasticsearch, will back pressure from those 10 pipelines stall the output of input-tcp until the older batch of 500 has been flushed to ES?
Logstash version: 17.6
pipeline.workers: 1
pipeline.batch.size: 500
Memory: 24 GB (16 GB JVM heap)
Thanks!