Does it make sense to pipeline a few Logstash instances one after another?

We would like to read data using our own input plugin. The data that we receive should then go through the log4j input plugin.
We were thinking of running two Logstash instances in sequence: the first Logstash would send its output over TCP, and the second Logstash would read it via the log4j input plugin.
WDYT?

You could do it all with different TCP ports on your inputs and outputs: have one instance that does the first processing step and outputs to localhost:9999, then a second instance that reads from that port and processes the output.
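
A minimal sketch of that chaining, assuming two separate Logstash instances on the same host, port 9999 chosen arbitrarily, and json_lines as the codec so events survive the hop intact:

# First instance (pipeline-a.conf): runs the custom input and forwards events over TCP.
# "your_custom_input" is a placeholder for your own plugin.
input {
  your_custom_input { }
}
output {
  tcp {
    host => "127.0.0.1"
    port => 9999
    codec => json_lines   # one JSON document per line
  }
}

# Second instance (pipeline-b.conf): reads from that port and does the remaining processing.
input {
  tcp {
    port => 9999
    codec => json_lines
  }
}
output {
  elasticsearch { }   # defaults to localhost:9200
}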

Thanks Joe,
I wonder
(1) if that's a common practice when you need sequential inputs
(2) if it degrades the resilience LS provides

We would like to read data using our own input plugin. The data that we receive should then go through the log4j input plugin.

Why?

We were thinking of running two Logstash instances in sequence: the first Logstash would send its output over TCP, and the second Logstash would read it via the log4j input plugin.

I don't see how that would work. The log4j input expects to receive serialized Java objects over the wire and the tcp output won't be able to produce that.

Our input works as follows: it consumes a message from Kafka that contains an S3 path, and then we read the data from that S3 path. The S3 data is a log4j log; we would like to parse it and generate structured data from it so we can persist it in ES.
How do you suggest handling this scenario?

Okay, but then the log4j input won't be useful. Ideally you'd be able to read the file paths from Kafka and feed them to the s3 input, which would fetch those files from S3, but that isn't possible out of the box. I'm not sure what the best solution would be. If log file paths aren't being sent very often (or you have modest latency requirements), maybe you could generate configuration files with s3 inputs pointing to the file paths you received via Kafka. If not, I suspect you'll need a custom plugin or script that reads an event from Kafka, fetches the file from S3, and stores it in a directory that Logstash monitors.
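
If you go the download-into-a-watched-directory route, the Logstash side could be as simple as this sketch; the directory path is a hypothetical location your downloader writes to, and the parsing itself would live in the filter section:

input {
  file {
    path => "/var/spool/logstash/s3-downloads/*.log"   # hypothetical directory the downloader fills
    start_position => "beginning"
  }
}
# filters that parse the plaintext log4j lines would go here (see the grok sketch further down)
output {
  elasticsearch { }
}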

What do you think about writing a custom input that receives the path from Kafka (and additional metadata), then reads the log from S3 and sends it to a grok filter that parses the log4j log?
Can I achieve the same structure with grok as I would get from the log4j input?

Again, the log4j input deserializes Java objects from a binary stream (produced by a SocketAppender). It is not useful for parsing plaintext log files.
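
For plaintext log4j output, grok is the usual tool. A minimal sketch, assuming the common PatternLayout "%d{ISO8601} %-5p [%t] %c - %m%n"; adjust the pattern to whatever layout your logs actually use:

filter {
  grok {
    # timestamp, level, thread, logger class, then the free-text message
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} +\[%{DATA:thread}\] %{JAVACLASS:logger} - %{GREEDYDATA:msg}"
    }
  }
  date {
    # move the extracted timestamp into @timestamp
    match => ["timestamp", "ISO8601"]
  }
}

Unlike the log4j input, which gets the fields for free from the serialized LoggingEvent, with grok the field structure is whatever you name in the pattern, so you can make it match the fields you want in ES.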