Logstash with multiple ports and multiple config files does not work properly

Hello. I started using ELK with:
spring boot 3.1.0
logstash-logback-encoder 7.4
janino 3.1.12
ELK 8.12.1-1 installed on a CentOS 7 Linux server (the ELK server)

My scenario is:
I use two projects (base and auth) and set up logback for both:
auth sends logs to the ELK server on port 5045
base sends logs to the ELK server on port 5044

My logback-spring.xml is:

        <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>${logstash-host:- }:${logstash-port:- }</destination>
            <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            </encoder>
        </appender>
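For reference, the `logstash-host` and `logstash-port` variables can be resolved from the Spring environment with `<springProperty>`, so each application can point at its own port (a hedged sketch, not from the original post; the property names `logstash.host` and `logstash.port` are assumptions, set per application in application.yml):

```xml
<configuration>
    <!-- Pull host/port from the Spring environment; defaults are examples -->
    <springProperty name="logstash-host" source="logstash.host" defaultValue="localhost"/>
    <springProperty name="logstash-port" source="logstash.port" defaultValue="5044"/>

    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${logstash-host}:${logstash-port}</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
</configuration>
```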

The Logstash input was split across two files (two config files, each with its own input port and its own index):
auth.config:

input {
	tcp {
		codec => json_lines
		port => 5045
	}
...
output {
...
index => "auth"
...

base.config

input {
	tcp {
		codec => json_lines
		port => 5044
	}
...
output {
...
index => "base"
...
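To check which pipeline each port actually feeds, a test event can be sent to a port by hand (a minimal Python sketch, not from the original post; the host and record fields are examples, the ports match the configs above):

```python
import json
import socket

def build_json_line(record):
    # The json_lines codec expects one JSON document per newline-terminated line
    return (json.dumps(record) + "\n").encode("utf-8")

def send_log(host, port, record):
    # Open a TCP connection and send a single JSON line, roughly what
    # LogstashTcpSocketAppender does for each event
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_json_line(record))

# Example: send a test event to the "auth" input (hypothetical host)
# send_log("elk.example.com", 5045, {"app": "auth", "message": "test"})
```

If an event sent only to port 5045 then shows up in the `base` index, the two configs are being merged.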

Problem:
Every log created by auth also shows up in base, and vice versa.
In Kibana, for auth I see the _index correctly (auth), but the app name was base and the port was 5044.
After a lot of research, another idea came to my mind:
I merged the auth and base files:

input {
	tcp {
		codec => json_lines
		port => 5044
		type => "base"
	}
	tcp {
		codec => json_lines
		port => 5045
		type => "auth"
	}
}
output {
	if [type] == "base" {
		index => "base"
		...
	} else if [type] == "auth" {
		index => "auth"
		.....

and everything works fine.
I hope this bug will be fixed soon, so that the same problem does not occur when using two separate config files.

Welcome to the community!

I don't see a bug here or didn't understand the problem.

If you want to separate data into different outputs, either use IFs in the output or put separate .conf files in different pipelines. An additional option is to set index names based on metadata, e.g.: index => "%{[@metadata][index_prefix]}-%{+YYYY.MM.dd}". Beats use something like: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}". The datetime part is not mandatory.
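The separate-pipelines option means listing each .conf file in pipelines.yml, so Logstash runs each one in isolation and events from one port never reach the other file's output (a hedged sketch; the pipeline IDs and paths are assumptions):

```yaml
# pipelines.yml: each pipeline has its own inputs, filters, and outputs
- pipeline.id: auth
  path.config: "/etc/logstash/conf.d/auth.config"
- pipeline.id: base
  path.config: "/etc/logstash/conf.d/base.config"
```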

The common way is to use IFs and "type" with a single pipeline.

Hello, and thank you for your reply.
I created two config files for logging on a server, set a different input port in each file, and shared the same Elasticsearch server. I also set a different index in each output, and I expected the input from each port to go to the index set in that file's output. Note that each file has its own port and its own output index. At runtime, it seemed that behind the scenes these two files were merged, so entries arriving on each port were sent to both indexes, just as if the contents of the two files had been read and concatenated into a single config file. If that is the intended behavior, then it makes little sense to define multiple config files when they end up conflicting with each other, unless they are separated by a type. I hope I have expressed my point clearly; otherwise, let me know and I will try to explain it better.

Yes, under a single pipeline, everything is merged. It's by design. Explained here, here and here.

Your logic makes sense in some cases: a single node, 3-5 nodes, etc. The enterprise environment is a little different. You may have data transformations for FB (Filebeat), HTTP, syslog, and JDBC from a single "source", a department inside a biiiig company. When 2-5 different people are working on the ETL integration, one big .conf file can become a mess. Also, don't forget that LS supports Ruby code, executing files, and so on. A single file with more than several hundred lines in the filter section is a mess, I mean a really big mess.
