How to use Logstash to send logs in real time

Instead of running Logstash manually, specifying the configuration file with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf, how can I instruct Logstash to run in the background and collect logs all the time?

In conf.d/logstash.conf I have the following content:

input {
  tcp {
    port => 5000
    codec => json
  }
}

output {

  elasticsearch {
    hosts => ["elastic_search_ip_address:9200"]
    index => "my-log"
  }
}

My logstash.yml file is:

http.host: 0.0.0.0
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.hosts: ["elastic_search_ip_address:9200"]

Logstash is installed on a server running Debian bookworm/sid; Elasticsearch and Kibana are on another one.

The purpose is to watch the logs for errors in real time.

Regards,

You need to run it as a service.

If you installed it using the deb package, the service is already installed; use systemctl start logstash to start it and systemctl enable logstash to have it start when the server reboots.
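Assuming a standard deb install on a systemd host, that would be:

```shell
# Start Logstash now and have systemd start it at every boot.
sudo systemctl start logstash
sudo systemctl enable logstash

# Verify it stayed up.
sudo systemctl status logstash
```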

Keep in mind that when Logstash runs as a service it uses the pipelines.yml file to run your pipelines, so you will need to remove path.config from your logstash.yml file and configure your pipeline in pipelines.yml.

Also, the service runs as the logstash user; if you previously ran Logstash as root or using sudo, you may have permission issues that you will need to fix.
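If you did run it as root before, resetting ownership to the service user usually fixes this; the paths below assume the default deb layout:

```shell
# The deb package runs the service as logstash:logstash; earlier
# root/sudo runs can leave root-owned files in these directories.
sudo chown -R logstash:logstash /var/log/logstash
sudo chown -R logstash:logstash /var/lib/logstash
```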

Thank you very much for your prompt reply. I made the changes below, but I'm not sure they are correct, because we still don't receive any logs from the server even though the services on it are running.

/etc/logstash/pipelines.yml

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

# - pipeline.id: main
#  path.config: "/etc/logstash/conf.d/*.conf"

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/"


/usr/share/logstash/pipeline/logstash.conf

input {
  tcp {
    port => 5000
    codec => json
  }
}

output {

  elasticsearch {
    hosts => ["elastic_search_ip:9200"]
    index => "my-log"
  }
}

Also, I double-checked the permissions and can confirm the user and group are logstash:logstash.

Do you know what the problem is?

Regards,

I do not see anything wrong; you need to share the Logstash logs to see what is happening.

Restart the service to get fresh logs and share the logs from /var/log/logstash/logstash-plain.log.

Hi leandrojmp, thank you for your assistance. Below you can find the latest log after restarting Logstash.

sudo tail -f /var/log/logstash/logstash-plain.log
[2024-04-03T08:18:34,397][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (8.13.0) {:es_version=>8}
[2024-04-03T08:18:34,399][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-04-03T08:18:34,413][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-04-03T08:18:34,430][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x2ccf007 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-03T08:18:35,037][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.61}
[2024-04-03T08:18:35,054][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2024-04-03T08:18:35,074][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
[2024-04-03T08:18:36,412][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2024-04-03T08:18:37,101][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:".monitoring-logstash"}
[2024-04-03T08:18:37,109][INFO ][logstash.runner          ] Logstash shut down.

Regards,
Ivo

/var/log/logstash/

Have you changed ownership on the directory and subdirectories?
chown -R logstash:logstash /var/log/logstash/

I don't think your Logstash is running correctly. Is this the entire log? You should have more lines in the log; please share all of them.

Also, these logs are from April 3rd; you need fresh logs. Check the answer from @Rios regarding the permissions of the /var/log/logstash path and of the logstash-plain.log file.

If you ran Logstash as the root user before, these permissions may not be right.

Thank you @leandrojmp ,

I've already checked the permissions of the folders and files, and they are fine.

Hi @leandrojmp ,

I ran systemctl restart with sudo, and the permissions of the files and folders seem correct: logstash:logstash. As you can see, I ran tail -f during the service restart, and those were all the lines in the log, from top to bottom.

The logs are from 2024-04-03; they are not recent. You need fresh logs, from today.

Restart the service again, open the file, and copy the logs generated today.

This is strange, because all the logs are from then and there aren't any new logs with the current date. Is there any other place to check for logs, or a configuration parameter that would show whether the log path is custom?

Did you check the permissions of /var/log/logstash and the logstash-plain.log file?

If you ran Logstash as the root user or using sudo before, the permissions will be wrong.

What is the output of systemctl status logstash? If the permissions are wrong, the service will not work.


Yes, they seem correct. Yesterday I also ran chown recursively on the folder. My logstash-plain.log has the correct permissions, but nothing has been written to it since "Apr 3":

drwxr-xr-x   3 logstash logstash      4.0K Apr  9 14:24 logstash

ls -lah /var/log/logstash/logstash-plain.log 
-rw-r--r-- 1 logstash logstash 7.6M Apr  3 08:18 /var/log/logstash/logstash-plain.log

After restarting, the service status is "active", and there are some JSON parse errors because what I am sending to the input is not valid JSON:

systemctl status logstash.service 
● logstash.service - logstash
     Loaded: loaded (/lib/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-04-10 13:37:57 UTC; 36min ago
   Main PID: 3994245 (java)
      Tasks: 55 (limit: 2302)
     Memory: 720.0M
        CPU: 1min 37.414s
     CGroup: /system.slice/logstash.service
             └─3994245 /usr/share/logstash/jdk/bin/java -Xms512m -Xmx512m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=fi>

Apr 10 14:14:14 security logstash[3994245]: [2024-04-10T14:14:14,672][WARN ][logstash.codecs.jsonlines][main][e6d8be9da7f73d52c3bcda26708d79587022fb657449e39a23821cf26fa4b952] JSON parse error, original data no>
Apr 10 14:14:14 security logstash[3994245]: [2024-04-10T14:14:14,673][WARN ][logstash.codecs.jsonlines][main][e6d8be9da7f73d52c3bcda26708d79587022fb657449e39a23821cf26fa4b952] JSON parse error, original data no>
Apr 10 14:14:14 security logstash[3994245]: [2024-04-10T14:14:14,673][WARN ][logstash.codecs.jsonlines][main][e6d8be9da7f73d52c3bcda26708d79587022fb657449e39a23821cf26fa4b952] JSON parse error, original data no>
Apr 10 14:14:29 security logstash[3994245]: [2024-04-10T14:14:29,667][INFO ][logstash.codecs.jsonlines][main][e6d8be9da7f73d52c3bcda26708d79587022fb657449e39a23821cf26fa4b952] ECS compatibility is enabled but `>
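As a sanity check, a line can be validated locally as JSON before it is shipped to the tcp input; the sample event below is made up, and the real payloads come from the application:

```shell
# Hypothetical sample event; real payloads come from the application.
sample='{"message":"disk full","level":"ERROR"}'

# The tcp input switched to the json_lines codec, which expects one
# JSON object per line, so check that the line parses as JSON.
printf '%s\n' "$sample" \
  | python3 -c 'import json, sys; [json.loads(l) for l in sys.stdin]' \
  && echo "valid JSON lines"
```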

Regards,
Ivo

I found the logs. They seem to be located in /usr/share/logstash:

[2024-04-10T14:29:22,482][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2024-04-10T14:29:22,903][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2024-04-10T14:29:23,561][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:".monitoring-logstash"}
[2024-04-10T14:29:26,695][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-04-10T14:29:27,591][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2024-04-10T14:29:27,631][INFO ][logstash.runner          ] Logstash shut down.
[2024-04-10T14:29:46,857][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-04-10T14:29:46,864][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.13.0", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-linux]"}
[2024-04-10T14:29:46,872][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms512m, -Xmx512m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-04-10T14:29:46,875][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-04-10T14:29:46,876][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-04-10T14:29:47,745][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2024-04-10T14:29:48,369][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2024-04-10T14:29:48,370][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2024-04-10T14:29:48,527][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-04-10T14:29:49,136][INFO ][org.reflections.Reflections] Reflections took 173 ms to scan 1 urls, producing 132 keys and 468 values
[2024-04-10T14:29:49,335][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-04-10T14:29:49,434][INFO ][logstash.javapipeline    ] Pipeline `.monitoring-logstash` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-04-10T14:29:49,471][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elastic_ip:9200"]}
[2024-04-10T14:29:49,484][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-04-10T14:29:49,486][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic_ip:9200/]}}
[2024-04-10T14:29:49,505][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elastic_ip:9200"]}
[2024-04-10T14:29:49,516][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic_ip:9200/"}
[2024-04-10T14:29:49,517][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (8.13.0) {:es_version=>8}
[2024-04-10T14:29:49,517][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-04-10T14:29:49,522][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic_ip:9200/]}}
[2024-04-10T14:29:49,541][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-04-10T14:29:49,552][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic_ip:9200/"}
[2024-04-10T14:29:49,553][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.13.0) {:es_version=>8}
[2024-04-10T14:29:49,553][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-04-10T14:29:49,572][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"security-service-log"}
[2024-04-10T14:29:49,575][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-04-10T14:29:49,579][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x578fb018 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-10T14:29:49,603][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x2b95b4bc /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-10T14:29:49,606][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2024-04-10T14:29:50,319][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.74}
[2024-04-10T14:29:50,318][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.71}
[2024-04-10T14:29:50,341][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2024-04-10T14:29:50,364][INFO ][logstash.inputs.tcp      ][main] Automatically switching from json to json_lines codec {:plugin=>"tcp"}
[2024-04-10T14:29:50,379][INFO ][logstash.codecs.jsonlines][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-04-10T14:29:50,505][INFO ][logstash.inputs.tcp      ][main][e6d8be9da7f73d52c3bcda26708d79587022fb657449e39a23821cf26fa4b952] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enabled=>false}
[2024-04-10T14:29:50,507][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-04-10T14:29:50,527][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}

I still don't have a solution to the problem. I changed the default log path in logstash.yml to: path.logs: "/usr/share/logstash/logs/"

Now I see a loop in the logs:

[2024-04-12T12:41:16,325][ERROR][logstash.javapipeline    ][secure-logs][bceab5a973e034cf6dec3df5939ccc6470975a15689d3af71584ed2a23d7e991] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:secure-logs
  Plugin: <LogStash::Inputs::Tcp codec=><LogStash::Codecs::JSON id=>"json_9b2dbee3-2303-4349-b708-7af340c1de5b", enable_metric=>true, charset=>"UTF-8">, port=>5000, id=>"bceab5a973e034cf6dec3df5939ccc6470975a15689d3af71584ed2a23d7e991", enable_metric=>true, host=>"0.0.0.0", mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_enabled=>false, ssl_client_authentication=>"required", ssl_verify=>true, ssl_verification_mode=>"full", ssl_key_passphrase=><password>, tcp_keep_alive=>false, dns_reverse_lookup_enabled=>true>
  Error: event executor terminated
  Exception: Java::JavaUtilConcurrent::RejectedExecutionException
  Stack: io.netty.util.concurrent.SingleThreadEventExecutor.reject(io/netty/util/concurrent/SingleThreadEventExecutor.java:934)
io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(io/netty/util/concurrent/SingleThreadEventExecutor.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor.addTask(io/netty/util/concurrent/SingleThreadEventExecutor.java:344)
io.netty.util.concurrent.SingleThreadEventExecutor.execute(io/netty/util/concurrent/SingleThreadEventExecutor.java:836)
io.netty.util.concurrent.SingleThreadEventExecutor.execute0(io/netty/util/concurrent/SingleThreadEventExecutor.java:827)
io.netty.util.concurrent.SingleThreadEventExecutor.execute(io/netty/util/concurrent/SingleThreadEventExecutor.java:817)
io.netty.channel.AbstractChannel$AbstractUnsafe.register(io/netty/channel/AbstractChannel.java:483)
io.netty.channel.SingleThreadEventLoop.register(io/netty/channel/SingleThreadEventLoop.java:89)
io.netty.channel.SingleThreadEventLoop.register(io/netty/channel/SingleThreadEventLoop.java:83)
io.netty.channel.MultithreadEventLoopGroup.register(io/netty/channel/MultithreadEventLoopGroup.java:86)
io.netty.bootstrap.AbstractBootstrap.initAndRegister(io/netty/bootstrap/AbstractBootstrap.java:323)
io.netty.bootstrap.AbstractBootstrap.doBind(io/netty/bootstrap/AbstractBootstrap.java:272)
io.netty.bootstrap.AbstractBootstrap.bind(io/netty/bootstrap/AbstractBootstrap.java:268)
io.netty.bootstrap.AbstractBootstrap.bind(io/netty/bootstrap/AbstractBootstrap.java:253)
org.logstash.tcp.InputLoop.run(org/logstash/tcp/InputLoop.java:86)
jdk.internal.reflect.GeneratedMethodAccessor45.invoke(jdk/internal/reflect/GeneratedMethodAccessor45)
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:568)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:300)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:164)
RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-tcp-6.4.1-java/lib/logstash/inputs/tcp.rb:192)
RUBY.inputworker(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:414)
RUBY.start_input(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:352)
java.lang.Thread.run(java/lang/Thread.java:840)
[2024-04-12T12:41:17,329][INFO ][logstash.inputs.tcp      ][secure-logs][bceab5a973e034cf6dec3df5939ccc6470975a15689d3af71584ed2a23d7e991] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enabled=>false}
[2024-04-12T12:41:17,330][WARN ][io.netty.channel.AbstractChannel][secure-logs][bceab5a973e034cf6dec3df5939ccc6470975a15689d3af71584ed2a23d7e991] Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x3cb06fde]
java.util.concurrent.RejectedExecutionException: event executor terminated
	at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:934) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:351) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:344) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:836) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute0(SingleThreadEventExecutor.java:827) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:817) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:89) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:83) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:272) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:268) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:253) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
	at org.logstash.tcp.InputLoop.run(InputLoop.java:86) [logstash-input-tcp-6.4.1.jar:?]
	at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
	at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:300) [jruby.jar:?]
	at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:164) [jruby.jar:?]
	at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:32) [jruby.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:193) [jruby.jar:?]
	at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:350) [jruby.jar:?]
	at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66) [jruby.jar:?]
	at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:201) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:188) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:220) [jruby.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:466) [jruby.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:244) [jruby.jar:?]
	at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:318) [jruby.jar:?]
	at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66) [jruby.jar:?]
	at org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:82) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:201) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:188) [jruby.jar:?]
	at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:220) [jruby.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:242) [jruby.jar:?]
	at org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:318) [jruby.jar:?]
	at org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66) [jruby.jar:?]
	at org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:116) [jruby.jar:?]
	at org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136) [jruby.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66) [jruby.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58) [jruby.jar:?]
	at org.jruby.runtime.Block.call(Block.java:144) [jruby.jar:?]
	at org.jruby.RubyProc.call(RubyProc.java:352) [jruby.jar:?]
	at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111) [jruby.jar:?]
	at java.lang.Thread.run(Thread.java:840) [?:?]
[2024-04-12T12:41:18,186][INFO ][logstash.javapipeline    ][secure-logs] Pipeline terminated {"pipeline.id"=>"secure-logs"}
[2024-04-12T12:41:18,723][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:"secure-logs"}
[2024-04-12T12:41:18,734][INFO ][logstash.runner          ] Logstash shut down.

Any idea why?

Regards,
Ivo