logstash-output-syslog never writes anything to local syslog on RHEL 8 in podman with podman-compose

I have ELK running in a podman-compose setup, and any logs I push into Logstash make it through to Elasticsearch just fine. The trouble starts when I also try to send to syslog, for the teams that already have a process that pulls all syslog data: nothing is ever written to syslog. The plugin is present when I run Logstash's plugin listing command.

I separated out the configuration below, and I never get anything to syslog at all on my RHEL 8.7 machine. A regular "logger xxxxx" style command from a terminal window writes to syslog fine. The TCP port is not blocked by firewalld, and the syslog .conf file already listens on that port.

I have tried the port as a bare number (514) as well as wrapped in quotes ("514"), and no go. After five hours I am at a loss as to what I am doing wrong, so I am reaching out here for any suggestions, nuances, etc. The "192.168.x.x" is the IP of the host running rsyslog that should capture the logs.

input {
   http {
      port => "5000"
   }
}

output {
   syslog {
      host => "192.168.13.xxx"
      port => 514
      protocol => "tcp"
      ssl_verify => "false"
   }
}

The output below is from the running Logstash container after I stand up my compose YML with all the pieces:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2022-12-20T21:55:43,359][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2022-12-20T21:55:43,385][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.5.3", "jruby.version"=>"jruby (2.6.8) 2022-10-24 537cd1f8bc OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-linux]"}
[2022-12-20T21:55:43,391][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-12-20T21:55:43,462][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2022-12-20T21:55:43,496][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2022-12-20T21:55:43,953][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-12-20T21:55:44,005][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"752f6716-0499-486e-8e18-d4de20639068", :path=>"/usr/share/logstash/data/uuid"}
[2022-12-20T21:55:46,039][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-12-20T21:55:46,951][INFO ][org.reflections.Reflections] Reflections took 150 ms to scan 1 urls, producing 125 keys and 438 values
[2022-12-20T21:55:47,665][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-12-20T21:55:47,889][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/config/logstash.conf"], :thread=>"#<Thread:0x581cbf5e run>"}
[2022-12-20T21:55:48,670][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.77}
[2022-12-20T21:55:48,768][INFO ][logstash.codecs.json     ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-12-20T21:55:48,934][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-12-20T21:55:48,952][INFO ][logstash.inputs.http     ][main][a762021a90dbee39a30ff192cc8a3d1d076e61fe7a6b267a6ccd6945a3f50433] Starting http input listener {:address=>"", :ssl=>"false"}
[2022-12-20T21:55:49,055][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

According to the log, your Logstash is running fine.
It is not visible whether you receive any data at all inside Logstash; use a rubydebug output to check.
According to the documentation, there is no need for "ssl_verify" with protocol => "tcp".
What is on the other side? Are you sure there is a listener on port 514? Also use tcpdump to monitor the traffic between Logstash and the host 192...
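The rubydebug check suggested above can be sketched by adding a stdout output next to the syslog one (the host value is the placeholder from the original config), so that every event reaching the output stage is printed to the container's log:

```
output {
  stdout { codec => rubydebug }
  syslog {
    host => "192.168.13.xxx"
    port => 514
    protocol => "tcp"
  }
}
```

If events show up on stdout (e.g. in `podman logs` for the Logstash container) but never reach rsyslog, the problem is connectivity rather than the pipeline.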

Thank you for the quick reply.

I do see data inside Logstash, and it is pushed to Elasticsearch when I include the Elasticsearch section of the output. So it is getting data; I just cannot get the syslog output to work.

I will remove the ssl_verify, thank you.

Port 514 on my Red Hat Linux 8.7 host is up: I uncommented the lines in the syslog service's .conf file so that it listens on TCP port 514, then restarted the service and set it to auto-start as well.
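For reference, a minimal sketch of the lines involved: on a stock RHEL 8 `/etc/rsyslog.conf` (assuming the default config shipped with rsyslog 8), the commented-out TCP listener entries look like this once uncommented:

```
module(load="imtcp")
input(type="imtcp" port="514")
```

After editing, `systemctl restart rsyslog` applies the change, and `systemctl enable rsyslog` handles the auto-start mentioned above.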

I still need to add the rubydebug output and run the tcpdump. This is all running on the same host machine within a podman-compose setup. I also opened port 514 in firewalld just in case, even though it is all on the same machine.

Where is your destination syslog? In a container, or on the host machine?

There is nothing wrong with your Logstash configuration; if you are not receiving the logs, you may have a connectivity error that you need to troubleshoot.

Try with tcpdump on the destination host:
tcpdump -vv -i eth<yournum> port 514 | grep <ipaddress>

The destination syslog is on the host that the podman-compose setup is running on, a Red Hat 8.7 Linux box. So the Logstash container is trying to send syslog data to the host machine's syslog daemon on that TCP port.

I was hoping the .conf file was right. Now I need to work through the debug steps mentioned earlier to figure out what is going on and why.
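One container-specific wrinkle that may be worth ruling out (an assumption on my part, since the compose network setup is not shown): with rootless podman, traffic from a container to the host's LAN IP does not always route the way it does from the host itself. Recent podman versions resolve the special name `host.containers.internal` to the host machine, so a quick variation to test is:

```
output {
  syslog {
    # host.containers.internal is podman's alias for the host machine;
    # assumes a podman version that provides this name
    host => "host.containers.internal"
    port => 514
    protocol => "tcp"
  }
}
```

If that works where the 192.168.x.x address does not, the issue is the container-to-host route rather than rsyslog itself.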

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.