Unable to get logstash listening on custom port

Hello,

I need your help getting Logstash to listen on a custom port.

Context: I'm trying to install an ELK stack in order to push syslog data to it.
For testing purposes, I installed Elasticsearch and Logstash on the same RHEL server, using the latest version (8.17).
I installed Kibana on another server.

Status: Elasticsearch and Kibana are running as services and connected.
Logstash is also running as a service. My first pipeline works locally.

Problem: I'm unable to get Logstash listening on the port I specified in the configuration file /etc/logstash/conf.d/syslog.conf. I commented out a few lines in the file below, since a netstat command never shows port 5140.
I specified this in my logstash.yml file:
path.config: /etc/logstash/conf.d/*.conf

Here is the content of my syslog.conf file :

input {
  tcp {
#    host => "0.0.0.0"
    port => 5140
    type => syslog
#    codec => cef
  }
  udp {
#    host => "0.0.0.0"
    port => 5140
    type => syslog
#    codec => cef
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic_user"
    password => "elastic_password"
    stdout { codec => rubydebug }
  }
}

Could you please help me find a solution to this problem?

What is the error you get?

If Logstash cannot bind to the specified port, there will be an error in the logs. Please check the Logstash logs and share them.

Thank you for the fast answer.

I'm not sure I'll give you the right input.
The last logs in /var/log/logstash/logstash-plain.log are:

[2025-01-15T15:29:40,260][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2025-01-15T15:29:40,266][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.17.0", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [x86_64-linux]"}
[2025-01-15T15:29:40,268][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-01-15T15:29:40,296][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2025-01-15T15:29:40,296][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2025-01-15T15:29:40,321][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (LoadError) Could not load FFI Provider: (NotImplementedError) FFI not available: null

See https://github.com/jruby/jruby/wiki/Native-Libraries#could-not-load-ffi-provider
org.jruby.exceptions.LoadError: (LoadError) Could not load FFI Provider: (NotImplementedError) FFI not available: null

See https://github.com/jruby/jruby/wiki/Native-Libraries#could-not-load-ffi-provider
        at org.jruby.ext.jruby.JRubyUtilLibrary.load_ext(org/jruby/ext/jruby/JRubyUtilLibrary.java:219) ~[jruby.jar:?]
        at RUBY.<main>(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/ffi-1.17.0-java/lib/ffi.rb:11) ~[?:?]
        at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:1187) ~[jruby.jar:?]
        at RUBY.<module:LibC>(/usr/share/logstash/logstash-core/lib/logstash/util/prctl.rb:19) ~[?:?]
        at RUBY.<main>(/usr/share/logstash/logstash-core/lib/logstash/util/prctl.rb:18) ~[?:?]
        at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:1187) ~[jruby.jar:?]
        at RUBY.set_thread_name(/usr/share/logstash/logstash-core/lib/logstash/util.rb:36) ~[?:?]
        at RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:393) ~[?:?]
        at RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:68) ~[?:?]
        at RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:298) ~[?:?]
        at RUBY.run(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/clamp-1.0.1/lib/clamp/command.rb:133) ~[?:?]
        at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
Caused by: org.jruby.exceptions.NotImplementedError: (NotImplementedError) FFI not available: null

        ... 12 more

However, I suppose that was due to other tests, since systemctl status logstash seems to be fine:

[root@logstash]# systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/usr/lib/systemd/system/logstash.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2025-01-15 18:03:07 CET; 5s ago
 Main PID: 440359 (java)
    Tasks: 19 (limit: 203716)
   Memory: 69.1M
   CGroup: /system.slice/logstash.service
           └─440359 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -XX:+Hea>

Jan 15 18:03:07 cnclelk12 systemd[1]: Started logstash.
Jan 15 18:03:07 cnclelk12 logstash[440359]: Using bundled JDK: /usr/share/logstash/jdk

It doesn't seem fine: systemctl status logstash shows an uptime of 5 s, and from the error you shared your Logstash is not starting. I suspect it is in a crash loop.

Check the rest of the log to see if there are more FATAL lines.

The issue in your log is related to the /tmp path being mounted as noexec. Can you confirm that your /tmp is mounted as noexec?

Just run mount | grep "/tmp" and share the result.


Right.
I just saw that the process was up for 18 s.

[root@logstash]# mount | grep "/tmp"
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel)

I handled it for Elasticsearch using this:

[root@~]# mkdir /usr/share/elasticsearch/tmp
[root@~]# vi /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
Environment=ES_TMPDIR=/usr/share/elasticsearch/tmp

Could I use the same sort of trick for logstash, and how ?

In Logstash you need to edit jvm.options and add a -Djava.io.tmpdir entry pointing to the tmp directory.

You also need to create a directory for Logstash to use as that tmp directory.

Something like:

-Djava.io.tmpdir=/usr/share/logstash/tmp

The path needs to have write permission for the logstash user as well.
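Putting those steps together, a minimal sketch (on the real host the directory would be /usr/share/logstash/tmp owned by the logstash user; the demo path below is only so the commands can be tried unprivileged):

```shell
# Create a private tmp directory for Logstash and emit the matching jvm.options line.
# LS_TMP is a placeholder; on the real host use /usr/share/logstash/tmp as root.
LS_TMP="${LS_TMP:-/tmp/logstash-tmp-demo}"
mkdir -p "$LS_TMP"
chmod 775 "$LS_TMP"
# On the real host, also: chown logstash:logstash "$LS_TMP"
# Then append this line to /etc/logstash/jvm.options and restart the service:
echo "-Djava.io.tmpdir=$LS_TMP"
```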


Thank you very much.
I will try this tomorrow and will let you know the result.

The path needs to have write permission for the logstash user as well.

Could you tell me what the access requirements are for Logstash, and on which directories?
By default I believe /etc/logstash looked like this, and I switched it to group logstash:
drwxr-xr-x. 3 root root 4096 Jan 15 17:49 logstash

You need to create it, so you can create it anywhere you like.

Do not change or create anything inside /etc; create it anywhere else, like /opt or under /usr/lib/logstash.

You just need to make sure that the logstash user has permissions to it.


Hello Leandro,

I made some changes, but there is still a problem. The Java tmp dir alone is probably not enough.

Details:
I checked the configuration file /etc/logstash/jvm.options. The -Djava.io.tmpdir line was commented out: #-Djava.io.tmpdir=$HOME

I created the directory /usr/share/logstash/tmp and gave write access on the whole logstash folder to the logstash group (only root had rw- rights):

mkdir /usr/share/logstash/tmp
chown -R root:logstash /usr/share/logstash/
chmod -R 775 /usr/share/logstash/

(I tried to remove execute access, but it was not working, so I set 755.)

Then I added the line below in /etc/logstash/jvm.options:

# set the I/O temp directory
-Djava.io.tmpdir=/usr/share/logstash/tmp

However, I still have an unstable Logstash service, and nothing listening on port 5140.

[root@cnclelk12 ~]# systemctl status logstash.service
● logstash.service - logstash
   Loaded: loaded (/usr/lib/systemd/system/logstash.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2025-01-16 11:40:19 CET; 29s ago
 Main PID: 637421 (java)
    Tasks: 28 (limit: 203716)
   Memory: 467.5M
   CGroup: /system.slice/logstash.service
           └─637421 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.io.tmpdir=/usr/share/logstash/tmp -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Dlog4j2.isThreadContextMapInheritable=true -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000 -Dlogstash.jackson.stream-read-constraints.max-number-length=10000 -Djruby.regexp.interruptible=true -Djdk.io.File.enableADS=true --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-opens=java.base/java.security=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio.channels=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED -Dio.netty.allocator.maxOrder=11 -cp 
/usr/share/logstash/vendor/jruby/lib/jruby.jar:/usr/share/logstash/logstash-core/lib/jars/checker-qual-3.42.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.17.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-logging-1.3.1.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.26.1.jar:/usr/share/logstash/logstash-core/lib/jars/failureaccess-1.0.2.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.22.0.jar:/usr/share/logstash/logstash-core/lib/jars/guava-33.1.0-jre.jar:/usr/share/logstash/logstash-core/lib/jars/httpclient-4.5.14.jar:/usr/share/logstash/logstash-core/lib/jars/httpcore-4.4.16.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.16.2.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.16.2.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.16.2.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.16.2.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.30.2-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-3.0.2.jar:/usr/share/logstash/logstash-core/lib/jars/jvm-options-parser-8.17.0.jar:/usr/share/logstash/logstash-core/lib/jars/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-1.2-api-2.17.2.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.17.2.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.17.2.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-jcl-2.17.2.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.17.2.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/reflections-0.10.2.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.32.jar:/usr/share/logstash/logstash-core/lib/jars/snakeyaml-2.2.jar org.logstash.Logstash 
--path.settings /etc/logstash

Jan 16 11:40:19 cnclelk12 systemd[1]: Started logstash.
Jan 16 11:40:19 cnclelk12 logstash[637421]: Using bundled JDK: /usr/share/logstash/jdk

I have no new logs since yesterday in /var/log/logstash/logstash-plain.log.

Could you help again?

What else do you have besides this? You should have more logs; without them it is not possible to know what the issue is.

Also, since this is a systemd service, check /var/log/messages or /var/log/syslog for any issues with the service starting.


Here are the last logs:

  • No /var/log/syslog file
  • /var/log/warn: (with debug mode activated)
Jan 16 11:51:35 server systemd[638767]: logstash.service: Failed at step EXEC spawning /usr/share/logstash/bin/logstash: Permission denied
Jan 16 11:51:35 server systemd[1]: logstash.service: Failed with result 'exit-code'.

These rights are applied:

-rwxrwxr-x. 1 root logstash 2149 Dec  5 01:20 /usr/share/logstash/bin/logstash
  • /var/log/messages:
Jan 16 13:47:27 server logstash[657837]: #011at org.logstash.Logstash.run(Logstash.java:183)
Jan 16 13:47:27 server logstash[657837]: #011at org.logstash.Logstash.main(Logstash.java:93)
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,346 main ERROR Null object returned for RollingFile in Appenders.
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,347 main ERROR Null object returned for RollingFile in Appenders.
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,347 main ERROR Unable to locate appender "deprecation_plain_rolling" for logger config "org.logstash.deprecation"
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,347 main ERROR Unable to locate appender "plain_rolling" for logger config "root"
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,347 main ERROR Unable to locate appender "plain_rolling_slowlog" for logger config "slowlog"
Jan 16 13:47:27 server logstash[657837]: 2025-01-16 13:47:27,348 main ERROR Unable to locate appender "deprecation_plain_rolling" for logger config "deprecation"
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,359][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,363][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.17.0", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [x86_64-linux]"}
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,364][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.io.tmpdir=/usr/share/logstash/tmp, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,391][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,391][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,508][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Jan 16 13:47:27 server logstash[657837]: #011at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:733)
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.ir.Compiler$1.load(Compiler.java:114)
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.Ruby.runScript(Ruby.java:1245)
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.Ruby.runNormally(Ruby.java:1157)
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.Ruby.runFromMain(Ruby.java:983)
Jan 16 13:47:27 server logstash[657837]: #011at org.logstash.Logstash.run(Logstash.java:183)
Jan 16 13:47:27 server logstash[657837]: #011at org.logstash.Logstash.main(Logstash.java:93)
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,832][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,952][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"=>\" at line 19, column 6 (byte 298) after input {\n  tcp {\n    host => \"0.0.0.0\"\n    port => 5140\n    type => syslog\n    codec => cef\n  }\n  udp {\n    host => \"0.0.0.0\"\n    port => 5140\n    type => syslog\n    codec => cef\n}\n\nfilter {\n  if ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:294:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:227:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:420:in `block in converge_state'"]}
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,963][INFO ][logstash.runner          ] Logstash shut down.
Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,966][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
Jan 16 13:47:27 server logstash[657837]: org.jruby.exceptions.SystemExit: (SystemExit) exit
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:924) ~[jruby.jar:?]
Jan 16 13:47:27 server logstash[657837]: #011at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:883) ~[jruby.jar:?]
Jan 16 13:47:27 server logstash[657837]: #011at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]
Jan 16 13:47:27 server systemd[1]: logstash.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 13:47:27 server systemd[1]: logstash.service: Failed with result 'exit-code'.
Jan 16 13:47:28 server systemd[1]: logstash.service: Service RestartSec=100ms expired, scheduling restart.
Jan 16 13:47:28 server systemd[1]: logstash.service: Scheduled restart job, restart counter is at 12.
Jan 16 13:47:28 server systemd[1]: Stopped logstash.
Jan 16 13:47:28 server systemd[1]: Started logstash.
Jan 16 13:47:28 server logstash[657931]: Using bundled JDK: /usr/share/logstash/jdk

How are you starting Logstash? This line suggests that you are not using systemctl.

[2025-01-16T13:47:27,508][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

Also, you have an issue in your configuration file:

Jan 16 13:47:27 server logstash[657837]: [2025-01-16T13:47:27,952][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "=>" at line 19, column 6 (byte 298) after input {\n tcp {\n host => "0.0.0.0"\n port => 5140\n type => syslog\n codec => cef\n }\n udp {\n host => "0.0.0.0"\n port => 5140\n type => syslog\n codec => cef\n}\n\nfilter {\n if ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:294:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:227:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:420:in `block in converge_state'"]}

It seems that you are not closing the input block; a closing bracket is missing. Validate the configuration as well.
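For reference, a closed-up version of that input section (ports and settings taken from your file) would look like this:

```
input {
  tcp {
    host => "0.0.0.0"
    port => 5140
    type => syslog
  }
  udp {
    host => "0.0.0.0"
    port => 5140
    type => syslog
  }
}
```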


Thank you Leandro,

How are you starting Logstash? This line suggests that you are not using systemctl.

I confirm I'm using systemctl.
I stopped Logstash using systemctl and checked the running processes with ps aux => no Logstash process is running.

I checked another post of yours about this (300852), and I had exactly the same problem, since my logstash.yml contained:
path.config: /etc/logstash/conf.d , which is why the pipelines.yml was ignored...
=> no more errors of this type now. Thank you!
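For reference, with path.config removed from logstash.yml, the pipeline can be declared in /etc/logstash/pipelines.yml instead (this matches the packaged default, as far as I can tell):

```
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
```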

It seems that you are not closing the input block; a closing bracket is missing. Validate the configuration as well.

Correct, and sorry about that. I changed the settings to add the missing bracket.
The new file is:

input {
  tcp {
    host => "0.0.0.0"
    port => 5140
    type => syslog
#    codec => cef
  }
  udp {
    host => "0.0.0.0"
    port => 5140
    type => syslog
#    codec => cef
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "xxxxxxxx"
  }
  stdout { codec => rubydebug }
}

I also found:

  • A rights problem with my path.logs directory /data/logstash, which I corrected.
  • A cef codec problem in my syslog.conf file, so I commented out that line to avoid it.
    => On this topic, I saw several syslog input configurations using tcp and/or udp inputs, and others using a syslog input.
    Do you have any advice about the best configuration model?

But now Elasticsearch is no longer listening on port 9200, and Logstash is no longer listening on port 5140 either.

[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

I will check the Elasticsearch config; I believe I changed some access rights somewhere.

If you have any idea about this, please let me know.

I'll be back :wink:

Hello,

Morning update: even though I haven't tried sending logs to Logstash yet, many things seem better now...
...but Kibana is broken on its dedicated server :frowning:

  • No rights problem found
  • I worked on jvm.options in order to prefer IPv4
[root@server ~]# vi /etc/elasticsearch/jvm.options
	#Prefer ipv4
	-Djava.net.preferIPv4Stack=true

[root@server ~]# vi /etc/logstash/jvm.options
	#Prefer ipv4
	-Djava.net.preferIPv4Stack=true
  • local firewalld deactivated for test purposes => the ports now show up
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9200            0.0.0.0:*               LISTEN      23319/java
tcp        0      0 0.0.0.0:5044            0.0.0.0:*               LISTEN      23551/java
tcp        0      0 0.0.0.0:5140            0.0.0.0:*               LISTEN      23551/java
tcp        0      0 0.0.0.0:9300            0.0.0.0:*               LISTEN      23319/java
tcp        0      0 127.0.0.1:9600          0.0.0.0:*               LISTEN      23551/java
udp        0      0 0.0.0.0:5140            0.0.0.0:*                           23551/java

The problem is now with Kibana:

● kibana.service - Kibana
   Loaded: loaded (/usr/lib/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2025-01-16 18:15:17 CET; 15h ago
     Docs: https://www.elastic.co
 Main PID: 3671 (node)
    Tasks: 11 (limit: 203716)
   Memory: 467.8M
   CGroup: /system.slice/kibana.service
           └─3671 /usr/share/kibana/bin/../node/glibc-217/bin/node /usr/share/kibana/bin/../src/cli/dist

Jan 17 10:09:17 cnclelk13 kibana[3671]: [2025-01-17T10:09:17.724+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:60556, Remote: 10.73.25.143:9200
Jan 17 10:09:26 cnclelk13 kibana[3671]: [2025-01-17T10:09:26.474+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:36108, Remote: 10.73.25.143:9200
Jan 17 10:09:33 cnclelk13 kibana[3671]: [2025-01-17T10:09:33.611+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:39084, Remote: 10.73.25.143:9200
Jan 17 10:09:41 cnclelk13 kibana[3671]: [2025-01-17T10:09:41.450+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:39148, Remote: 10.73.25.143:9200
Jan 17 10:09:48 cnclelk13 kibana[3671]: [2025-01-17T10:09:48.846+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:50976, Remote: 10.73.25.143:9200
Jan 17 10:09:56 cnclelk13 kibana[3671]: [2025-01-17T10:09:56.875+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:37830, Remote: 10.73.25.143:9200
Jan 17 10:10:03 cnclelk13 kibana[3671]: [2025-01-17T10:10:03.877+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:45336, Remote: 10.73.25.143:9200
Jan 17 10:10:11 cnclelk13 kibana[3671]: [2025-01-17T10:10:11.547+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:39570, Remote: 10.73.25.143:9200
Jan 17 10:10:20 cnclelk13 kibana[3671]: [2025-01-17T10:10:20.637+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:39664, Remote: 10.73.25.143:9200
Jan 17 10:10:29 cnclelk13 kibana[3671]: [2025-01-17T10:10:29.296+01:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.73.25.144:40018, Remote: 10.73.25.143:9200

cat /var/log/kibana/kibana.log displays:

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2025-01-16T18:15:31.056+01:00","message":"Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell","log":{"level":"INFO","logger":"plugins.screenshotting.chromium"},"process":{"pid":3671,"uptime":13.31250234},"trace":{"id":"1337af35f45fc6f0bd241488b14244fb"},"transaction":{"id":"3a3e2045a1b41654"}}
version information from Elasticsearch nodes. socket hang up - Local: <kibana_ipv4>:39084, Remote: <elasticsearch_ipv4>:9200","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":3671,"uptime":57255.867702774},"trace":{"id":"1337af35f45fc6f0bd241488b14244fb"},"transaction":{"id":"3a3e2045a1b41654"}}

I suppose the relationship between my Elasticsearch node and my Kibana node is broken.
Do you have any advice? What could have broken this relationship?
Should I re-run the enrollment process?

Afternoon update, hoping I can find help here.
The stack is up, and Kibana is connected to Elasticsearch.

The point is that I still have a connection error from Logstash to Elasticsearch. Both are running in the same VM.

[2025-01-17T17:13:22,377][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[2025-01-17T17:13:22,378][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

Any idea, please?

You need to check if your Elasticsearch is running without any issues.

The log is pretty clear: it cannot connect to your Elasticsearch, so it may not be running.

Check the Elasticsearch logs.

Your log line [quote="Plauda, post:14, topic:373239"]
Local: 10.73.25.144:40018, Remote: 10.73.25.143:9200
[/quote]

seems to say logstash is on 10.73.25.144 and elasticsearch is on 10.73.25.143. localhost should be 127.0.0.1. Something strange there....
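
One more thought: in 8.x, security is enabled by default and Elasticsearch serves HTTPS, so pointing Kibana (or Logstash) at it with a plain http:// URL can fail in odd ways, including "socket hang up". A kibana.yml sketch for the TLS case (the host and the CA path are illustrative assumptions):

```
# /etc/kibana/kibana.yml
elasticsearch.hosts: ['https://10.73.25.143:9200']
# CA generated by Elasticsearch's auto-configuration; path is an example
elasticsearch.ssl.certificateAuthorities: ['/etc/kibana/certs/http_ca.crt']
```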

What interfaces does this host have?

Also, the Elasticsearch output section includes "stdout"; I don't see that as an option for that plugin. Is that causing confusion?
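
To be explicit, stdout is its own output plugin and belongs alongside the elasticsearch block, not inside it. A corrected sketch of the output section from the first post (credentials are placeholders):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic_user"
    password => "elastic_password"
  }
  stdout { codec => rubydebug }
}
```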

Thanks for your answer.
The problem between Kibana and Elasticsearch has been corrected; you may have spotted the root cause, but I simply re-enrolled the Kibana node to make it work.
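
For anyone hitting the same thing, the re-enrollment amounts to something like this (default RPM install paths; the token value is elided):

```
# On the Elasticsearch node:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

# On the Kibana node, using the token printed above:
/usr/share/kibana/bin/kibana-setup --enrollment-token <token>
```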

Thank you very much for your support around this topic.

The last problem was due to xpack.security. I disabled it and everything seems fine now. Re-enabling it properly will be a later step.

As a summary for others, I had to deal with:

  • /tmp/ settings, because the default temp directory was read-only and not writable for Java
    => For elasticsearch: adapt Environment=ES_TMPDIR in /etc/systemd/system/elasticsearch.service.d/override.conf to point to a directory where elasticsearch has RW access
    => For logstash: adapt the I/O temp directory -Djava.io.tmpdir= in /etc/logstash/jvm.options
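
Concretely, the override can look like this (the tmp paths below are examples; any directory the service user can write to works):

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
Environment=ES_TMPDIR=/usr/share/elasticsearch/tmp
```

In /etc/logstash/jvm.options the equivalent line is -Djava.io.tmpdir=/usr/share/logstash/tmp. Run systemctl daemon-reload before restarting elasticsearch so the override takes effect.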

  • Modify access rights for the logstash and elasticsearch users on the tmp and log directories, since they had been created as root:root

chown -R root:elasticsearch <path>
chmod 775 <path>
chown -R root:logstash <other_path>
chmod 775 <other_path>
  • Prefer ipv4 in jvm.options files /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options
    -Djava.net.preferIPv4Stack=true

  • Configure /etc/logstash/conf.d/syslog.conf.
    => This configuration came from the F5 forums and seemed to match my needs for BIG-IP ASM logs.
    => The input and output sections below are working. However, grok is still a problem; that will be another topic.
    => geoip, as I originally found it, was missing the target line. That broke the service, so I added the target option found in another thread.

input {
  syslog {
    port => 5140
    codec => plain {
#      charset => "ISO-8859-1"
    }
  }
}
filter {
  grok {
    match => {
      "message" => [
        ",attack_type=\"%{DATA:attack_type}\"",
        ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",bot_anomalies=\"%{DATA:bot_anomalies}\"",
        ",bot_category=\"%{DATA:bot_category}\"",
        ",bot_signature_name=\"%{DATA:bot_signature_name}\"",
        ",client_application=\"%{DATA:client_application}\"",
        ",client_application_version=\"%{DATA:client_application_version}\"",
        ",client_class=\"%{DATA:client_class}\"",
        ",date_time=\"%{DATA:date_time}\"",
        ",dest_port=\"%{DATA:dest_port}\"",
        ",enforced_bot_anomalies=\"%{DATA:enforced_bot_anomalies}\"",
        ",grpc_method=\"%{DATA:grpc_method}\"",
        ",grpc_service=\"%{DATA:grpc_service}\"",
        ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"",
        ",method=\"%{DATA:method}\"",
        ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"",
        ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"",
        ",request_status=\"%{DATA:request_status}\"",
        ",request=\"%{DATA:request}\"",
        ",request_body_base64=\"%{DATA:request_body_base64}\"",
        ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"",
        ",sig_cves=\"%{DATA:sig_cves}\"",
        ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"",
        ",sig_set_names=\"%{DATA:sig_set_names}\"",
        ",src_port=\"%{DATA:src_port}\"",
        ",staged_sig_cves=\"%{DATA:staged_sig_cves}\"",
        ",staged_sig_ids=\"%{DATA:staged_sig_ids}\"",
        ",staged_sig_names=\"%{DATA:staged_sig_names}\"",
        ",staged_threat_campaign_names=\"%{DATA:staged_threat_campaign_names}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"",
        ",support_id=\"%{DATA:support_id}\"",
        ",threat_campaign_names=\"%{DATA:threat_campaign_names}\"",
        ",unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"",
        ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details_xml}\"",
        ",violation_rating=\"%{DATA:violation_rating}\"",
        ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\""
      ]
    }
    break_on_match => false
  }
  if [violation_details_xml] != "N/A" {
    xml {
      source => "violation_details_xml"
      target => "violation_details"
    }
  }
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_cves" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_set_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "sub_violations" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "violations" => "," }
    remove_field => [
      "[violation_details][violation_masks]",
      "violation_details_xml",
      "message"
    ]
  }
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}"}}
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}"}}
  }
  geoip {
    source => "source_host"
    target => "source_geo"
  }
  ruby {
      code => "
          require 'base64'

          data = event.get('[violation_details]')

          # true only when the value survives a strict base64 round trip
          def check64(value)
            value.is_a?(String) && Base64.strict_encode64(Base64.decode64(value)) == value
          end

          def iterate(key, i, event)
            if i.is_a?(Hash)
              i.each do |k, v|
                if v.is_a?(Hash) || v.is_a?(Array)
                  newkey = key + '[' + k + ']'
                  iterate(newkey, v, event)
                end
              end
            elsif i.is_a?(Array)
              i.each do |v|
                iterate(key, v, event)
              end
            else
              if check64(i)
                event.set(key, Base64.decode64(i))
              end
            end
          end

          iterate('[violation_details_b64decoded]', data, event)
      "
    }
}
output {
  elasticsearch {
    hosts => ["http://server_ip:9200"]
    user => "some_user"
    password => "some_password"
    index => "logs-waf-dcb"
  }
}
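
To sanity-check the base64 handling outside Logstash, the filter's recursion can be exercised as plain Ruby. This is a sketch: the Logstash event is replaced by a plain hash, and the 'request' field name is just an example.

```ruby
require 'base64'

# True only when the string survives a strict base64 round trip.
def check64(value)
  value.is_a?(String) && Base64.strict_encode64(Base64.decode64(value)) == value
end

# Walk nested hashes/arrays the way the filter does, collecting decoded leaves.
def iterate(key, node, out)
  if node.is_a?(Hash)
    node.each do |k, v|
      iterate(key + '[' + k + ']', v, out) if v.is_a?(Hash) || v.is_a?(Array)
    end
  elsif node.is_a?(Array)
    node.each { |v| iterate(key, v, out) }
  elsif check64(node)
    out[key] = Base64.decode64(node)
  end
end

decoded = {}
encoded = Base64.strict_encode64('GET /index.html')
sample  = { 'request' => [encoded] }   # stand-in for [violation_details]
iterate('[violation_details_b64decoded]', sample, decoded)
p decoded
```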
  • Disable xpack.security in /etc/elasticsearch/elasticsearch.yml. I consider this a temporary setting.
xpack.security.enabled: false

xpack.security.http.ssl:
  enabled: false

xpack.security.transport.ssl:
  enabled: false
  • Change the URL Kibana uses to access Elasticsearch in /etc/kibana/kibana.yml to http instead of https. I consider this a temporary setting.
    elasticsearch.hosts: ['http://elasticsearchserver:9200']

Hope it helps.

Feel free to provide any complementary information.