Logstash process keeps dying

Hi, I have a couple of test instances of ELK running and I am running into a weird issue with Logstash.

The process keeps dying and then refuses to restart, with this error:

ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 28, column 1 (byte 579) after # Settings file in YAML\n#\n# Settings can be specified either in hierarchical form, e.g.:\n#\n# pipeline:\n# batch:\n# size: 125\n# delay: 5\n#\n# Or as flat keys:\n#\n# pipeline.batch.size: 125\n# pipeline.batch.delay: 5\n#\n# ------------ Node identity ------------\n#\n# Use a descriptive name for the node:\n#\n# node.name: test\n#\n# If omitted the node name will default to the machine's host name\n#\n# ------------ Data path ------------------\n#\n# Which directory should be used by logstash and its plugins\n# for any persistent needs. Defaults to LOGSTASH_HOME/data\n#\n"}

The logstash.yml hasn't been touched since it was installed. Logstash runs successfully to begin with and then craps out some time later; this has happened regularly on two separate boxes. I have checked for spurious characters etc. and the files are clean.

So far the only thing that has got them going again is a complete uninstall, a purge of all files, and a reinstall. Obviously that's a non-starter if we are going to put any trust in this system.

Any ideas?

And one more thing: it keeps creating a folder called '${sys:ls.logs}' inside /etc/logstash, which contains copies of the conf files.

It looks like you have a logstash.yml stored along with the pipeline configuration files (/etc/logstash/conf.d or wherever you keep those files).
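
For reference, a default package install (deb/rpm; paths assumed, adjust if yours differ) is expected to look roughly like this, with the settings file and the pipeline configs kept apart:

/etc/logstash/logstash.yml        # settings file (YAML) - read via --path.settings
/etc/logstash/log4j2.properties   # logging configuration
/etc/logstash/conf.d/*.conf       # pipeline configs (input/filter/output) - read via -f / path.config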

Hi. Thanks for the reply.

The logstash.yml file is in the /etc/logstash folder. The pipeline configuration files are separate in the conf.d folder.

If I glob the two together and feed that to logstash then it works fine. For some reason it just can't read the yml file after a while, and I can't figure out why. If I force-start it then it runs, but I'm not getting anything through to Kibana or ES.

If I glob the two together and feed that to logstash then it works fine.

What, exactly, do you mean by this?

For some reason it just can't read the yml file after a while.

The pipeline code shouldn't be reading logstash.yml at all.
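
To illustrate the split (a sketch assuming the default package paths): the settings directory is given with --path.settings, and -f / path.config should only ever point at the pipeline files:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/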

If I concatenate the two files in the conf.d folder into one and pass that on the command line, e.g.

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/glob.conf

then logstash starts without a problem. If I run

/usr/share/logstash/bin/logstash -f /etc/logstash/logstash.yml

then it fails with the first error.

If I use service logstash start then it fails to start (even though it says it has started), but if I use service logstash force-start it does start, yet it doesn't actually do anything.

I've also just tried a complete fresh reinstall but it's throwing up the original error right from the start now.

Also I have a whole lot of entries from dmesg stating that logstash is being terminated with status code(s) 1,127,143...

Is this a permissions issue do you think?
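
It could be. A quick sanity check (a sketch; this assumes the package created a logstash user and the box uses systemd) is to compare the ownership of the config, data, and log directories against the user the service runs as, and to look at the service's own output:

ls -ld /etc/logstash /etc/logstash/conf.d /var/lib/logstash /var/log/logstash
ps -o user= -C java                              # which user the running JVM belongs to
journalctl -u logstash --no-pager | tail -n 50   # recent service output, if systemd is in use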

Hi, can you please help me with this? Thank you


@prydeep, please don't spam other threads with references to your own completely unrelated question.

If I chmod logstash.yml to 444 then it seems to work for a bit, but now the same thing is happening with the files in conf.d, i.e. suddenly it can't parse them.

I've got around the issue by setting up a cron job that reinstalls logstash every hour. Obviously we're not going to be able to use this as a solution in production.

Run strace on logstash and paste the output once it crashes.
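
Something along these lines should capture what the process does with the config files (an illustrative invocation; substitute the real PID):

# attach to the running Logstash JVM and log file-related syscalls with timestamps
strace -f -tt -e trace=open,openat,read,stat -o /tmp/logstash-strace.log -p <logstash_pid>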

I've just reinstalled it and it can't read the logstash.yml file (the default install version):

10:42:49.453 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 28, column 1 (byte 579) after # Settings file in YAML\n#\n# Settings can be specified either in hierarchical form, e.g.:\n#\n# pipeline:\n# batch:\n# size: 125\n# delay: 5\n#\n# Or as flat keys:\n#\n# pipeline.batch.size: 125\n# pipeline.batch.delay: 5\n#\n# ------------ Node identity ------------\n#\n# Use a descriptive name for the node:\n#\n# node.name: test\n#\n# If omitted the node name will default to the machine's host name\n#\n# ------------ Data path ------------------\n#\n# Which directory should be used by logstash and its plugins\n# for any persistent needs. Defaults to LOGSTASH_HOME/data\n#\n"}

The strace output doesn't show anything in terms of memory issues etc. It's just that, for some reason, logstash develops an issue with the formatting of the YML.

If I cut the logstash.yml down to:

path.config: /etc/logstash/conf.d
path.data: /var/lib/logstash
path.logs: /var/log/logstash

I now get:

2017-08-15 11:30:34,131 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
11:30:34.669 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}

as the error.

This is running it with:

/usr/share/logstash/bin/logstash -f /etc/logstash/logstash.yml
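
That error is consistent with what -f does: the file is parsed as a pipeline definition, not as a settings file, so the parser expects input/filter/output blocks and chokes on the YAML comments in logstash.yml. For comparison, a minimal (purely illustrative) pipeline file of the kind -f expects looks like this:

input {
  stdin { }
}
filter {
  # filters are optional
}
output {
  stdout { codec => rubydebug }
}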

Hi!
I have the same issue... I think this is not related to logstash.yml or any file in conf.d; it has something to do with log4j2.

When I test my config it says:

$ /usr/share/logstash/bin/logstash -f /etc/logstash --config.test_and_exit
2017-09-08 17:10:42,115 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
17:10:43.255 [LogStash::Runner] FATAL logstash.runner - The given configuration is invalid. Reason: Expected one of #, input, filter, output at line 6, column 1 (byte 132) after ## JVM configuration

By adding the parameter --path.settings I could resolve the warning about log4j2:

$ /usr/share/logstash/bin/logstash -f /etc/logstash --path.settings /etc/logstash --config.test_and_exit
2017-09-08 17:31:14,808 main ERROR Unable to locate appender "${sys:ls.log.format}_rolling" for logger config "root"
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

But the strange folder in /etc/logstash still doesn't disappear:

[root@es logstash]# ll /etc/logstash
total 20
drwxrwxr-x. 2 root root   95 Sep  8 17:17 conf.d
-rw-r--r--. 1 root root 1738 Aug 14 15:12 jvm.options
-rw-r--r--. 1 root root 1334 Sep  8 16:02 log4j2.properties
-rw-r--r--. 1 root root 5660 Sep  1 14:49 logstash.yml
-rw-r--r--. 1 root root 1659 Aug 14 15:12 startup.options
drwxr-xr-x. 2 root root   46 Sep  8 17:34 ${sys:ls.logs}
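
That '${sys:ls.logs}' folder is usually a side effect of starting Logstash without --path.settings: the bundled log4j2.properties refers to ${sys:ls.logs} for its log paths, and if the ls.logs system property is never set, the literal string ends up being used as a directory name. Once the process is started with the correct settings path it should stop reappearing, and the stray folder can simply be deleted (note the single quotes so the shell doesn't try to expand it):

rm -rf '/etc/logstash/${sys:ls.logs}'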

EDIT:
I followed https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html to test a few things when I ran into these issues. Running Logstash as a service with systemctl worked just fine, so I experimented from the command line.
Here is what I changed:

  • You cannot pass the /etc/logstash/logstash.yml settings file to -f. If you want to load multiple pipeline config files, point -f at the directory (or use a wildcard). I wanted to run all of them, so I did not add anything after the trailing "/" (-f /etc/logstash/conf.d/).


[root@es bin]# pwd
/usr/share/logstash/bin
[root@es bin]# ./logstash -f /etc/logstash/conf.d/ --config.reload.automatic
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties

Yes, there are still errors but I can now test my pipelines. Logstash won't die anymore.
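
The remaining warnings should also go away if the settings directory is passed explicitly, as shown earlier in the thread; for example (same assumptions about the default paths):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.reload.automatic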

I hope this may help someone!

Also fixed.
