Could not start Logstash 5.0 as a service on CentOS 7

I am running CentOS 7. I followed the installation instructions and used yum to install Logstash.

I put my conf file in /etc/logstash/conf.d. Launching Logstash manually from bash worked.
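For reference, a manual foreground launch on a 5.x package install typically looks like this (paths are the default RPM layout; the pipeline filename is a placeholder, not from the original post):

```shell
# Run Logstash in the foreground with the same settings directory the
# service would use; replace the -f argument with your own pipeline file.
sudo /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/your-pipeline.conf
```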

Then I tried to start Logstash as a service.

Here is what I did:
systemctl daemon-reload
systemctl enable logstash.service
systemctl start logstash.service

But these commands returned nothing, and nothing was written to the log file either.
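When systemctl returns silently, the unit's status and the journal usually say more than the Logstash log does. A few things worth checking (standard systemd tooling, nothing Logstash-specific):

```shell
# Did the unit actually start, and what was its recent output?
systemctl status logstash.service
journalctl -u logstash.service --no-pager -n 50

# Confirm which unit file systemd picked up after daemon-reload.
systemctl cat logstash.service
```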

Does anyone have an idea how to make it work?


I'm experiencing a similar problem in Ubuntu 16.04. Logstash works just fine when I run it manually from bin/logstash and specify the config file, but doesn't fully launch when I run it as a service.

#My configuration file, /etc/logstash/conf.d/central.conf:

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => [""] }
  stdout { codec => rubydebug }
}
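A quick sanity check before chasing the service itself is to have Logstash validate the pipeline the same way the service would load it (install path and `--path.settings` assume the default 5.x package layout):

```shell
# -t (--config.test_and_exit) only validates the config and then exits;
# on success it logs "Config Validation Result: OK. Exiting Logstash".
sudo /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/central.conf -t
```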

#Here are the folder permissions:

~# ls -lsh /etc/logstash/
total 28K
4.0K drwxrwxr-x 2 root root 4.0K Nov 29 22:58 conf.d
4.0K -rw-rw-r-- 1 root root 1.7K Nov 24 10:22 jvm.options
4.0K -rw-rw-r-- 1 root root 1.4K Nov 24 10:22
4.0K -rw-rw-r-- 1 logstash logstash 710 Nov 29 22:23 logstash.yml
4.0K -rw-rw-r-- 1 root root 1.7K Nov 24 10:22 startup.options

#And the configuration file permissions:

~# ls -lsh /etc/logstash/conf.d/
total 4.0K
4.0K -rw-rw-r-- 1 logstash logstash 1.2K Nov 29 21:31 central.conf

#Debug log output from /var/log/logstash/logstash-plain.log when starting logstash as a service:

==> /var/log/logstash/logstash-plain.log <==
[2016-11-30T16:01:00,717][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@resurrect_delay = 5
[2016-11-30T16:01:00,717][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@validate_after_inactivity = 10000
[2016-11-30T16:01:00,724][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@id = "rubydebug_5b0911ca-81d2-45ac-b7e7-9d532c4cd66e"
[2016-11-30T16:01:00,724][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@enable_metric = true
[2016-11-30T16:01:00,724][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@metadata = false
[2016-11-30T16:01:00,761][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::RubyDebug id=>"rubydebug_5b0911ca-81d2-45ac-b7e7-9d532c4cd66e", enable_metric=>true, metadata=>false>
[2016-11-30T16:01:00,761][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@id = "52afc8ad01e565bac68d42f46ca7fefa6a9fb53a-7"
[2016-11-30T16:01:00,761][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@enable_metric = true
[2016-11-30T16:01:00,761][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@workers = 1
[2016-11-30T16:01:00,762][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Also just to note, I have UFW and AppArmor disabled (didn't seem to make a difference either way).
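The last line of that log is telling: "Using config.test_and_exit mode" means that run of Logstash was started with `-t`/`--config.test_and_exit`, so it validated the config and exited on purpose rather than crashing. If the service itself logs that, something in its startup chain is passing the test flag. One way to hunt for it (paths assume the default package layout):

```shell
# Look for a test flag in the places the service builds its command
# line from: the settings file, startup.options, and the unit file.
grep -RHn 'test_and_exit' /etc/logstash/
systemctl cat logstash.service | grep -n 'ExecStart'
```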

I'm having a similar issue. When I run Logstash as a binary with command-line flags it works perfectly, but when I run it as a service nothing works. I am running Elastic Stack 5.1.1 on Ubuntu 16.04. Has anyone solved this issue yet? I tried changing the permissions on all the files and directories, and it did not help.

I resolved my initial issue after a few hours. From what I remember, the service failed to start because Logstash could not locate Java when running under the service account, and I had to do additional setup to get it to find Java.
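If Java resolution under the service account is the culprit, one approach on the 5.x packages is to pin the JVM in /etc/logstash/startup.options (the file visible in the directory listing above) and regenerate the unit. The Java path below is an example, not necessarily yours:

```shell
# 1. In /etc/logstash/startup.options, set the JVM explicitly, e.g.:
#      JAVACMD=/usr/bin/java
# 2. Regenerate the service definition from startup.options and restart.
sudo /usr/share/logstash/bin/system-install
sudo systemctl daemon-reload
sudo systemctl restart logstash.service
```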

Update: I found what was causing my issue, but don't quite know the root cause at the moment.

I'm using SaltStack to deploy Logstash, which can also manage the service using its systemd service state module. Here's an example from my Salt state:

logstash-service:
  service.running:
    - name: logstash
    - enable: True

Whenever I deploy Logstash using this method the service wouldn't successfully run in the background. But if I changed my Salt state to start the service using a cmd state module (essentially just executes a bash command when the script runs), Logstash starts and runs just fine. Here's what that looks like in Salt:

start-logstash:
  cmd.run:
    - name: systemctl start logstash.service

enable-logstash:
  cmd.run:
    - name: systemctl enable logstash.service

Both the cmd and service states can accomplish the same thing, but using the service state module is much cleaner and preferred.

To note, I'm using SaltStack along with the service state module to deploy and manage both Kibana and Elasticsearch as well (along with hundreds of other services in my environment) and have never seen this problem before. In fact, I didn't see this problem until version 5.1.X (it worked fine in versions 4.X.X and 5.0.X). So I'm not sure if it's an issue with Salt specifically or the Logstash systemd unit files.

So if anyone is having this problem using SaltStack, report back your findings (such as your version of Logstash) and maybe we can narrow this down some more.

Edit: I should note that after attempting to run the Logstash service using the service state module, the unit files would "break" and the service wouldn't run in the background until I did a complete reinstall.