Logstash startup issues


#1

Hi All,

I just set up a 3 node Elasticsearch and Logstash environment using the repos on RHEL 7:

logstash-5.5.2-1.noarch
elasticsearch-5.5.2-1.noarch

I initially installed the X-Pack for ES and Kibana but then removed the plugins.

My problem is with starting Logstash on the 3 nodes. On two of the three nodes (nodes 1 and 3), I can start Logstash only via the command line with:

./logstash -f /etc/logstash/conf.d -l /var/log/logstash --log.level=debug --path.settings=/etc/logstash

Everything starts properly, logstash-plain.log is created, and the server begins to listen on port 5044.

However, I cannot start Logstash properly via systemd (systemctl start logstash). When I attempt to start it via systemd, the 'logstash' process appears to start:

logstash 40191 1 99 22:24 ? 00:00:01 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

However, no log file is created in /var/log/logstash (as directed in logstash.yml: path.logs: /var/log/logstash), and the server does not begin listening on port 5044.

I have seen a couple of posts describing a similar issue, but no real resolution other than reinstalling the ELK stack.

On my other node (node 2), I cannot start Logstash either from the command line (same syntax as above) or via systemd. It is having a problem finding java:

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME

I have also read several posts specific to this error and have confirmed JAVACMD in /etc/default/logstash:

JAVACMD="/usr/bin/java"
LS_HOME="/usr/share/logstash"
LS_SETTINGS_DIR="/etc/logstash"
LS_PIDFILE="/var/run/logstash.pid"
LS_USER="logstash"
LS_GROUP="logstash"
LS_GC_LOG_FILE="/var/log/logstash/gc.log"
LS_OPEN_FILES="16384"
LS_NICE="19"
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"

Java is indeed located at /usr/bin/java:

/usr/bin/java -version
openjdk version "1.8.0_141"
OpenJDK Runtime Environment (build 1.8.0_141-b16)
OpenJDK 64-Bit Server VM (build 25.141-b16, mixed mode)

I wasn't sure whether I should separate these two issues; I can do so if that's preferred.

Any guidance is greatly appreciated.

TIA,

HB


(Aaron Mildenstein) #2

The two most common reasons for Logstash not starting from systemctl are probably permissions, and improper indentation in the logstash.yml file. Because it is YAML, even a single leading space before a key can cause YAML to interpret the key as a sub-value instead of a root-level one. So first, ensure the logstash.yml file has proper indentation (or non-indentation, as the case may be), and then double-check that the logstash user has read/write permissions on the data path, and at least read/execute permissions on the other paths it needs.
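For example, an accidentally indented top-level key can be flagged with a quick grep. This is only a rough sketch — it catches leading whitespace, not every YAML pitfall:

```shell
# Write a deliberately mis-indented logstash.yml to a temp file, then flag
# any key that does not start in column 1. (Illustrative only.)
tmp=$(mktemp)
printf 'path.data: /data/logstash\n  path.logs: /var/log/logstash\n' > "$tmp"
grep -n '^[[:space:]][[:space:]]*[a-z]' "$tmp"   # prints the offending line with its number
rm -f "$tmp"
```

Here `path.logs` would silently become a child of `path.data` instead of a root-level setting.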

With regards to the node which seemingly cannot find java: /etc/default/logstash will likely only be used by systemd, so the settings there may be moot. Are there any other log entries besides "Could not find any executable java binary..."? They may be instructive. What about providing the full path to the logstash binary? e.g.:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d -l /var/log/logstash --log.level=debug --path.settings=/etc/logstash

#3

Aaron - Thanks for the reply.

I will check the logstash.yml formatting and permissions. Quick question: if logstash.yml has formatting issues, would it only affect the systemctl startup and not a command-line start?

Thanks,

HB


(Aaron Mildenstein) #4

Yes, potentially, though specifying --path.settings=/etc/logstash points it at the logstash.yml in that directory. If --path.settings is omitted from a command-line invocation, and Logstash does not find logstash.yml in $LS_HOME/config ($LS_HOME is determined at launch time, relative to where the binary is), then it will simply use default values instead.
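The lookup order amounts to something like this sketch (paths are illustrative and the variable names stand in for the real flags — this is not the literal wrapper code):

```shell
# Approximate settings resolution: an explicit --path.settings wins;
# otherwise fall back to $LS_HOME/config; otherwise built-in defaults.
LS_HOME=${LS_HOME:-/usr/share/logstash}          # assumed install location
SETTINGS_DIR=${PATH_SETTINGS:-$LS_HOME/config}   # PATH_SETTINGS stands in for --path.settings
if [ -f "$SETTINGS_DIR/logstash.yml" ]; then
  echo "using $SETTINGS_DIR/logstash.yml"
else
  echo "no logstash.yml found; using built-in defaults"
fi
```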


#5

Thanks Aaron.

The logstash.yml that I'm using is extremely simple:

node.name: ${HOSTNAME}

path.data: /data/logstash

path.config: /etc/logstash/conf.d

log.level: debug

path.logs: /var/log/logstash

The permissions on the /data directory:

pwd; ll data
/
total 0
drwxr-xr-x 3 elasticsearch elasticsearch 19 Aug 18 10:44 elasticsearch
drwxrwxr-x 2 hadoop hadoop 6 Aug 16 11:26 files
drwxr-xr-x 4 logstash logstash 69 Aug 28 21:32 logstash
drwxrwxr-x 2 hadoop hadoop 6 Aug 16 11:26 metadata
drwxrwxr-x 4 hadoop hadoop 37 Aug 16 23:53 tmp
ll data/logstash/
total 4
drwxr-xr-x 2 logstash logstash 6 Aug 28 21:32 dead_letter_queue
drwxr-xr-x 2 logstash logstash 6 Aug 28 21:32 queue
-rw-r--r-- 1 logstash logstash 36 Aug 28 21:32 uuid

Regarding node 2 not finding java, I receive the same error when using the full path to logstash:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d -l /var/log/logstash --log.level=debug --path.settings=/etc/logstash
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

Just as with the attempted systemd starts on nodes 1/3, there is no log file created in /var/log/logstash. Is there a way to get more detailed output to the console?

thanks,

HB


(Aaron Mildenstein) #6

Remove the -l /var/log/logstash. Without it, it should log to STDOUT. For testing, I would also omit --path.settings=/etc/logstash, leaving the line as:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d --log.level=debug

What files are in /etc/logstash/conf.d?


#7

I received the same error after omitting the -l /var/log/logstash and --path.settings:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d --log.level=debug
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

I have just one IIS log config file, which I used successfully with Logstash 5.2. I needed to update the geoip database, and then it started working when launched from the command line on nodes 1/3:

input {
  beats {
    port => 5044
  }
}

filter {
  if [message] =~ "^#" {
    drop {}
  }

  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:userAgent} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:timetaken}"]
  }

  date {
    match => [ "timestamp", "YYYY-MM-dd HH:mm:ss" ]
  }

  useragent {
    source => "agent"
  }

  geoip {
    source => "clientIP"
    target => "geoip"
    database => "/etc/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }

  mutate {
    convert => [ "[geoip][coordinates]", "float"]
  }
}

output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }

  #stdout {codec => rubydebug}
}


(Aaron Mildenstein) #8

These 3 nodes are identical in hardware, OS, and everything else? If so, something doesn't seem to be configured correctly.

What is the output of env? I ask because the environment is how Logstash finds java, and if it can't, it gives that error message. The only way it won't find /usr/bin/java, which is ostensibly in the default path, is if a JAVA_HOME or JAVACMD environment variable is hiding somewhere among the environment variables, pointing at a location with no java.
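Roughly, the java discovery works like this simplified sketch (an approximation of the launch script's behavior, not the literal code):

```shell
# Simplified java discovery: JAVACMD wins, then JAVA_HOME/bin/java,
# then whatever `java` is first on PATH. A stale JAVA_HOME pointing at
# a removed JDK fails here even though /usr/bin/java exists.
if [ -n "$JAVACMD" ]; then
  JAVA="$JAVACMD"
elif [ -n "$JAVA_HOME" ]; then
  JAVA="$JAVA_HOME/bin/java"
else
  JAVA=$(command -v java)
fi
if [ -x "$JAVA" ]; then
  echo "using $JAVA"
else
  echo "Could not find any executable java binary." >&2
fi
```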


#9

Aaron, that was it on the java..

I found that root was indeed obtaining a JAVA_HOME variable from another env file. After removing the variable, node 2 now also starts successfully from the command line. Thank you for the guidance.

At this point all three nodes will start successfully from the command line; however, none will start with systemd. Could this have anything to do with the fact that I installed X-Pack on ES and Kibana and then removed the plugins? Any other thoughts on what might be causing the issue?

Thanks again,

HB


(Aaron Mildenstein) #10

It shouldn't have any relation to X-Pack. However, if there were weird JAVA_HOME issues, there could be something related to that.

I would carefully edit /etc/logstash/startup.options, as well as /etc/logstash/jvm.options, and then run:

/usr/share/logstash/bin/system-install

This will re-install the systemd service for Logstash for you, with the settings in the aforementioned files.


#11

Thank you Aaron.

I ran system-install to recreate the systemd service. Unfortunately, I am still unable to start Logstash via systemd.

The behavior is: the logstash process starts successfully, but no log file is created in the designated /var/log/logstash folder, and port 5044 is not opened for listening.
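For reference, this is how I'm checking the listener state (ss from iproute2, with a fallback note when nothing is bound):

```shell
# List TCP listeners and filter for the Beats port; print a note when
# nothing is bound there.
ss -lnt | grep ':5044' || echo "nothing listening on 5044"
```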

I have confirmed the environment for the root user on all 3 nodes and there are no oddities. I'm not sure whether it's relevant, but I am successfully starting and running Elasticsearch via systemd on these same 3 nodes in a cluster.

I'm curious why it's specifically the opening of port 5044 for listening that fails. As mentioned, I can successfully start Logstash via the command line and the port opens for listening, but I very much want it controlled via systemd.

Any thoughts on why specifically the port doesn't open via systemd?

Again, thanks for any guidance..

HB


(Aaron Mildenstein) #12

How do you know that Logstash is starting successfully? What's the output of systemctl status logstash?

# systemctl status logstash
* logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-08-18 09:36:04 MDT; 1 weeks 5 days ago
 Main PID: 6993 (java)
   CGroup: /system.slice/logstash.service
           `-6993 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx153

Aug 18 09:36:04 logstash systemd[1]: Started logstash.
Aug 18 09:36:09 logstash logstash[6993]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Aug 18 09:36:10 logstash logstash[6993]: Sending Logstash's logs to /var/log/logstash/input which is now configured via log4j2.properties

(This is my setup, with two instances, input and output, so your output may vary).


#13

Sorry, I should have been clearer. I should have said that the logstash process seems to start successfully:

[root@node3]# systemctl start logstash

[root@node3]# systemctl status logstash

● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-08-30 10:37:18 PDT; 9s ago
 Main PID: 56348 (java)
   CGroup: /system.slice/logstash.service
           └─56348 /usr/bin/java -Xmx500m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstas...

Aug 30 10:37:18 node3 systemd[1]: Started logstash.
Aug 30 10:37:18 node3 systemd[1]: Starting logstash...

[root@node3]# ps -ef |grep logstash

logstash 56381 1 99 10:37 ? 00:00:01 /usr/bin/java -Xmx500m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

It does appear that, despite the service saying 'Active' and the logstash process running, the status just stays on 'Starting logstash...'.

Maybe it is hung in some way?

Thanks,

HB


(Aaron Mildenstein) #14

Could be hung. I'm trying to think of how.

As a side note, perusing your config:

geoip {
  source => "clientIP"
  target => "geoip"
  database => "/etc/logstash/GeoLite2-City.mmdb"
  add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
  add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
}

mutate {
  convert => [ "[geoip][coordinates]", "float"]
}

This entire block should be this:

geoip {
  source => "clientIP"
  database => "/etc/logstash/GeoLite2-City.mmdb"
}

You don't need to make a coordinates subfield of geoip; the geoip plugin automatically generates one called location that holds the coordinates, and the default Logstash template even has a mapping for it, which makes coordinates redundant. If you must have the field called that, just rename it:

mutate {
  rename => { "[geoip][location]" => "[geoip][coordinates]" }
}

Back to the hanging: do you have SELinux or other security controls in place that would prevent the logstash user from opening a port, inbound or outbound? That would explain why you can launch it as root, but not as logstash (which is what happens when it's run as a service).
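A couple of quick checks, using standard RHEL 7 tooling (guarded in case the utilities aren't installed, and ausearch generally needs root):

```shell
# Report the SELinux mode, then look for recent AVC denials via auditd.
command -v getenforce >/dev/null && getenforce || echo "getenforce not available"
command -v ausearch >/dev/null && ausearch -m avc -ts recent || echo "ausearch not available (or no recent denials)"
```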


#15

Thanks Aaron. I updated the geoip block.

No, these systems do not have selinux in enforcing mode, nor is firewalld running.

I am also investigating as to why the logstash user is unable to open the port.


#16

Hi Aaron - I am still unable to figure out why the 'logstash' user cannot open the port via systemd. Did you ever come up with a reason why that might be the case?

Regarding your last post about my geoip block: I clicked the link you gave to the default Logstash template, and it dawned on me that I am not naming the index logstash-* but rather filebeat-*. Would this nullify the suggested changes?

One of the reasons I ask is that I am having difficulty creating a Coordinate Map, because Kibana is saying:

No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point
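For reference, this is how I'm checking how the field is actually mapped (the hostname is from my cluster, so the command fails harmlessly anywhere else):

```shell
# Ask Elasticsearch how geoip.location is mapped in the filebeat-* indices;
# a geo_point type there is what the Coordinate Map needs.
curl -s --max-time 5 'http://node1:9200/filebeat-*/_mapping/field/geoip.location?pretty' \
  || echo "cluster not reachable from this host"
```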

Thoughts?

Thanks,

HB


(system) #17

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.