Logstash Refusing Connections After Uninstalling X-Pack


(Jack Arnold) #1

Hello all,

I've recently uninstalled X-Pack from my stack as my trial had expired. Elasticsearch and Kibana are running fine, but Logstash isn't receiving any data. Everything was working fine before I removed X-Pack.

Trying a curl command returns this error:

[root@elastic01 logstash]# curl 172.19.32.154:5044
curl: (7) couldn't connect to host

I've restarted the service, which made no difference. Following a machine reboot, I am now getting the following error:

[root@elastic01 bin]# curl 172.19.32.154:5044
curl: (56) Failure when receiving data from the peer

My pipeline is configured like so:

input {
  beats {
    port => "5044"
  }
}

filter {
  grok {
#    match => { "message" => "" }
    match => { "message" => "%{GREEDYDATA}"}
  }
}

output {
  elasticsearch {
    hosts => [ "172.19.32.154" ]
#    user => "elastic"
#    password => "elastic"
  }
}

A couple of lines are commented out because I was experimenting with different things. The behaviour is the same with those lines uncommented.

Any help appreciated.

Thanks,
Jack


(Magnus Bäck) #2

Don't use curl. The beats plugin doesn't speak HTTP.

It appears Logstash is at least listening on port 5044. If you're not getting anything from Logstash, how do you know anyone is sending anything to it? Have you checked the logs of Logstash and of whatever software (Filebeat?) is sending to it?
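Since the beats input speaks its own protocol rather than HTTP, a plain TCP check is a better smoke test than curl. A minimal sketch, using standard Linux tools rather than anything Logstash-specific (run it on the Logstash host; adjust the port if your beats input isn't on 5044):

```shell
# Count listening TCP sockets bound to port 5044 (0 means Logstash's
# beats input is not up; curl errors like (7)/(56) are expected either
# way, because the beats plugin does not speak HTTP).
LISTENING=$(ss -tln 2>/dev/null | grep -c ':5044' || true)
if [ "$LISTENING" -gt 0 ]; then
  echo "something is listening on port 5044"
else
  echo "nothing is listening on port 5044"
fi

# On the sending side, Filebeat (6.x+) can check connectivity to its
# configured output itself:
#   filebeat test output
```

This only confirms the socket is open; whether events actually flow still has to come from the Logstash and Filebeat logs.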


(Jack Arnold) #3

Hi Magnus, thanks for the response.

I was using curl because it was suggested to me previously. There's not much of use in the logs. Filebeat was sending data to Logstash, which was showing up in Kibana, prior to the removal of X-Pack.
Here is an excerpt from my stdout and stderr log files; the log level is set to 'DEBUG'.

logstash-stderr.log

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

That one just repeats forever, nothing else in the file.

logstash-stdout.log

[FATAL] 2018-06-14 11:20:17.262 [main] runner - An unexpected error occurred! {:error=>org.apache.logging.log4j.core.config.ConfigurationException: No name attribute provided for Logger elasticsearchoutput, :backtrace=>["org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createLogger(org/apache/logging/log4j/core/config/properties/PropertiesConfigurationBuilder.java:255)", "org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.build(org/apache/logging/log4j/core/config/properties/PropertiesConfigurationBuilder.java:177)", "org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(org/apache/logging/log4j/core/config/properties/PropertiesConfigurationFactory.java:52)", "org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(org/apache/logging/log4j/core/config/properties/PropertiesConfigurationFactory.java:35)", "org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(org/apache/logging/log4j/core/config/ConfigurationFactory.java:239)", "org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(org/apache/logging/log4j/core/config/ConfigurationFactory.java:369)", "org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(org/apache/logging/log4j/core/config/ConfigurationFactory.java:260)", "org.apache.logging.log4j.core.LoggerContext.reconfigure(org/apache/logging/log4j/core/LoggerContext.java:613)", "org.apache.logging.log4j.core.LoggerContext.setConfigLocation(org/apache/logging/log4j/core/LoggerContext.java:603)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:453)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:314)", "RUBY.block in 
reconfigure(/usr/share/logstash/logstash-core/lib/logstash/logging/logger.rb:84)", "org.jruby.ext.thread.Mutex.synchronize(org/jruby/ext/thread/Mutex.java:148)", "org.jruby.ext.thread.Mutex$INVOKER$i$0$0$synchronize.call(org/jruby/ext/thread/Mutex$INVOKER$i$0$0$synchronize.gen)", "RUBY.reconfigure(/usr/share/logstash/logstash-core/lib/logstash/logging/logger.rb:77)", "RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:239)", "RUBY.run(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67)", "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/runner.rb:219)", "RUBY.run(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132)", "usr.share.logstash.lib.bootstrap.environment.invokeOther55:run(usr/share/logstash/lib/bootstrap//usr/share/logstash/lib/bootstrap/environment.rb:67)", "usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:67)", "java.lang.invoke.MethodHandle.invokeWithArguments(java/lang/invoke/MethodHandle.java:627)", "org.jruby.Ruby.runScript(org/jruby/Ruby.java:828)", "org.jruby.Ruby.runNormally(org/jruby/Ruby.java:747)", "org.jruby.Ruby.runNormally(org/jruby/Ruby.java:765)", "org.jruby.Ruby.runFromMain(org/jruby/Ruby.java:578)", "org.logstash.Logstash.run(org/logstash/Logstash.java:81)", "org.logstash.Logstash.main(org/logstash/Logstash.java:45)"]}
[ERROR] 2018-06-14 11:20:17.280 [main] Logstash - java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties

I fixed those errors, and as you can see from the latest run, Logstash now starts up without issue. Filebeat is set to the default log level, and I can see the harvesters have started for each of my log files. There aren't any errors appearing in the log, although I can turn up the log level and take another look if this doesn't help.

The above holds true for all my instances of Filebeat. I have three servers running Filebeat, all of which were sending log files/metrics correctly before the removal of X-Pack.
bin/logstash-plugin remove x-pack --purge is the command I used. I did the same for Kibana and Elasticsearch respectively, and both are working as expected.
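For anyone hitting the same thing: after removing the plugin it's worth confirming it is really gone and that the pipeline still parses before restarting. A hedged sketch, not the author's exact commands; LS_HOME matches the /usr/share/logstash path from the logs above, and the conf.d path assumes a package install, so adjust both to your setup:

```shell
# Confirm x-pack no longer appears in the installed plugin list.
LS_HOME=${LS_HOME:-/usr/share/logstash}
if [ -x "$LS_HOME/bin/logstash-plugin" ]; then
  if "$LS_HOME/bin/logstash-plugin" list 2>/dev/null | grep -qi x-pack; then
    STATUS="x-pack still installed"
  else
    STATUS="x-pack removed"
  fi
  # Parse-check the pipeline without actually starting Logstash:
  #   "$LS_HOME/bin/logstash" --config.test_and_exit -f /etc/logstash/conf.d/
else
  STATUS="logstash not found at $LS_HOME"
fi
echo "$STATUS"
```

The --config.test_and_exit flag reports configuration errors (like the log4j2.properties problem above would have surfaced at startup) and then exits, which is safer than restarting the service blind.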

I was just checking a few things before posting this and, weirdly enough, it has just started working: an index appeared in Elasticsearch about 20 minutes ago. The strange thing is that I haven't changed anything since before my lunch, about an hour and a half ago, so why would it start working? The command history shows nothing but tail commands, so I have no idea why it suddenly recovered. I've had this happen a few times now with Logstash: it breaks, I can't find anything in the logs, and then a day or two later it comes back up again with seemingly no changes.

Is there anything I could do to try to troubleshoot this?


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.