Logstash error after updating to 7.9.1

[2020-11-13T16:36:21,302][ERROR][logstash.agent ] Failed to execute action {:id=>:default, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}

Please, what does this mean?

Increase the log.level to debug and see if you get a more informative error message.
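For reference, a minimal sketch of how to do that, assuming a default install where logstash.yml lives in the config directory (restart Logstash afterwards, or pass --log.level debug on the command line instead of editing the file):

# config/logstash.yml -- raise logging verbosity so pipeline creation errors are logged in detail
log.level: debug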

Hi, below is what I got:

[2020-11-13T20:14:38,120][ERROR][logstash.agent ] Failed to execute action {:id=>:default, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2020-11-13T20:14:38,612][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-11-13T20:14:43,377][INFO ][logstash.runner ] Logstash shut down.
[2020-11-13T20:14:43,377][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
2020-11-13 20:14:43,378 pool-1-thread-1 DEBUG Stopping LoggerContext[name=277050dc, org.apache.logging.log4j.core.LoggerContext@6b21a869]
2020-11-13 20:14:43,378 pool-1-thread-1 DEBUG Stopping LoggerContext[name=277050dc, org.apache.logging.log4j.core.LoggerContext@6b21a869]...
2020-11-13 20:14:43,399 pool-1-thread-1 DEBUG Appender plain_console stopped with status true
2020-11-13 20:14:43,399 pool-1-thread-1 DEBUG Shutting down OutputStreamManager SYSTEM_OUT.false.false
2020-11-13 20:14:43,399 pool-1-thread-1 DEBUG OutputStream closed
2020-11-13 20:14:43,399 pool-1-thread-1 DEBUG Shut down OutputStreamManager SYSTEM_OUT.false.false, all resources released: true
2020-11-13 20:14:43,399 pool-1-thread-1 DEBUG Appender json_console stopped with status true
2020-11-13 20:14:43,330 pool-1-thread-1 DEBUG Stopped org.apache.logging.log4j.core.config.properties.PropertiesConfiguration@687fa4d0 OK
2020-11-13 20:14:43,330 pool-1-thread-1 DEBUG Stopped LoggerContext[name=277050dc, org.apache.logging.log4j.core.LoggerContext@6b21a869] with status true

The additional messages are not helpful. What does your configuration look like?

Here is my pipeline config:

input {
  # beats - all apps using filebeat logshipper
  beats {
    port => 7054
    ssl => true
    ssl_certificate_authorities => ["cert"]
    ssl_certificate => "cert"
    ssl_key => "key"
    ssl_verify_mode => "peer"
    client_inactivity_timeout => "006700"
  }
}
filter {
  grok {
    match => { "message" => "%{NUMBER:num:float} %{LOGLEVEL:loglevel} [%{DATA:class}]%{GREEDYDATA:message}" }
  }
  if "ERROR" not in [loglevel] {
    drop {}
  }
  mutate {
    add_field => { "attlogstashtracker_appcode" => "2pac" }
  }
}
output {
  if "2pac-log-windoslogs" in [tags] {
    elasticsearch {
      hosts => "host"
      user => "{user112}"
      password => "{pass112}"
      index => "location-%{+YYYY.MM}"
      manage_template => false
      ssl => true
      ssl_certificate_verification => false
      cacert => "/usr/share/logstash/config/cert"
    }
  }
  stdout {
    codec => rubydebug
  }
}

I would try setting

ssl => false

on the beats input, and commenting out all the other ssl options in the input. See if Logstash will then start (if it does, you will likely get a lot of InvalidFrameProtocolException: Invalid frame type, received exceptions, and obviously you will not get events). If it does start, then you have a problem with your certificates or keys (a missing private key, perhaps?).
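Purely as an illustration (the port, timeout and commented-out placeholders are copied from the config above, nothing here is meant to be final), the input would look something like this while testing:

input {
  beats {
    port => 7054
    ssl => false
    # ssl_certificate_authorities => ["cert"]
    # ssl_certificate => "cert"
    # ssl_key => "key"
    # ssl_verify_mode => "peer"
    client_inactivity_timeout => "006700"
  }
}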

Then try the same for the elasticsearch output. Again, you will not get any data into elasticsearch; it is just a way of testing whether the ssl configuration is what is causing the problem.
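Again only a sketch, with the placeholder host, credentials and index from your config (the surrounding conditional and the stdout output are omitted for brevity):

output {
  elasticsearch {
    hosts => "host"
    user => "{user112}"
    password => "{pass112}"
    index => "location-%{+YYYY.MM}"
    manage_template => false
    ssl => false
    # ssl_certificate_verification => false
    # cacert => "/usr/share/logstash/config/cert"
  }
}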

Awesome, thanks so much, it started. I will check the certs and keys again.

Hi Badger,

I have fixed the cert, but I'm getting another set of errors. Please see below.

[2020-11-16T18:14:39,338][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5506, remote: 10.196.5789.] Handling exception: Connection reset by peer
[2020-11-16T18:14:39,338][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) ~[?:?]
at sun.nio.ch.IOUtil.read(IOUtil.java:233) ~[?:?]
at sun.nio.ch.IOUtil.read(IOUtil.java:223) ~[?:?]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358) ~[?:?]
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2020-11-16T18:17:02,839][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5506, remote: 10.190.581425280] Handling exception: Connection reset by peer
[2020-11-16T18:17:02,839][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]

So the beat that connected then disconnected. Does the beat's own log file give any indication of why?
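If the shipper is Filebeat, one way to get more detail on its side is to raise its own log level. A minimal sketch, assuming you are editing filebeat.yml and that the path below is adjusted to your install:

# filebeat.yml -- more verbose Filebeat logging while debugging the connection resets
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat   # illustrative path, adjust as needed
  name: filebeat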
