Hi there,
I've set up the following configuration to parse a WordPress debug.log file and send it to Google Cloud Logging.
input {
  file {
    path => ["/path/to/wp-content/debug.log"]
    type => "debug"
  }
}

filter {
  if [type] == "debug" {
    # Drop empty lines before any further processing.
    if [message] =~ "^$" {
      drop {}
    }

    # Fold stack traces and continuation lines into the previous event.
    multiline {
      pattern => "(Stack trace:)|(^#.+)|(^\"\")|( thrown+)|(^\s)|(^\()|(^\))"
      what => "previous"
    }

    # Extract the log level, from either "[LEVEL]" or "PHP Level:".
    grok {
      match => { "message" => "%{GREEDYDATA} (\[%{LOGLEVEL:level}\]|PHP %{DATA:level}\:) %{GREEDYDATA}" }
    }

    # Fall back to a placeholder level when grok cannot match.
    if "_grokparsefailure" in [tags] {
      mutate { add_field => { "level" => "EMPTY" } }
    }

    mutate { lowercase => [ "level" ] }

    # Replace double quotes with single quotes so the message can be
    # passed safely on the shell command line in the exec output below.
    mutate {
      gsub => [ "message", "\"", "'" ]
    }
  }
}

output {
  if [type] == "debug" {
    exec {
      command => 'gcloud preview logging --project my-project-id write myprojectlog-%{level} "%{message}" '
    }
  }
}
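One detail worth mentioning about my setup: the multiline filter's documentation warns that it is not thread-safe, so as far as I understand the pipeline should run with a single filter worker. A sketch of such an invocation (the paths are just examples, assuming the Logstash 1.x CLI where -w / --filterworkers sets the number of filter workers):

# the multiline filter is documented as not thread-safe,
# so keep the filter workers at 1
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/wordpress.conf -w 1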
Everything works fine, but sometimes, instead of creating one aggregated event, the multiline filter splits the output sent to Google Cloud Logging into two or three separate groups.
This is the original log that Logstash parses:
In /var/log/logstash/logstash.err I sometimes find:
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-253"
Any idea?
Thank you very much
I sometimes get these in my logstash.err file as well. I haven't managed to figure out why; details below. The first line is always there, but it appears not to be the root cause, though I may be wrong:
cat /var/log/logstash/logstash.err
'[DEPRECATED] use `require 'concurrent'` instead of `require 'concurrent_ruby'`
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-33"
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-93"
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-124"
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-175"
Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-223"
My logstash.log file has nothing unusual in it.
Does anyone know what causes this, or the best way to troubleshoot it in a live environment?
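One low-impact idea I've considered for gathering more context is running the same pipeline in the foreground with verbose logging, to see which plugin is active when the exception shows up; a rough sketch with illustrative paths (assuming the Logstash 1.x CLI, where --verbose increases log verbosity):

# run the pipeline in the foreground with verbose output
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --verbose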
For me the issue appears to have been caused by running the centralised Logstash indexer (with multiline) and an Elasticsearch instance on the same server, though top and iotop did not show any issues. The resolution was separating Logstash and Elasticsearch completely; turning off multiline also solved the problem, but that wasn't a practical solution. I'm not sure how a person would identify that issue from a "java.lang.ThreadDeath" exception other than through process of elimination (Google was not my friend on this one).

I've had the two Java applications separated for a week now, with the same load and otherwise the same configuration on the same hardware and software combination, and have not seen the issue again. Fingers crossed the problem is over for me now. I guess this solution isn't going to help you too much, as you are using Google Cloud Logging instead of Elasticsearch as your data store, but perhaps this is just indicative of not enough system resources or some Java limitation (at least for Java 8, anyway).
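If turning multiline off helps but you still need the aggregation, one alternative that might be worth trying is doing the joining in a multiline codec on the input instead of in a multiline filter, which moves that work out of the filter stage entirely. A minimal sketch reusing the pattern from your original config (assuming a Logstash version that ships the multiline codec):

input {
  file {
    path => ["/path/to/wp-content/debug.log"]
    type => "debug"
    # aggregate continuation lines at the input, instead of in a
    # multiline filter later in the pipeline
    codec => multiline {
      pattern => "(Stack trace:)|(^#.+)|(^\"\")|( thrown+)|(^\s)|(^\()|(^\))"
      what => "previous"
    }
  }
}

With the codec in place, the multiline filter block would then be removed from the filter section.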