Logstash process stops abruptly

I am using Logstash version 7.7.1 to parse some log files located on the same server, using the file input, a grok filter, and the Elasticsearch output.

The process stops abruptly after some time, with nothing in the logs, even in debug logging mode.
Earlier I was getting JVM heap memory errors, after which I increased the heap memory.
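For reference, the heap change was made in config/jvm.options, roughly like this (the 4g value is just what I ended up using on this server, not a recommendation):

    # config/jvm.options -- initial and maximum JVM heap for Logstash
    -Xms4g
    -Xmx4g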

Then I started getting the error below:
A fatal error has been detected by the Java Runtime Environment:
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x00000000000000a8

Not many details, but I tried increasing the stack size, and that may have worked, because the error is gone and Logstash now runs for much longer.
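The stack size change was also in config/jvm.options, something along these lines (the exact value is an example):

    # config/jvm.options -- per-thread stack size
    -Xss4m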

Some of the log entries are pretty big, which is what made me tweak the settings above.
But now it's not writing anything to the logs or stdout, so I am not sure what is causing it to fail. Please suggest how I can debug this further.
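For completeness, this is roughly how I have been starting Logstash when trying debug mode (the config path here is just an example):

    # start Logstash with debug logging (path is an example)
    bin/logstash -f /etc/logstash/conf.d/pipeline.conf --log.level=debug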

Hm, difficult to debug without your data, especially with this kind of error. Could you paste all the relevant logstash configs, e.g. the filter configs and ES output config?

You say

But now, its not even writing anything in logs or stdout

OK - but are you getting log entries in the Elasticsearch output at least? So does it get through part of the logs before it slumps over?

Hi @Emanuil, I am getting the data in ES for as long as the Logstash process runs. Then the process itself aborts, so no further data is written to ES.

input {
  file {
    path => "/graylog/*/server.log.2020*"
    start_position => "beginning"
    exclude => "*.gz"
  }
}

filter {
  if "SAMPLEDATA" in [message] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timest} %{LOGLEVEL}  %{DATA} %{DATA}:%{NUMBER} - %{DATA}TEST %{GREEDYDATA:jsondata}" }
      remove_field => ["message"]
    }
    if "_grokparsefailure" not in [tags] {
      date {
        match => ["timest", "yyyy-MM-dd HH:mm:ss"]
        timezone => "Asia/Colombo"
        target => "@timestamp"
      }
      json {
        source => "jsondata"
        target => "token"
      }
      split {
        field => "[token]"
      }
      mutate {
        add_field => {
          "tid" => "%{[token][tid]}"
          "pid" => "%{[token][pids]}"
        }
      }
      mutate {
        remove_field => ["jsondata", "token"]
      }
    }
  }
  if "SAMPLEDATA2" in [message] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timest} %{LOGLEVEL}  %{DATA} %{DATA}:%{NUMBER} - %{GREEDYDATA:jsondata}" }
      remove_field => ["message"]
    }
    if "_grokparsefailure" not in [tags] {
      date {
        match => ["timest", "yyyy-MM-dd HH:mm:ss"]
        timezone => "Asia/Colombo"
        target => "@timestamp"
      }
      json {
        source => "jsondata"
        target => "token"
      }
      split {
        field => "[token]"
      }
      split {
        field => "[token][pids]"
      }
      mutate {
        add_field => {
          "tid" => "%{[token][tid]}"
          "pid" => "%{[token][pids]}"
        }
      }
      mutate {
        remove_field => ["jsondata", "token"]
      }
    }
  }
  else {
    drop {}
  }
}

output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch {
      hosts => ["20.16.27.19:9200"]
      index => "pst-%{+YYYY.MM.dd}"
    }
  }
}


Hi @Emanuil, do the above filter details give any insight into the issue? Still waiting for a diagnosis/solution to this problem.

There should be a log file created by the JVM, something like "hs_err_pid18240.log" with more details on the crash.

What JVM version and OS are you running on?
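If you can't find the crash report, something like this usually turns it up, along with the version details (the paths/commands are examples, adjust them for your setup):

    # The crash report is normally written to the directory Logstash was
    # started from, or to the path given by -XX:ErrorFile if that is set
    find / -name "hs_err_pid*.log" 2>/dev/null

    # JVM and OS details that help with the diagnosis
    java -version
    cat /etc/os-release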

A JVM crash like this is highly unusual. Typically, the Logstash logs leading up to such a crash give some indication that something is not right; make sure you look at all warning and error entries prior to the crash, as these could provide hints.
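Something like the following can help surface them quickly (the path is the default for package installs; adjust it to your installation):

    # Scan the Logstash log for warnings/errors leading up to the crash
    grep -E "WARN|ERROR" /var/log/logstash/logstash-plain.log | tail -n 50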


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.