Logstash hung and can't even be restarted; might the Elasticsearch output plugin be the culprit?

Hi,

I have a configuration that reads from a file and sends its content to a data stream in Elastic.
My setup is as follows:

I have a conf file under /etc/logstash/conf.d/ that listens on a port for the logs and writes them to the file in question:

input {
    syslog {
      host => "127.0.0.1"
      port => 5005
    }
}

filter
{
        if [program] =~ "box_Firewall" {}
        else { drop{} }
}

output {
        stdout {}
        file {
                path => "/var/log/barracuda-firewall.log"
        }
}

I have a second conf file that I start manually using /usr/share/logstash/bin/logstash -f myconf.conf; this one reads the file and sends the events to Elastic.

input {
        file {
                path => "/var/log/barracuda-firewall.log"
                start_position => "beginning"
        }
        stdin {}
}

filter
{
        mutate { rename => { "[host]" => "[host][name]" } }
}

output
{
        elasticsearch
        {
                cloud_id => "XXX"
                cloud_auth => "XXX"
                index => "logs-barracuda-custom-default"
                ssl => true
                ssl_certificate_verification => false
                cacert => "/usr/local/share/ca-certificates/ca_elasticsearch.crt"
                http_compression => true
                manage_template => false
                action => "create"
                pipeline => "test1"
        }
}

Now, the first time I manually started the second conf it worked, and I stopped it to review my logs in Elastic, which looked exactly as I wanted them. The second time I manually started the conf file, something happened: I began receiving errors and Logstash seems to have frozen.

Currently, I can't even restart it using systemctl restart logstash, as the command just hangs.
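In case it helps, these are roughly the commands I'm using to check on the service while the restart hangs (plain systemd commands, nothing Logstash-specific):

        # check the unit state (this is where I see it stuck)
        systemctl status logstash
        # tail the service logs, jumping to the end of the journal
        journalctl -u logstash -e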

In the logs I can see the following errors:

Jan 18 20:03:47 daesoc01 logstash[2765162]: [2022-01-18T20:03:47,782][INFO ][logstash.outputs.elasticsearch][elastic][b1701058e8062b09183c724fca435cfc55d34ddef7a2704e2c79696cbd4d3d4e] Retrying failed action {:status=>500, :action=>["create", {:_id=>nil, :_index=>"logs-barracuda-custom-default", :routing=>nil, :pipeline=>"test1"}, {"@version"=>"1", "path"=>"/var/log/barracuda-firewall.log", "message"=>"{\"@version\":\"1\",\"logsource\":\"127.0.0.1\",\"facility\>

Jan 18 20:03:47 daesoc01 logstash[2765162]: [2022-01-18T20:03:47,782][INFO ][logstash.outputs.elasticsearch][elastic][b1701058e8062b09183c724fca435cfc55d34ddef7a2704e2c79696cbd4d3d4e] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>6}

This one is constantly showing up and I guess this is my problem, but why that is and how to restart Logstash remain a mystery to me. Please help me out here :slight_smile:

[2022-01-18T20:25:07,941][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>6}

Also, the logstash service is stuck in a "deactivating" state.
