Constantly repeating error: A plugin had an unrecoverable error. Will restart this plugin

We are seeing the following error in the logs. Logstash runs fine for about 40 minutes and then starts logging these errors repeatedly. It is unclear what went wrong.

[2017-11-09T15:03:38,289][INFO ][logstash.pipeline ] Pipeline main started
[2017-11-09T15:03:38,345][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-09T15:42:50,736][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Tcp port=>514, type=>"syslog", id=>"aa04873f48b834660f5dcfdc573533b3f3af07db-1", enable_metric=>true, codec=><LogStash::Codecs::Line id=>"line_e209eaa8-91f2-4462-821b-c16509daed64", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">, host=>"0.0.0.0", data_timeout=>-1, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_verify=>true, ssl_key_passphrase=>>
Error: problem when accepting
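
The "Error: problem when accepting" message appears to come from the TCP input while it is accepting a new client connection, not from startup, since the pipeline had already been running for about 40 minutes. For reference, the plugin settings dumped in the error line correspond to an input roughly equivalent to the sketch below; host, mode, ssl_enable and proxy_protocol are the defaults Logstash reports rather than values set explicitly in the config (assumption: nothing else overrides them), and port 514 is a privileged port, so Logstash needs the rights to bind to it.

tcp {
  port           => 514
  type           => "syslog"
  host           => "0.0.0.0"        # default reported in the error line
  mode           => "server"         # default
  ssl_enable     => false            # default
  proxy_protocol => false            # default
}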

logstash.yml

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.

input {
  # file {
  #   path => "/opt/pki/syslog/messages"
  # }
  # beats {
  #   port => "5044"
  # }
  tcp {
    port => "514"
    type => "syslog"
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]

    # needed for alert queueing
    add_field => [ "datacenter", "cnj" ]
    add_field => [ "env",        "dev" ]
    add_field => [ "family",     "pki" ]
    add_field => [ "app",        "monitoring" ]
    add_field => [ "service",    "loggy" ]
    add_field => [ "component",  "logstash_1" ]
  }
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
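
  # Worked example (hypothetical log line) of what the grok and date filters
  # above should produce:
  #
  #   Nov  9 15:03:38 host01 sshd[2287]: Accepted publickey for user git
  #
  #   syslog_timestamp => "Nov  9 15:03:38"   (the date filter parses this into @timestamp)
  #   syslog_hostname  => "host01"
  #   syslog_program   => "sshd"
  #   syslog_pid       => "2287"              (the [pid] part is optional in the pattern)
  #   syslog_message   => "Accepted publickey for user git"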

  if [syslog_hostname] {
    # the following fields are needed for hashing
    fingerprint {
      source => "syslog_hostname"
      target => "message_key_fingerprint"
      method => "MURMUR3"
      key    => "Log analytics"
    }
    mutate {
      copy => { "message_key_fingerprint" => "message_key_int" }
    }
    mutate {
      convert => { "message_key_int" => "integer" }
    }
    mutate {
      copy => { "message_key_int" => "message_key" }
    }
    ruby {
      code => "event.set('message_key', event.get('message_key_int') % 10000)"
    }
    mutate {
      convert => { "message_key" => "string" }
    }
    mutate {
      remove_field => [ "message", "message_key_int", "message_key_fingerprint" ]
    }
  }
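
  # Note: the copy / convert / copy / ruby / convert chain above could likely
  # be collapsed into a single ruby filter that does the integer conversion,
  # the modulo and the string conversion in one step. Untested sketch, same
  # field names as above:
  #
  # ruby {
  #   code => "event.set('message_key', (event.get('message_key_fingerprint').to_i % 10000).to_s)"
  # }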
}
output {
  # stdout { codec => rubydebug }
  # elasticsearch {
  #   hosts => ["monatee-loggy-master-cnj.dev.bnymellon.net:80"]
  #   index => "monatee_loggy_cnj-%{+YYYY.MM.dd}"
  # }

  kafka {
    bootstrap_servers          => "rsomtapae182.bnymellon.net:9092,rsomtapae183.bnymellon.net:9092,rsomtapae184.bnymellon.net:9092"
    client_id                  => "r00j55n0c"
    topic_id                   => "monatee_loggy"
    jaas_path                  => "/opt/pki/logstash_1/config_kafka/kafka_client_jaas_logstash.conf"
    security_protocol          => "SASL_PLAINTEXT"
    sasl_kerberos_service_name => "kafka"
    sasl_mechanism             => "plain"
    # compression_type => "snappy"
  }
}

This error message is being logged every second, producing huge log files.

I am getting the same error. Did you find out how to fix it, Albert?
