Why changing logstash filter config caused i/o timeout event

(Yuri Sibirski) #1

Hi there, I am really confused about why changing my 10-syslog-filter.conf file caused i/o timeout events on all the clients, after which no data was saved. The initial config is:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Then I changed one line to if [type] == "log", and that broke everything. The reason I changed it to "log" is that I can see the data collected on the client is marked with type=log, so I was hoping Logstash would apply my filter and parse it the way I need. Maybe I am doing something wrong. Thank you in advance.
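(If the events really do arrive with type "log", one way to keep the original parsing while working out which value the clients actually send is a conditional that accepts either value — a sketch for illustration, not the original config:)

```
filter {
  # Accept either type value while working out which one
  # the clients actually send (illustration only).
  if [type] in ["syslog", "log"] {
    syslog_pri { }
  }
}
```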

(Guy Boertje) #2

What version of Logstash?

Did you update any plugins?

Did the if block where [type] == "syslog" ever get executed?

What is your input config?

It seems to me that your clients are seeing back pressure, but I can't tell for sure without more info.

(Yuri Sibirski) #3

Here are all the answers:

What version of Logstash? 5.6.2

Did you update any plugins? I didn't update any plugins, but I upgraded all the ELK components from version 2.x to 5.6.2.

Did the if block where [type] == "syslog" ever get executed? It doesn't look like it. I saw it being executed once, when I changed the type to "log", but then it worked for a day and broke with i/o timeout events on the clients. It also looks to me like, when I set type=syslog, it gets parsed with some sort of default filter and not the one that I specify in my filter file.
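(One way to verify which filter branch actually runs is a temporary debug output that prints every event, including its type field — a standard Logstash sketch, not part of the original config:)

```
output {
  # Temporary debugging output: prints each event as a hash,
  # including the "type" field, so you can see whether events
  # arrive with type "syslog" or "log".
  stdout { codec => rubydebug }
}
```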

What is your input config? Here is my input config

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

And here is my output file

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
I can't really understand what is causing the clients to fail. All the other configuration is pretty much standard. Here is my filebeat config, just in case:


- input_type: log
  paths:
    - /var/log/syslog
    - /var/log/auth.log
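(For reference, Filebeat 5.x can also set the event type itself, so the original if [type] == "syslog" conditional would match without any filter changes — a sketch assuming the standard document_type prospector option:)

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/syslog
    - /var/log/auth.log
  # Sets [type] on each event so the Logstash conditional
  # `if [type] == "syslog"` matches (the default type is "log").
  document_type: syslog
```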

Thank you a lot in advance.

(Yuri Sibirski) #4

Extra information: bulk_max_size is set to the default of 2048, and Logstash is listening on IPv6.
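(Both of those details are worth checking: if Filebeat resolves the Logstash host to an address family Logstash is not actually listening on, the client sees i/o timeouts, and large batches amplify back pressure. A hedged Filebeat 5.x output sketch — the hosts value here is hypothetical:)

```yaml
output.logstash:
  # Hypothetical address: point at an address family that Logstash
  # actually listens on; an IPv6-only listener with an IPv4-only
  # client shows up as i/o timeouts.
  hosts: ["127.0.0.1:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  # Smaller batches can ease the back pressure that surfaces
  # as i/o timeouts (the default is 2048).
  bulk_max_size: 1024
```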

(Guy Boertje) #5

I don't think I can add much more here.

Some things to consider:

  1. Syslog has two RFCs with two very different timestamp formats. Which one are you receiving?
  2. Your date filter will try "MMM  d HH:mm:ss" first, but that format only matches for 9 days of the month; you should switch the formats around.
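(Applied to the config above, the reordered date filter would look like this — the ISO8601 fallback is my addition to cover RFC 5424 timestamps, not part of the original:)

```
date {
  # Two-digit days (10-31) first, then single-digit days (1-9),
  # then ISO8601 in case RFC 5424 timestamps arrive.
  match => [ "syslog_timestamp",
             "MMM dd HH:mm:ss", "MMM  d HH:mm:ss", "ISO8601" ]
}
```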

(Yuri Sibirski) #6

By switching around, you mean this: match => [ "syslog_timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]

(Guy Boertje) #7

Yes. It's a minor perf improvement for two thirds of the month.

(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.