Getting Flush downstream error

Hi,

I currently have an ELK stack set up with Filebeat as the log shipper.
Everything works well.

But sometimes Logstash does not pass the logs on to Elasticsearch,
and I'm getting this error:

{:timestamp=>"2016-08-12T08:58:54.724000+0000", :message=>"Multiline: flush downstream error", :exception=>#<LogStash::Inputs::Beats::InsertingToQueueTakeTooLong: LogStash::Inputs::Beats::InsertingToQueueTakeTooLong>, :level=>:error}

Here's my Logstash config:

input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^(\s|\-)"
      what => "previous"
    }
  }
}

filter {
  # nginx access log
  if [source] =~ /\/(access)\d{0,10}\.(log)/ {
    grok {
      match => {"message" => "%{COMBINEDAPACHELOG}"}
      add_tag => ["nginx_access_log"]
    }
  }

  # nginx error log
  if [source] =~ /\/(error)\d{0,10}\.(log)/ {
    grok {
      match => {"message" => "%{DATE_US} %{TIME} %{GREEDYDATA}"}
      add_tag => ["nginx_error_log"]
    }
  }
}

output {
  elasticsearch {
    hosts => "ELASTICSEARCH HOST"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I'm not sure what's wrong; hopefully someone here can help.
Thank you.

Do you have multiline anywhere?

Yes, I have multiline in some types of logs.
I set the beats input to use the multiline codec.
Am I doing something wrong in the beats input configuration?

So where is that in the config?
We really need to see the entire thing.

Hi @kandito, I have exactly the same issue with multiline logs.

My Logstash input is:

input {
  beats {
    port => 4444
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND} "
      negate => true
      what => previous
    }
  }
}

and the Logstash error message is:

{:timestamp=>"2017-01-10T05:26:27.717000+0100", :message=>"Multiline: flush downstream error", :exception=>#<LogStash::Inputs::Beats::InsertingToQueueTakeTooLong: LogStash::Inputs::Beats::InsertingToQ
ueueTakeTooLong>, :level=>:error}

Did you find a fix for this?

My Elasticsearch and Logstash versions are 2.3, on CentOS 7.2.
The Filebeat version is 1.2.3, on RedHat 5.11.

Hi @MaxCor,

I haven't found a solution, but after reducing the grok filter operations in Logstash, I haven't seen this error again so far.
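
For example (just a sketch with illustrative tag names, not my exact configuration), reducing grok work can mean anchoring the pattern and only running grok on the files that need it, so lines that cannot match fail fast:

filter {
  # Only grok nginx access logs; anchoring with ^...$ lets
  # non-matching lines fail fast instead of backtracking.
  if [source] =~ /\/(access)\d{0,10}\.(log)/ {
    grok {
      match => { "message" => "^%{COMBINEDAPACHELOG}$" }
      add_tag => ["nginx_access_log"]
      tag_on_failure => ["nginx_access_grok_failure"]
    }
  }
}

The idea is just to make each event go through as few and as cheap regex matches as possible.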

I have a "big" filter section in Logstash configuration.
I'm using Filebeat for log shipping over a VPN from 2 servers.
Then other 2 virtual servers (1 vCPU - 2GB RAM) for Logstash 2.3 with default parameters

I think this is the root cause. I'm wondering if I need to scale Logstash (vertically and/or horizontally).

vmstat reports that the system load is really low:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 1179704    188 262472    0    0    49     3   91  201  0  0 99  0  0
 0  0      0 1179580    188 262472    0    0     0     0  100  211  1  0 99  0  0
 0  0      0 1179580    188 262472    0    0     0     0   90  198  0  0 100  0  0