Failed to Flush

I'm getting a 'failed to flush' error. I'm confused and not quite certain what I'm doing wrong. What can I look at in ES to tell me where I'm going astray?

CentOS 6
ES 1.6

I have a plugin called Elastic HQ installed, which seems helpful, except that when this error pops up it's unable to load.

Edit: I reduced the worker count by two and it's processing OK now. We'll see.

I'm trying to load files in from disk like so:

cat /var/log/muo-2014/app01/*201403* | /opt/logstash/bin/logstash -f /root/muo/fileimport.conf -w 14

where fileimport.conf is:

input {
  stdin {
    type => "apache"
    codec => plain {
      charset => "ISO-8859-1"
    }
  }
}


filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }

    if [clientip] in ["10.1.88.11", "10.1.88.12", "10.1.88.13", "10.1.88.14",
                      "10.1.88.15", "10.1.88.16", "10.1.42.117", "10.1.42.118",
                      "10.1.42.119", "10.1.88.21", "10.1.88.22", "10.1.88.23",
                      "10.1.88.24", "10.1.88.25", "10.1.88.26", "10.1.42.127",
                      "10.1.42.128", "10.1.42.129"] {
      drop {}
    }
  }
}


output {
  elasticsearch {
    cluster => "elasticsearch.local"
    host => "127.0.0.1"
    protocol => "http"
    index_type => "apache"
    workers => 14
  }
}
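
As an aside, the long clientip list in that filter could probably be collapsed with the cidr filter plugin. A sketch, assuming the hosts you want to drop all live in 10.1.88.0/24 and 10.1.42.0/24 (note that a /24 will also match addresses that aren't in the original list):

    cidr {
      address => [ "%{clientip}" ]
      network => [ "10.1.88.0/24", "10.1.42.0/24" ]
      add_tag => [ "internal" ]
    }
    if "internal" in [tags] {
      drop {}
    }

That block would stand in for the if [clientip] in [...] conditional inside the filter section.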

Error message:

Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
    Failed to flush outgoing items {:outgoing_count=>2960, :exception=>#<Manticore::Timeout: Read timed out>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:35:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:61:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:225:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.1-java/lib/manticore/response.rb:128:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch/protocol.rb:100:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb:437:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb:436:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb:462:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb:460:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1507:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}

It's possible that with 14+ workers you were overloading ES.
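
If that's the case, it should show up as rejections in the bulk thread pool. A couple of standard cluster APIs worth hitting on the ES box while the import runs (adjust host/port to your setup):

curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
curl -s 'http://127.0.0.1:9200/_cat/thread_pool?v'

If the bulk queue/rejected columns climb, you can also shrink each bulk request from the Logstash side by lowering flush_size on the elasticsearch output, e.g. flush_size => 500 (I believe it defaults to 5000 in that plugin version).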

I left it at the default (1) and it's still doing it.
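
One thing worth double-checking: there are two separate "workers" knobs in play here. -w on the Logstash command line is the --filterworkers flag (filter worker threads), while workers inside the elasticsearch output is a per-output setting. A minimal sketch, with illustrative values:

/opt/logstash/bin/logstash -f /root/muo/fileimport.conf -w 4

output {
  elasticsearch {
    # ... other settings as above ...
    workers => 2
  }
}

If either one is still at 14, ES may be seeing the same load as before.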

Clearly I need to read up on ES and figure out what I'm doing wrong, but in the meantime, are there any links I can look at? I feel really overwhelmed by this.

Good news: I may have resolved the issue.

As everyone else has probably noticed, I was allocating 'workers' in two places: the command line and the .conf file. So yes, I removed '-w 14' from the command line but neglected to remove it from the conf file.

Removed it from both places, and I was able to import a 290 MB log file in 15m48s.
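
For anyone who lands on this thread later, the working setup is roughly the command and output section from above, minus both worker settings:

cat /var/log/muo-2014/app01/*201403* | /opt/logstash/bin/logstash -f /root/muo/fileimport.conf

output {
  elasticsearch {
    cluster => "elasticsearch.local"
    host => "127.0.0.1"
    protocol => "http"
    index_type => "apache"
  }
}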

Onward.