Elasticsearch went nuts because we had a client trying to send to a closed index?

Someone's Redis queue was badly backed up, and Logstash (using the
elasticsearch_http output plugin) was trying to flush the backlog into an
index that had been closed.

That resulted in thousands of these:

{:timestamp=>"2015-01-14T10:24:19.883000-0500", :message=>"Failed to flush
outgoing items", :outgoing_count=>1000, :exception=>#<RuntimeError: Non-OK
response code from Elasticsearch: 404>,
:backtrace=>["/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:127:in
bulk_ftw'", "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:80:inbulk'", "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:321:in
flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:inbuffer_flush'", "org/jruby/RubyHash.java:1339:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:inbuffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in
buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:inbuffer_receive'",
"/opt/logstash/lib/logstash/outputs/elasticsearch.rb:317:in receive'", "/opt/logstash/lib/logstash/outputs/base.rb:86:inhandle'",
"/opt/logstash/lib/logstash/outputs/base.rb:78:in `worker_setup'"],
:level=>:warn}

{:timestamp=>"2015-01-14T10:36:03.399000-0500", :message=>"Failed to flush
outgoing items", :outgoing_count=>400, :exception=>RuntimeError,
:backtrace=>["/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:240:in
post'", "/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:213:inflush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in
buffer_flush'", "org/jruby/RubyHash.java:1339:ineach'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in
buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:inbuffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in
buffer_receive'", "/opt/logstash/lib/logstash/outputs/elasticsearch_http.rb:191:inreceive'", "/opt/logstash/lib/logstash/outputs/base.rb:86:in handle'", "/opt/logstash/lib/logstash/outputs/base.rb:78:inworker_setup'"],
:level=>:warn}
{:timestamp=>"2015-01-14T10:36:03.577000-0500", :message=>"Error writing
(bulk) to elasticsearch", :response=>#<FTW::Response:0x67e136d2
@headers=FTW::HTTP::Headers <{"content-type"=>"application/json;
charset=UTF-8", "content-length"=>"77"}>, @body=<FTW::Connection(@4022)
@destinations=["logs.vistaprint.svc:9200"] @connected=true
@remote_address="10.89.238.12" @secure=false >, @status=404, @reason="Not
Found", @logger=#<Cabin::Channel:0x1c7f97ce
@subscriber_lock=#Mutex:0x7cc763ff, @data={},
@metrics=#<Cabin::Metrics:0x3bf0ac5f @channel=#<Cabin::Channel:0x1c7f97ce
...>, @metrics={}, @metrics_lock=#Mutex:0x3ec32f5>, @subscribers={},
@level=:info>, @version=1.1>,
:response_body=>"{"error":"IndexMissingException[[logstash-2014.12.27]
missing]","status":404}", :request_body=>"", :level=>:error}

I happened to notice the stale index name, 'logstash-2014.12.27', in the error.

This caused everything to back up. Is there a setting somewhere that tells
Elasticsearch to just drop those writes on the floor?
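
(For what it's worth, I know the manual way out is to reopen the stale index
so the backlog can drain. A rough sketch of that, assuming the index is
closed rather than deleted; the endpoint below is a placeholder for ours,
logs.vistaprint.svc:9200:)

```python
import requests

ES = "http://localhost:9200"     # placeholder endpoint; ours is logs.vistaprint.svc:9200
INDEX = "logstash-2014.12.27"    # the index named in the 404 above

# The indices open API reopens a closed index; once it is open again,
# the buffered bulk requests start succeeding and the queue can drain.
resp = requests.post("{0}/{1}/_open".format(ES, INDEX))
print(resp.status_code, resp.text)
```

But I'd rather have the stale events dropped than babysit closed indices.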

Thanks.


There is nothing in ES that can do this, because silently dropping those
writes would be essentially invisible data loss, which is bad :slight_smile:
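
If you want those events gone, you have to drop them yourself on the sending
side, before the output retries them. One way is to check whether the target
daily index exists and is open before replaying the backlog. A rough sketch
against the cluster state API (the endpoint and the dropping logic here are
assumptions, not anything from your config):

```python
import requests

ES = "http://localhost:9200"   # assumption: point this at your cluster

def index_is_open(name):
    # Closed indices still appear in the cluster state metadata, but with
    # state "close"; an index that is gone entirely simply won't appear.
    resp = requests.get("{0}/_cluster/state/metadata/{1}".format(ES, name))
    if resp.status_code != 200:
        return False                     # treat any error as "not writable"
    meta = resp.json().get("metadata", {}).get("indices", {}).get(name)
    return meta is not None and meta.get("state") == "open"

# Drop (or divert to a dead-letter list) events whose daily index is
# closed or missing, instead of letting the output retry them forever.
if not index_is_open("logstash-2014.12.27"):
    print("stale index; dropping event rather than retrying")
```

On a 1.4-era stack that check would have to live in your own replay tooling
or in something you put in front of the output; the point is just that the
drop has to be explicit and on your side, not silent inside ES.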

