Elasticsearch filter plugin "hosts" does not fail over

Hi, I am trying Logstash 7.17.1 with a two-node Elasticsearch cluster (ES 7.17.0), with the nodes running on localhost:9200 and localhost:9201. Both Logstash and Elasticsearch are running on the same Windows machine.

I have written a simple Logstash config, shown below:

input {
  java_generator {
    lines => ["test"]
    count => 1
    eps => 1
  }
}

filter {
  elasticsearch {
    hosts => ["localhost:9200", "localhost:9201"]
    user => "my-user"
    password => "my-password"
    index => "my-test-index"
    query_template => "/my/path/to/query.json"
    docinfo_fields => {
      "_id" => "document_id"
    }
  }
}

output {
  stdout {
    codec => rubydebug {
      metadata => true
    }
  }
}

The query template is nothing but a simple query by IDs:

{
    "query": {
        "ids": {
            "values": ["doc123"]
        }
    }
}

Everything works fine when both nodes are running. But when I stop one node (the one running on localhost:9200), I get the following error:

[2022-03-08T15:10:20,314][ERROR][logstash.javapipeline    ][ES_filter_plugin] Pipeline error {:pipeline_id=>"ES_filter_plugin", :exception=>#<Manticore::SocketException: Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: no further information>, :backtrace=>["C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:36:in `block in initialize'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:79:in `call'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:274:in `call_once'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/manticore-0.8.0-java/lib/manticore/response.rb:158:in `code'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/transport/http/manticore.rb:106:in `block in perform_request'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/transport/base.rb:289:in `perform_request'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/transport/http/manticore.rb:85:in `perform_request'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/client.rb:197:in `perform_request'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.0/lib/elasticsearch.rb:41:in `method_missing'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-7.17.0/lib/elasticsearch/api/actions/ping.rb:38:in `ping'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.11.1/lib/logstash/filters/elasticsearch.rb:324:in `test_connection!'", "C:/logstash-7.17.1/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.11.1/lib/logstash/filters/elasticsearch.rb:113:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:232:in `block in register_plugins'", "org/jruby/RubyArray.java:1821:in `each'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:231:in `register_plugins'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:590:in `maybe_setup_out_plugins'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:244:in `start_workers'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:189:in `run'", "C:/logstash-7.17.1/logstash-core/lib/logstash/java_pipeline.rb:141:in `block in start'"], "pipeline.sources"=>["C:/testPipeline/test_pipeline.conf"], :thread=>"#<Thread:0x19bf3908 run>"}

I expected to still get a result, since the node on localhost:9201 is still running, so the filter client should send the query to that node instead.
I then tested the following:

  1. Changed "hosts" to only "localhost:9201" -> works.
  2. Changed the order of the elements in the "hosts" array: hosts => ["localhost:9201", "localhost:9200"] (see the snippet below) -> doesn't work, same error.
  3. Brought the node on localhost:9200 back online -> works (obviously).
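
For reference, this is the filter block I used in test 2. It is identical to the config above, with only the order of the hosts swapped (user, password, index, and query path are placeholders, as before):

filter {
  elasticsearch {
    # same filter as above, only the hosts order is reversed
    hosts => ["localhost:9201", "localhost:9200"]
    user => "my-user"
    password => "my-password"
    index => "my-test-index"
    query_template => "/my/path/to/query.json"
    docinfo_fields => {
      "_id" => "document_id"
    }
  }
}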

It seems to me that fail-over in the Elasticsearch filter plugin does not work.
Or I simply did something wrong in the config file, which I sincerely hope is the case, as I have been messing around with this for a couple of hours.

Any help is highly appreciated.

Can anyone help with this? If it is a simple config mistake, a one-liner pointing out my problem would be a great help. Thanks!
