MasterNotDiscoveredException with RHEL 7 Firewall on LS 1.4.2 & ES 1.2.1

After opening ports 9200 and 9300-9400 for TCP & UDP, I get this error from Logstash. I'm running both on the same Red Hat Enterprise Linux 7.0 server, and I can successfully log data when the firewall (firewalld) is disabled, but that's not a long-term solution.
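For reference, this is roughly how I opened the ports with firewalld (the zone is an assumption on my part; adjust if you're not using the default public zone):

```shell
# Open the ES HTTP port (9200) and the transport/discovery range
# (9300-9400) for both TCP and UDP in the permanent config,
# then reload firewalld to apply the rules.
sudo firewall-cmd --permanent --zone=public --add-port=9200/tcp
sudo firewall-cmd --permanent --zone=public --add-port=9200/udp
sudo firewall-cmd --permanent --zone=public --add-port=9300-9400/tcp
sudo firewall-cmd --permanent --zone=public --add-port=9300-9400/udp
sudo firewall-cmd --reload
```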

I tried with the default ES configuration, and again with "discovery.zen.ping.multicast.enabled: false" and "discovery.zen.ping.unicast.hosts" set to my IP address, and got the same error both times.

Here is my logstash command with its output (it works when the firewall is disabled):
$ ./logstash --verbose --debug -e 'input { stdin { } } output { stdout { } elasticsearch { cluster => "scribe_bld5" } }'
Pipeline started {:level=>:info}
log4j, [2015-10-16T10:14:13.109] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] version[1.1.1], pid[25486], build[f1585f0/2014-04-16T14:27:12Z]
log4j, [2015-10-16T10:14:13.111] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] initializing ...
log4j, [2015-10-16T10:14:13.130] INFO: org.elasticsearch.plugins: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] loaded [], sites []
log4j, [2015-10-16T10:14:17.472] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] initialized
log4j, [2015-10-16T10:14:17.473] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] starting ...
log4j, [2015-10-16T10:14:17.638] INFO: org.elasticsearch.transport: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/166.17.25.142:9301]}
log4j, [2015-10-16T10:14:47.679] WARN: org.elasticsearch.discovery: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] waited for 30s and no initial state was set by the discovery
log4j, [2015-10-16T10:14:47.680] INFO: org.elasticsearch.discovery: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] scribe_bld5/mn5UU6dLS8epxlrw-BAfbA
log4j, [2015-10-16T10:14:47.698] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] started
New Elasticsearch output {:cluster=>"scribe_bld5", :host=>nil, :port=>"9300-9305", :embedded=>false, :protocol=>"node", :level=>:info}
Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>"{ "template" : "logstash-*", "settings" : { "index.refresh_interval" : "5s" }, "mappings" : { "_default_" : { "_all" : {"enabled" : true}, "dynamic_templates" : [ { "string_fields" : { "match" : "*", "match_mapping_type" : "string", "mapping" : { "type" : "string", "index" : "analyzed", "omit_norms" : true, "fields" : { "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256} } } } } ], "properties" : { "@version": { "type": "string", "index": "not_analyzed" }, "geoip" : { "type" : "object", "dynamic": true, "path": "full", "properties" : { "location" : { "type" : "geo_point" } } } } } }}", :level=>:info}
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)

I've also tried Logstash's host/port and bind_host/bind_port settings (with port set to 9200 and to 9300) and still can't get through.

Since you've disabled multicast on the ES side, you have to point Logstash's elasticsearch output at one of the ES nodes with the host option.
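For example, sticking with the node protocol from your original command and using the address that appears in your log output, something like:

```shell
# Same pipeline as before, but with the ES node's address given
# explicitly so the output doesn't rely on multicast discovery.
./logstash -e 'input { stdin { } } output { stdout { } elasticsearch { cluster => "scribe_bld5" host => "166.17.25.142" } }'
```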

Logstash 2.0 makes HTTP the default protocol for talking to ES. Many of the Logstash/ES connectivity issues people run into disappear when using HTTP.

Must multicast be disabled (it shouldn't matter to me)? If so, must unicast hosts be enabled?

With the ES config set to
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: [ "166.17.25.142" ]

and the Logstash command
./logstash --verbose --debug -e 'input { stdin { } } output { stdout { } elasticsearch { host => "166.17.25.142" protocol => "http" } }'

Here is my output:
Pipeline started {:level=>:info}
New Elasticsearch output {:cluster=>nil, :host=>"166.17.25.142", :port=>"9200", :embedded=>false, :protocol=>"http", :level=>:info}
Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>"{ "template" : "logstash-*", "settings" : { "index.refresh_interval" : "5s" }, "mappings" : { "_default_" : { "_all" : {"enabled" : true}, "dynamic_templates" : [ { "string_fields" : { "match" : "*", "match_mapping_type" : "string", "mapping" : { "type" : "string", "index" : "analyzed", "omit_norms" : true, "fields" : { "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256} } } } } ], "properties" : { "@version": { "type": "string", "index": "not_analyzed" }, "geoip" : { "type" : "object", "dynamic": true, "path": "full", "properties" : { "location" : { "type" : "geo_point" } } } } } }}", :level=>:info}
asdf
asdf

Faraday::TimeoutError: Timeout::Error
call at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.0/lib/faraday/adapter/net_http.rb:56
build_response at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.0/lib/faraday/rack_builder.rb:139
run_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.0/lib/faraday/connection.rb:377
perform_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.2/lib/elasticsearch/transport/transport/http/faraday.rb:24
call at org/jruby/RubyProc.java:271
perform_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.2/lib/elasticsearch/transport/transport/base.rb:187
perform_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.2/lib/elasticsearch/transport/transport/http/faraday.rb:20
perform_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.2/lib/elasticsearch/transport/client.rb:102
perform_request at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.2/lib/elasticsearch/api/namespace/common.rb:21
get_template at /opt/scribe_1.0.1/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.2/lib/elasticsearch/api/actions/indices/get_template.rb:28
template_exists? at /opt/scribe_1.0.1/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:132
template_install at /opt/scribe_1.0.1/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:21
register at /opt/scribe_1.0.1/logstash/lib/logstash/outputs/elasticsearch.rb:259
each at org/jruby/RubyArray.java:1613
outputworker at /opt/scribe_1.0.1/logstash/lib/logstash/pipeline.rb:220
start_outputs at /opt/scribe_1.0.1/logstash/lib/logstash/pipeline.rb:152

Any assistance is appreciated (I may need detailed steps to get this working on RHEL 7).

Hmm, looks like a connection timeout. Does curl 166.17.25.142:9200 work?
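If it times out, it's also worth checking what the firewall is actually allowing; something along these lines:

```shell
# Does the ES HTTP port answer at all? -v shows where it stalls.
curl -v http://166.17.25.142:9200

# Which ports has firewalld actually opened in the active zone?
sudo firewall-cmd --list-ports
```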

No, it did not work with multicast or unicast.

Well, sounds like you have a network issue then.

So is this an Elasticsearch problem (works when firewalld is disabled)?

If your firewall is configured to drop connections to Elasticsearch I'd say you need to adjust your firewall configuration if you want to run Elasticsearch.

After opening port 54328, I was able to get the curl command to work. This topic can be closed.
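For anyone else who lands here: 54328 is the UDP port that ES 1.x zen multicast discovery uses, so assuming that's what was opened, the missing rule would have been something like:

```shell
# Allow ES 1.x multicast discovery traffic (UDP 54328).
sudo firewall-cmd --permanent --zone=public --add-port=54328/udp
sudo firewall-cmd --reload
```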