After opening ports 9200 and 9300-9400 for both TCP and UDP, I still get the error below from Logstash. I am running Logstash and Elasticsearch on the same Red Hat Enterprise Linux 7.0 server, and logging works when the firewall (firewalld) is disabled, but that is not a long-term solution.
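For reference, this is roughly how I opened the ports (a sketch using firewall-cmd; it assumes the default zone, and the permanent rules were reloaded afterwards):

```shell
# Open the ES HTTP port and the transport/discovery range, TCP and UDP,
# then reload so the permanent rules take effect (assumes default zone)
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300-9400/tcp
sudo firewall-cmd --permanent --add-port=9300-9400/udp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # verify the ports are listed
```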
I tried with the default Elasticsearch configuration, and also with "discovery.zen.ping.multicast.enabled: false" and "discovery.zen.ping.unicast.hosts" set to my IP address; I got the same error either way.
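A sketch of the relevant elasticsearch.yml settings I used for the unicast attempt (the address is an assumption here, taken from the publish_address shown in the log below):

```yaml
# Disable multicast discovery and ping this host directly
# (address assumed from the publish_address in the log below)
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["166.17.25.142"]
```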
Here is my Logstash command with its output (it works when the firewall is disabled):
$ ./logstash --verbose --debug -e 'input { stdin { } } output { stdout { } elasticsearch { cluster => "scribe_bld5" } }'
Pipeline started {:level=>:info}
log4j, [2015-10-16T10:14:13.109] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] version[1.1.1], pid[25486], build[f1585f0/2014-04-16T14:27:12Z]
log4j, [2015-10-16T10:14:13.111] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] initializing ...
log4j, [2015-10-16T10:14:13.130] INFO: org.elasticsearch.plugins: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] loaded [], sites []
log4j, [2015-10-16T10:14:17.472] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] initialized
log4j, [2015-10-16T10:14:17.473] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] starting ...
log4j, [2015-10-16T10:14:17.638] INFO: org.elasticsearch.transport: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/166.17.25.142:9301]}
log4j, [2015-10-16T10:14:47.679] WARN: org.elasticsearch.discovery: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] waited for 30s and no initial state was set by the discovery
log4j, [2015-10-16T10:14:47.680] INFO: org.elasticsearch.discovery: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] scribe_bld5/mn5UU6dLS8epxlrw-BAfbA
log4j, [2015-10-16T10:14:47.698] INFO: org.elasticsearch.node: [logstash-elovftardisbld5.labs.isgs.lmco.com-25486-2010] started
New Elasticsearch output {:cluster=>"scribe_bld5", :host=>nil, :port=>"9300-9305", :embedded=>false, :protocol=>"node", :level=>:info}
Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>"{ "template" : "logstash-*", "settings" : { "index.refresh_interval" : "5s" }, "mappings" : { "_default_" : { "_all" : {"enabled" : true}, "dynamic_templates" : [ { "string_fields" : { "match" : "*", "match_mapping_type" : "string", "mapping" : { "type" : "string", "index" : "analyzed", "omit_norms" : true, "fields" : { "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256} } } } } ], "properties" : { "@version": { "type": "string", "index": "not_analyzed" }, "geoip" : { "type" : "object", "dynamic": true, "path": "full", "properties" : { "location" : { "type" : "geo_point" } } } } } }}", :level=>:info}
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)
I have also tried the elasticsearch output's host/port and bind_host/bind_port settings (with port set to 9200 and then 9300), but the connection still does not get through.
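Since the node protocol in use here needs the transport port range (9300-9305 per the log above) to be reachable, one quick way to see whether the firewall is actually passing a given port is a small TCP probe; a minimal sketch, where the host and port are assumptions to adjust for your setup:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Example: probe the Elasticsearch transport port locally
    # (replace with your own publish address and port)
    print(port_open("127.0.0.1", 9300))
```

Running this with the firewall enabled and then disabled would show whether the block is at the firewall level or somewhere else (e.g. a bind-address mismatch).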