Using the Elasticsearch output in Logstash to send logs to Elasticsearch

I have a central Logstash Linux machine and an Elasticsearch Linux machine. I am trying to send logs from Logstash to Elasticsearch using the elasticsearch output...

Here is the output section of my Logstash configuration using the ES output:

output {
  elasticsearch { host => 1.1.1.1:9200 }
  stdout { codec => rubydebug }
}

Thanks,
Monty

And that's not working? Is there anything interesting in Logstash's logs? What's the name of the Elasticsearch cluster? If you disable the elasticsearch output, are messages flowing nicely to stdout?
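For that last test, something like this minimal sketch should do (the file input path here is just an example; keep whatever input you already have):

input {
  file {
    path => "/var/log/test.log"    # example input path for illustration only
  }
}
output {
  # elasticsearch output commented out while debugging
  # elasticsearch { ... }
  stdout { codec => rubydebug }    # print every event to the console
}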

The name of the cluster is Yoda.
It is still not working.

This is the error message I get when I set host => to the private IP address:

userid@lstash1:~/logstash-1.4.0/bin$ ./logstash --config /etc/logstash/conf.d
Error: Expected one of #, } at line 11, column 31 (byte 233) after output {
elasticsearch { host => 155.97
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

This is working now...

output {
  elasticsearch { host => "1.1.1.1:9200" }
  stdout { codec => rubydebug }
}

Apart from the syntax snafu, it's working now because you've switched to the HTTP protocol. Previously you were using the node protocol but still relying on the default cluster name, which won't work since you've renamed the cluster.
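In other words, the two variants look roughly like this (a sketch only; the option names are from the Logstash 1.4 elasticsearch output, and the IP and the cluster name Yoda are just the ones from this thread):

# HTTP protocol: talks to Elasticsearch's REST API on port 9200,
# no cluster name required
output {
  elasticsearch {
    protocol => "http"
    host     => "1.1.1.1"
    port     => 9200
  }
}

# Node protocol: joins the cluster as a client node, so the cluster
# name has to match what you configured in Elasticsearch
output {
  elasticsearch {
    protocol => "node"
    host     => "1.1.1.1"
    cluster  => "Yoda"
  }
}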

Now I am getting these messages on the central Logstash machine and on the Windows Server 2012 machine (running logstash-forwarder, LSF) after they connect.

I also believe that my Logstash is not connecting to the ES cluster properly, based on the Logstash error message (second listing below).

Windows server 2012 R2:

2015/09/15 13:21:28.388106 Connected to ipaddress
2015/09/15 13:21:43.391154 Read error looking for ack: WSARecv tcp 1.1.1.1:53354: i/o timeout
2015/09/15 13:21:43.392143 Setting trusted CA from file: C:/Users/userID/Desktop/LSF/certs/lsfcert.crt
2015/09/15 13:21:43.394143 Connecting to [ipaddress]:5000 (ipaddress)
2015/09/15 13:21:43.475158 Connected to ipaddress

Central Logstash (receiving from LSF over the lumberjack input):

userid@lstash1:~/logstash-1.4.0/bin$ ./logstash --config /etc/logstash/conf.d
Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.4.0/plugin-milestones {:level=>:warn}
log4j, [2015-09-15T13:22:00.321] WARN: org.elasticsearch.discovery: [logstash-lstash1-28278-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:491)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)

Hmm. It actually looks like you're not getting the HTTP protocol after all. Set the cluster parameter to the name of the cluster.
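Either of these should get you past the MasterNotDiscoveredException (again just a sketch using the IP and cluster name from this thread; verify against your own setup):

# Option 1: stay on the node protocol but tell Logstash which cluster to join
output {
  elasticsearch {
    host    => "1.1.1.1"
    cluster => "Yoda"
  }
  stdout { codec => rubydebug }
}

# Option 2: force the HTTP protocol so cluster discovery isn't needed at all
output {
  elasticsearch {
    protocol => "http"
    host     => "1.1.1.1"
  }
  stdout { codec => rubydebug }
}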

Everything is working now. I am successfully getting logs from a flat file on the Windows server > Logstash > Elasticsearch > Kibana.

My next quest is starting everything as a service... I'm reading about the Debian/RPM packages.