Problems getting started with Logstash and Elasticsearch

Hello,

I just installed Elasticsearch and Logstash on a Mac (Java 1.7). Elasticsearch seems to be running as expected, and while Logstash works with the simplest examples, it fails when I try to connect it to Elasticsearch:

puma:logstash-1.5.0 jcc$ logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

testing 1,2,3...
log4j, [2015-05-16T21:39:32.070]  WARN: org.elasticsearch.discovery: [logstash-puma.local-89867-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)

This is Logstash 1.5.0 and Elasticsearch 1.5.2 on OS X 10.9.5 with JDK 1.7.0_60.

Can anyone point out what I'm missing here?

Thanks!

--john

Do you know whether ES has started? Have you tried checking its status?
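For example, hitting the cluster health endpoint (assuming ES is listening on the default HTTP port, 9200) should come back with a green or yellow status:

curl 'http://localhost:9200/_cluster/health?pretty'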

Thanks for your reply, Mark. Yes, ES is running and seems to respond normally to HTTP requests via cURL. Using the same versions of ES and Logstash, I was able to run the same example on Linux.

--john

This could be a misconfiguration problem.

Try adding protocol => http to your output and try again.
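That is, an output section along these lines:

output {
  elasticsearch {
    host => localhost
    protocol => http
  }
}

With protocol => http, Logstash talks to ES over the REST API on port 9200 instead of joining the cluster as a node, which should sidestep the discovery step that's timing out above.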


Thanks for the suggestion, but

echo "hello world" | ./bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => http } }'

still doesn't seem to be getting anything over to ES. Any special considerations for OS X that I should be aware of?

--john

That command should work. Try enabling verbose logging by passing --verbose or even --debug to get Logstash to log more information about what it's doing.
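For example, something like this (reusing the command from earlier in the thread) should print the debug output to the console:

echo "hello world" | ./bin/logstash --debug -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => http } }'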

I have a similar issue connecting to Elasticsearch from Logstash using the stdin input.

Attaching the dump produced with the --debug flag in Logstash:

Logstash startup completed
send this message from stdin to Elasticsearch
โ†[36moutput received {:event=>{"message"=>"send this message from stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=

"human", "inputsource"=>"stdin", "host"=>"PriyankuK-MSL"}, :level=>:debug, :file=>"(eval)", :line=>"25", :method=>"output_func"}โ†[0m
2015-05-27T17:08:37.742Z PriyankuK-MSL send this message from stdin to Elasticsearch
โ†[36mFlushing output {:outgoing_count=>1, :time_since_last_flush=>50.355, :outgoing_events=>{nil=>[["index", {:_id=>nil, :_index=>"logstash-2015.05.27", :_type=
"human", :_routing=>nil}, #<LogStash::Event:0x728bd4 @metadata_accessors=#<LogStash::Util::Accessors:0x777dffd4 @store={"retry_count"=>0}, @lut={}>, @cancelled
=false, @data={"message"=>"send this message from stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "input
source"=>"stdin", "host"=>"PriyankuK-MSL"}, @metadata={"retry_count"=>0}, @accessors=#<LogStash::Util::Accessors:0x18ec3689 @store={"message"=>"send this messag
e from stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "inputsource"=>"stdin", "host"=>"PriyankuK-MSL"},
@lut={"type"=>[{"message"=>"send this message from stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "inp
utsource"=>"stdin", "host"=>"PriyankuK-MSL"}, "type"], "inputsource"=>[{"message"=>"send this message from stdin to Elasticsearch\r", "@version"=>"1", "@timest
amp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "inputsource"=>"stdin", "host"=>"PriyankuK-MSL"}, "inputsource"], "host"=>[{"message"=>"send this message fro
m stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "inputsource"=>"stdin", "host"=>"PriyankuK-MSL"}, "hos
t"], "message"=>[{"message"=>"send this message from stdin to Elasticsearch\r", "@version"=>"1", "@timestamp"=>"2015-05-27T17:08:37.742Z", "type"=>"human", "in
putsource"=>"stdin", "host"=>"PriyankuK-MSL"}, "message"]}>>]]}, :batch_timeout=>1, :force=>nil, :final=>nil, :level=>:debug, :file=>"/Priyanku/elasticsearch/lo
gstash/logstash-1.5.0/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb", :line=>"207", :method=>"buffer_flush"}โ†[0m
โ†[36mSending bulk of actions to client[0]: localhost {:level=>:debug, :file=>"/Priyanku/elasticsearch/logstash/logstash-1.5.0/vendor/bundle/jruby/1.9/gems/logst
ash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb", :line=>"461", :method=>"flush"}โ†[0m
โ†[31mGot error to send bulk of actions to elasticsearch server at localhost : blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNA
VAILABLE/2/no master]; {:level=>:error, :file=>"/Priyanku/elasticsearch/logstash/logstash-1.5.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4
-java/lib/logstash/outputs/elasticsearch.rb", :line=>"464", :method=>"flush"}โ†[0m
โ†[33mFailed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/s
tate not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/ela
sticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/Cl
usterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasti
csearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkActi
on$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/
elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :le
vel=>:warn, :file=>"/Priyanku/elasticsearch/logstash/logstash-1.5.0/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb", :line=>"231", :method=>"buffer
_flush"}โ†[0m

Below is the Logstash config file:

input {
  stdin {
    add_field => {inputsource => "stdin"} # hash (optional), default: {}
    #codec => ...                         # codec (optional), default: "plain"
    #debug => ...                         # boolean (optional), default: false
    #tags => ...                          # array (optional)
    type => "human"                       # string (optional)
  }
}

output {
  stdout {
  }

  elasticsearch {
    host => localhost
  }
}

@Priyanku_konar, please start your own thread for your question.

@jcc I was able to resolve a similar problem by adding the name of my Elasticsearch cluster to the config file:

input { stdin { } }
output {
  elasticsearch {
    host => localhost
    cluster => elasticsearch_brew
  }
}

I suspect the problem has something to do with the default Elasticsearch configuration that results from installing Elasticsearch via Homebrew, but I haven't dug in much further.
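If you're not sure what your cluster is actually called, querying the root endpoint (again assuming the default port, 9200) should return a cluster_name field you can copy into the cluster option:

curl 'http://localhost:9200/'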

Hope this isn't too late and that it helps!


I had to do the same thing when I was initially playing with the ELK stack.

Had to do the same thing. It took a while to figure out, given the lack of a useful error message.