This weekend I've been attempting to get my cluster running.
I chose the Azure Marketplace Elasticsearch/Kibana/X-Pack offering and configured it like this:
- Single subnet: 10.0.0.0/24
- 3 data nodes: 10.0.0.6, 10.0.0.7, 10.0.0.8
- Internal load balancer: 10.0.0.4
- 1 Kibana server: 10.0.0.5
When everything had installed, I could access the Kibana site via the public address and see the cluster state etc., but I had no way of pushing data into the three data nodes running Elasticsearch.
From the Kibana server I can telnet to ports 9200 and 9300 on all three nodes.
After adding an external IP to each of the three nodes and telnetting onto them, I can also telnet 'back' to the Kibana server from each node.
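To illustrate the kind of checks I've been running from the Kibana box (commands roughly from memory, and I'm assuming the nodes expose the standard Elasticsearch HTTP API on 9200; add -u user:password if X-Pack security wants credentials):

telnet 10.0.0.6 9200                                 # port is reachable
curl http://10.0.0.6:9200/                           # should return the node/cluster JSON banner
curl http://10.0.0.6:9200/_cluster/health?pretty     # cluster health over the HTTP API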
So I installed Logstash on the Kibana server and configured a pipeline to pull data from a source and push it to one of the data nodes... and I started getting problems. Just to rule out the load balancer, I tried pushing to that as well.
I think I have three issues:
- Can I see the Logstash server as part of the cluster on the monitoring page in Kibana, or is that not possible (or not something I should be doing)?
- I tried pushing data to the ES cluster and got this error:
{:timestamp=>"2017-11-19T19:58:39.159000+0000", :message=>"An unexpected error occurred!", :error=>#<URI::InvalidURIError: path conflicts with opaque>, :class=>"URI::InvalidURIError", :backtrace=>["/opt/logstash/vendor/jruby/lib/ruby/1.9/uri/generic.rb:815:in check_path'", "/opt/logstash/vendor/jruby/lib/ruby/1.9/uri/generic.rb:870:in
path='", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:178:in host_to_url'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:109:in
build_client'", "org/jruby/RubyArray.java:2414:in map'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:109:in
build_client'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:40:in
build'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch.rb:132:in build_client'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:14:in
register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:75:in register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:173:in
start_workers'", "org/jruby/RubyArray.java:1613:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:173:in
start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:126:in run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/agent.rb:210:in
execute'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/runner.rb:90:in run'", "org/jruby/RubyProc.java:281:in
call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/runner.rb:95:in run'", "org/jruby/RubyProc.java:281:in
call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24:in `initialize'"], :level=>:warn}
I'm using the following pipeline configuration:
input {
  twitter {
    consumer_key => "REDACTED"
    consumer_secret => "REDACTED"
    oauth_token => "REDACTED"
    oauth_token_secret => "REDACTED"
    keywords => ["Call of Duty"]
    full_tweet => true
  }
}

output {
  elasticsearch {
    hosts => "10.0.0.4"          # tried the other ES node IP addresses as well
    bind_port => 9300
    index => "*"
    cluster => "Intel-CL"
    document_type => "Twitter"
    node_name => 'intelkibana'
  }
}
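For what it's worth, my reading of the logstash-output-elasticsearch 2.x docs is that the plugin now talks plain HTTP to port 9200 and no longer uses cluster / node_name / bind_port, so the stripped-down output below is what I was planning to try next (the index name and the commented-out credentials are just my guesses, not something I've confirmed):

output {
  elasticsearch {
    hosts => ["10.0.0.6:9200", "10.0.0.7:9200", "10.0.0.8:9200"]   # HTTP port, not the 9300 transport port
    index => "twitter-%{+YYYY.MM.dd}"                              # a concrete index name instead of "*"
    document_type => "Twitter"
    # user => "elastic"                                            # possibly required if X-Pack security is enabled
    # password => "changeme"
  }
}

Does that look closer to what the HTTP output expects, or am I still missing something?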
Also, I'm not seeing any connections on any of the ES servers when running netstat -nlp, so it's as if the Logstash service running on the Kibana server isn't even connecting, yet I know the ports are open. So should I have a dedicated Logstash server? Should Logstash not be running on the Kibana server?
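For completeness, this is roughly how I've been checking on the data nodes (assuming the usual net-tools/iproute2 utilities are installed; netstat -nlp only lists listening sockets, so I'm also looking for established connections):

netstat -nlp | grep 9200        # listening sockets (what I originally checked)
netstat -ant | grep 9200        # any established connections on the HTTP port
ss -tnp | grep 9200             # the same via ss, including the owning process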
- Is there any documentation or blog post from anyone who has run (near enough) the same Azure Marketplace setup, added a Logstash server into the same subnet, and configured it to push data to the ES nodes? After all the research and blind alleys this weekend (two days of chasing links and testing ideas based on Bitnami, LOGZ.IO and many other examples), I seriously don't understand why there is so much information yet none of it is relevant to Logstash and clusters: plenty of base material, but nothing on how to pull it all together.
I could instead just create one single large VM in Azure, install ELK and have it running in about an hour, but I thought I'd try the clustering approach and use the Marketplace... I'm starting to think that was a mistake. The Marketplace product should be complete rather than half-finished as it is, and should come with some information on how to push data into it, because on its own it's pretty useless from my point of view.