Unable to connect to Elasticsearch at http://localhost:9200 on Kubernetes

I have deployed an Elasticsearch cluster with 3 master nodes, 3 data nodes, and 2 client nodes inside Kubernetes, using elasticsearch:5.5.0 and kibana:2.1.0. I am able to check the cluster health, which looks good:

"cluster_name" : "elk-poc",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 8,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0

From one of the Kubernetes instances I was able to curl the following:
curl http://elasticsearch.default.svc.cluster.local:9200
{
  "name" : "es-client-296809788-l1zhg",
  "cluster_name" : "elk-poc",
  "cluster_uuid" : "ddILZMGaTTGtk8Oe1m5EjQ",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

But the Kibana page shows: unable to connect to Elasticsearch at http://localhost:9200.

I also tried curl http://localhost:9200, which shows connection refused:

curl: (7) Failed to connect to port 9200: Connection refused
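If Kibana is still pointed at http://localhost:9200, its elasticsearch.url setting is likely at the default. A minimal kibana.yml sketch, assuming the in-cluster Service name shown in the curl above:

```yaml
# kibana.yml -- sketch; Service name taken from the working curl above
# Kibana 5.x uses the single-URL setting elasticsearch.url
elasticsearch.url: "http://elasticsearch.default.svc.cluster.local:9200"
server.host: "0.0.0.0"   # listen on all interfaces so the Kibana Service/pod network can reach it
```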

Kibana 2.1.0 sounds wrong. Make sure you are using Kibana 5.5.0, as it needs to align with the Elasticsearch version. Is Kibana running together with the client node?
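To keep the versions in lockstep, the Kibana Deployment can pin the matching image and point at the client-node Service. A sketch, assuming the official docker.elastic.co image (which maps the ELASTICSEARCH_URL environment variable to the elasticsearch.url setting); all names here are illustrative:

```yaml
# Sketch of a Kibana 5.5.0 Deployment -- names and labels are assumptions
apiVersion: extensions/v1beta1   # Deployment API group used by Kubernetes of this era
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:5.5.0   # same version as Elasticsearch
        env:
        - name: ELASTICSEARCH_URL                      # mapped to elasticsearch.url by the official image
          value: http://elasticsearch.default.svc.cluster.local:9200
        ports:
        - containerPort: 5601
```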

Thanks for your response. I have migrated to ELK 5.5.0 across the board and things are looking good now, but my logstash container is crashing with the following error:

camogts0002 [6:57] [edibehe/ERIKUBE/ELK_POC_550] -> kubectl logs logstash-2079140465-jx2st
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
10:57:17.528 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
10:57:17.534 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
10:57:17.559 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"8f12d382-e32c-4fec-8145-130e3a1b8b1a", :path=>"/usr/share/logstash/data/uuid"}
10:57:17.765 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
10:57:17.790 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
10:57:17.857 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
10:57:20.821 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

  • I think I have to delete the client node when running the Logstash and Kibana nodes..

I had a problem with my logstash.conf file; it is fixed now and working. I still need to know about the consequences of running the client node and the Logstash/Kibana pods together.
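For anyone hitting the same "Pipeline main started" followed immediately by "stopping pipeline": a minimal logstash.conf sketch that sends output to the client-node Service rather than localhost. The beats input, port, and index name are assumptions, not from the original conf:

```
# logstash.conf -- minimal sketch; input plugin, port, and index are assumptions
input {
  beats {
    port => 5044            # assumed: Beats shippers forwarding logs
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch.default.svc.cluster.local:9200"]  # client-node Service, not localhost
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

Note that an input such as stdin closing (e.g. a container run without a tty) will also make the pipeline stop cleanly right after startup, which looks like a crash loop under Kubernetes.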

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.