Curl: (7) Failed to connect to localhost port 9200: Connection refused

Someone help!

$ curl -XGET http://localhost:9200/_cluster/health?pretty
curl: (7) Failed to connect to localhost port 9200: Connection refused

Logs:
[2018-10-26T21:16:10,548][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [warmer], core [1], max [1], keep alive [5m]
[2018-10-26T21:16:10,554][DEBUG][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] thread pool [ops_es_data-prod-5-a/search] will adjust queue by [50] when determining automatic queue size
[2018-10-26T21:16:10,554][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [search], size [4], queue size [100k]
[2018-10-26T21:16:10,554][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [flush], core [1], max [1], keep alive [5m]
[2018-10-26T21:16:10,556][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [fetch_shard_store], core [1], max [4], keep alive [5m]
[2018-10-26T21:16:10,557][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [management], core [1], max [5], keep alive [5m]
[2018-10-26T21:16:10,557][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [ml_utility], size [80], queue size [500]
[2018-10-26T21:16:10,558][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [get], size [2], queue size [1k]
[2018-10-26T21:16:10,559][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [analyze], size [1], queue size [16]
[2018-10-26T21:16:10,559][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [write], size [2], queue size [500]
[2018-10-26T21:16:10,560][DEBUG][o.e.t.ThreadPool ] [ops_es_data-prod-5-a] created thread pool: name [snapshot], core [1], max [1], keep alive [5m]
[2018-10-26T21:16:11,070][DEBUG][i.n.u.i.PlatformDependent0] -Dio.netty.noUnsafe: true
[2018-10-26T21:16:11,071][DEBUG][i.n.u.i.PlatformDependent0] sun.misc.Unsafe: unavailable (io.netty.noUnsafe)
[2018-10-26T21:16:11,073][DEBUG][i.n.u.i.PlatformDependent0] Java version: 10
[2018-10-26T21:16:11,073][DEBUG][i.n.u.i.PlatformDependent0] java.nio.DirectByteBuffer.(long, int): unavailable
[2018-10-26T21:16:11,074][DEBUG][i.n.u.i.PlatformDependent] maxDirectMemory: 5351276544 bytes (maybe)
[2018-10-26T21:16:11,075][DEBUG][i.n.u.i.PlatformDependent] -Dio.netty.tmpdir: /tmp/elasticsearch (java.io.tmpdir)
[2018-10-26T21:16:11,075][DEBUG][i.n.u.i.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2018-10-26T21:16:11,078][DEBUG][i.n.u.i.PlatformDependent] -Dio.netty.noPreferDirect: true
[2018-10-26T21:16:11,079][DEBUG][i.n.u.i.PlatformDependent] -Dio.netty.maxDirectMemory: -1 bytes
[2018-10-26T21:16:11,079][DEBUG][i.n.u.i.PlatformDependent] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2018-10-26T21:16:13,898][DEBUG][o.e.s.ScriptService ] [ops_es_data-prod-5-a] using script cache with max_size [100], expire [0s]
[2018-10-26T21:16:15,083][DEBUG][o.e.m.j.JvmGcMonitorService] [ops_es_data-prod-5-a] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
[2018-10-26T21:16:15,088][DEBUG][o.e.m.o.OsService ] [ops_es_data-prod-5-a] using refresh_interval [1s]
[2018-10-26T21:16:15,096][DEBUG][o.e.m.p.ProcessService ] [ops_es_data-prod-5-a] using refresh_interval [1s]
[2018-10-26T21:16:15,121][DEBUG][o.e.m.j.JvmService ] [ops_es_data-prod-5-a] using refresh_interval [1s]
[2018-10-26T21:16:15,122][DEBUG][o.e.m.f.FsService ] [ops_es_data-prod-5-a] using refresh_interval [1s]
[2018-10-26T21:16:15,132][DEBUG][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] [ops_es_data-prod-5-a] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2018-10-26T21:16:15,133][DEBUG][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] [ops_es_data-prod-5-a] using [cluster_concurrent_rebalance] with [2]
[2018-10-26T21:16:15,154][DEBUG][o.e.c.r.a.d.ThrottlingAllocationDecider] [ops_es_data-prod-5-a] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
[2018-10-26T21:16:16,214][DEBUG][o.e.i.IndicesQueryCache ] [ops_es_data-prod-5-a] using [node] query cache with size [510.3mb] max filter count [10000]
[2018-10-26T21:16:16,221][DEBUG][o.e.i.IndexingMemoryController] [ops_es_data-prod-5-a] using indexing buffer size [510.3mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
[2018-10-26T21:16:26,576][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing expired connections
[2018-10-26T21:16:26,581][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing connections idle longer than 10000 MILLISECONDS
[2018-10-26T21:16:36,582][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing expired connections
[2018-10-26T21:16:36,582][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing connections idle longer than 10000 MILLISECONDS
[2018-10-26T21:16:46,582][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing expired connections
[2018-10-26T21:16:46,582][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing connections idle longer than 10000 MILLISECONDS
[2018-10-26T21:16:56,582][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing expired connections
[2018-10-26T21:16:56,583][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing connections idle longer than 10000 MILLISECONDS
[2018-10-26T21:17:06,583][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing expired connections
[2018-10-26T21:17:06,583][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] Closing connections idle longer than 10000 MILLISECONDS

Please turn off debug logging; it won't help here.

Please also provide your config and the version you are running.

Finally, use the </> button to format the logs/configs as code so they're easier to read.

ES version: 6.4.2
ES config:

cluster.name: live-es-cluster

node.name: "data-node-prod-5"

# Allow this node to be eligible as a master node (enabled by default):
node.master: false

# Allow this node to store data (enabled by default):
node.data: true

#################################### Paths ####################################

# Path to directory where to store index data allocated for this node:
path.data:
  - /usr/share/elasticsearch/data

# Path to log files:
path.logs: /var/log/elasticsearch

# Path to directory where we store the backup:
path.repo: /tmp/backup

bootstrap.memory_lock: true

# Set the bind address specifically (IPv4 or IPv6):
network.bind_host: 0.0.0.0

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
network.publish_host: 0.0.0.0

# Set a custom port for the node to node communication (9300 by default):
transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
http.port: 9200

# Set a custom allowed content length:
http.max_content_length: 200mb

# Disable HTTP completely:
http.enabled: true

################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and the master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master-eligible nodes to be considered
# operational within the cluster. This should be set to a quorum/majority of
# the master-eligible nodes in the cluster.
# minimum_master_nodes set to 2, to avoid split-brain:
discovery.zen.minimum_master_nodes: 2

#discovery.zen.ping.timeout: 3s

discovery.zen.ping.unicast.hosts: [10.100.235.202, 10.100.235.244, 10.100.235.248]

xpack.security.enabled: false
xpack.monitoring.enabled: true

# Thread pool settings:
thread_pool.index.queue_size: 100000
thread_pool.bulk.queue_size: 500
thread_pool.search.queue_size: 100000
thread_pool.listener.queue_size: 1000

Also, if I set the log level to ERROR I don't get any logs at all.
This was working fine until I upgraded the ES Docker images from 6.4.1 to 6.4.2, and even rolling back to 6.4.1 gives connection refused now.

Please edit your post and use the </> button to format things.

Why are you trying to fiddle with the log level in the first place?
Elasticsearch ships with most logs set to INFO, which is the recommended level; you don't need to change it.
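For reference, the log4j2.properties that ships with the 6.x distribution (a sketch of just the relevant line; your file may differ if the image was customised) already keeps the root logger at INFO:

rootLogger.level = info

So if a rootLogger override was added there, or a logger.* entry in elasticsearch.yml, removing it is enough to get back to the default behaviour.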

If you are running Docker, then you should say so from the start - it makes a significant difference for diagnosing these sorts of issues.

What does your docker setup look like? Do you have your own Dockerfile?

How are you doing port mapping from the host to the container? From first glance that looks like it could be the problem.
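For example (a sketch only, not your exact setup; the container name, data path, and image tag are assumptions), both the HTTP and transport ports need to be published explicitly unless the container runs with host networking:

$ docker run -d --name es-data-prod-5 \
    -p 9200:9200 -p 9300:9300 \
    -v /usr/share/elasticsearch/data:/usr/share/elasticsearch/data \
    docker.elastic.co/elasticsearch/elasticsearch:6.4.2

If docker ps doesn't show something like 0.0.0.0:9200->9200/tcp for the container, curl on the host will get connection refused even when Elasticsearch itself started fine.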


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.