Curl: (7) Failed connect to localhost:9200; Connection refused


(amilus) #1

Hello, I tried to run Elasticsearch but it doesn't seem to work, and I don't know why; I haven't changed anything and it used to work. When I run curl -X GET 'http://localhost:9200' I get the error:
curl: (7) Failed connect to localhost:9200; Connection refused

Here is my elasticsearch.yml file:

#============ Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
http.cors.enabled : true

http.cors.allow-origin : "*"
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

I tried to restart the Elasticsearch service, but it didn't work either.
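For anyone hitting the same thing: curl exit code 7 means the TCP connection itself was rejected, so either nothing is listening on that port or a firewall is refusing the connection. A minimal triage sketch, assuming a systemd-managed install with the default RPM/DEB log path:

```shell
# Triage for "curl: (7) ... Connection refused" -- the connection was
# rejected at the TCP level, so either Elasticsearch isn't running,
# it's bound to a different address, or a firewall is rejecting it.

# 1) Is the service actually running? (systemd layout is an assumption)
systemctl status elasticsearch --no-pager 2>/dev/null || true

# 2) Is anything listening on the HTTP (9200) or transport (9300) port?
ss -tlnp 2>/dev/null | grep -E ':(9200|9300)' || echo "nothing listening on 9200/9300"

# 3) What do the most recent server logs say? (default package log path)
tail -n 50 /var/log/elasticsearch/elasticsearch.log 2>/dev/null || true
```

If step 2 shows a listener bound only to an address other than the one you curl, the fix is usually `network.host`; if nothing is listening at all, the logs from step 3 are the place to look.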


(David Pilato) #2

What are the Elasticsearch logs saying?


(amilus) #3
[2017-06-03T08:00:14,904][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T08:00:44,914][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
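Context on that warning: the high watermark defaults to 90% disk used, so once free space drops below 10% (here only 3.6% is free), Elasticsearch relocates shards away from the node. A quick sketch to check a data path's usage against that threshold (the `/var/lib/elasticsearch` path is taken from the log line above):

```shell
# Print whether the Elasticsearch data mount is past the 90% high
# watermark. In `df -P` output, column 5 is "Capacity", e.g. "97%".
df -P /var/lib/elasticsearch 2>/dev/null | awk 'NR==2 {
    used = $5 + 0                      # "97%" -> 97
    if (used > 90)
        print "high watermark exceeded: " used "% used"
    else
        print "below high watermark: " used "% used"
}'
```

Freeing disk space (or raising `cluster.routing.allocation.disk.watermark.high`) makes the warning stop, but note that this warning by itself does not explain a refused connection.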

(David Pilato) #4

Can I see the first 50 lines of your logs?


(amilus) #5

It's the same two lines I posted, repeating over and over.


(amilus) #6

So I don't know what happened, but curl -XGET now prints this result back:
{
  "name" : "TKy9zA0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7guBcft8SP2CstDUgHTcsA",
  "version" : {
    "number" : "5.4.1",
    "build_hash" : "2cfe0df",
    "build_date" : "2017-05-29T16:05:51.443Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}

But when I try to run Logstash with the Elasticsearch config, this error comes out:
[2017-06-03T09:15:53,222][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
I'm lost.


(David Pilato) #7

Can I see the first 50 lines of your logs please?


(amilus) #8
[2017-06-03T00:03:53,918][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:04:23,922][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:04:53,939][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:04:53,939][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:05:23,943][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:05:53,951][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:05:53,951][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:06:23,954][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:06:53,958][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:06:53,959][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:07:23,962][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:07:53,970][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:07:53,970][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:08:23,977][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:08:54,272][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:08:54,272][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:09:34,588][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:10:10,552][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:10:10,552][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:10:42,021][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:11:12,363][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node
[2017-06-03T00:11:12,363][INFO ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2017-06-03T00:11:06,311][WARN ][o.e.h.n.Netty4HttpServerTransport] [TKy9zA0] caught exception while handling client http traffic, closing connection [id: 0xf0164a1d, L:/127.0.0.1:9200 - R:/127.0.0.1:54292]
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:?]
	at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:?]
	at io.netty.buffer.PooledHeapByteBuf.setBytes(PooledHeapByteBuf.java:261) ~[netty-buffer-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[netty-buffer-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372) ~[netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:624) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:524) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:478) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:438) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.9.Final.jar:4.1.9.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[2017-06-03T00:11:43,571][WARN ][o.e.c.r.a.DiskThresholdMonitor] [TKy9zA0] high disk watermark [90%] exceeded on [TKy9zA01T0qNS3drXhQZMw][TKy9zA0][/var/lib/elasticsearch/nodes/0] free: 1.7gb[3.6%], shards will be relocated away from this node

(amilus) #9

I could only post this much, since the forum limit is 7000 characters.


(David Pilato) #10

The start of the logs should show something like the cluster name, the version...
Maybe your logs have been rotated.

Can you stop elasticsearch, clean the logs and restart? Then share the logs?


(amilus) #11

I did as you said; this is what the log file contains now. I hope I did it right.

   [2017-06-04T12:07:45,412][INFO ][o.e.n.Node               ] [] initializing ...
    [2017-06-04T12:07:46,783][INFO ][o.e.e.NodeEnvironment    ] [TKy9zA0] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.4gb], net total_space [47.4gb], spins? [unknown], types [rootfs]
    [2017-06-04T12:07:46,783][INFO ][o.e.e.NodeEnvironment    ] [TKy9zA0] heap size [1.9gb], compressed ordinary object pointers [true]
    [2017-06-04T12:07:47,407][INFO ][o.e.n.Node               ] node name [TKy9zA0] derived from node ID [TKy9zA01T0qNS3drXhQZMw]; set [node.name] to override
    [2017-06-04T12:07:47,408][INFO ][o.e.n.Node               ] version[5.4.1], pid[37936], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/3.10.0-514.21.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
    [2017-06-04T12:07:47,408][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
    [2017-06-04T12:08:03,096][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [aggs-matrix-stats]
    [2017-06-04T12:08:03,096][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [ingest-common]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [lang-expression]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [lang-groovy]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [lang-mustache]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [lang-painless]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [percolator]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [reindex]
    [2017-06-04T12:08:03,097][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [transport-netty3]
    [2017-06-04T12:08:03,098][INFO ][o.e.p.PluginsService     ] [TKy9zA0] loaded module [transport-netty4]
    [2017-06-04T12:08:03,100][INFO ][o.e.p.PluginsService     ] [TKy9zA0] no plugins loaded
    [2017-06-04T12:08:50,294][INFO ][o.e.d.DiscoveryModule    ] [TKy9zA0] using discovery type [zen]
    [2017-06-04T12:09:33,396][INFO ][o.e.n.Node               ] initialized
    [2017-06-04T12:09:33,424][INFO ][o.e.n.Node               ] [TKy9zA0] starting ...
    [2017-06-04T12:09:42,813][INFO ][o.e.t.TransportService   ] [TKy9zA0] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
    [2017-06-04T12:09:50,902][INFO ][o.e.c.s.ClusterService   ] [TKy9zA0] new_master {TKy9zA0}{TKy9zA01T0qNS3drXhQZMw}{jKB6ZVldQc-tpT3UYKdsUQ}{localhost}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
    [2017-06-04T12:09:56,922][INFO ][o.e.h.n.Netty4HttpServerTransport] [TKy9zA0] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
    [2017-06-04T12:09:57,406][INFO ][o.e.n.Node               ] [TKy9zA0] started
    [2017-06-04T12:10:47,179][WARN ][o.e.m.j.JvmGcMonitorService] [TKy9zA0] [gc][young][54][4] duration [1.5s], collections [1]/[4.1s], total [1.5s]/[1.7s], memory [157.5mb]->[43.3mb]/[1.9gb], all_pools {[young] [133.1mb]->[4.7mb]/[133.1mb]}{[survivor] [16.6mb]->[16.6mb]/[16.6mb]}{[old] [7.7mb]->[21.9mb]/[1.8gb]}
    [2017-06-04T12:10:47,343][INFO ][o.e.m.j.JvmGcMonitorService] [TKy9zA0] [gc][54] overhead, spent [1.5s] collecting in the last [4.1s]

(David Pilato) #12

Can you curl 127.0.0.1:9200?


(amilus) #13

That's what it returns:
curl: (7) Failed connect to 127.0.0.1:9200; Connection refused


(David Pilato) #14

You probably have a firewall, then.
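For future readers: on CentOS/RHEL 7 (which the kernel string `3.10.0-514.21.1.el7` in the logs suggests), checking the firewall and opening the port with firewalld might look like the sketch below. The `public` zone is an assumption, and port 9200 should not be exposed beyond trusted hosts, since this Elasticsearch version has no authentication by default.

```shell
# See which ports and services the firewall currently allows:
firewall-cmd --list-all

# Permanently allow the Elasticsearch HTTP port, then reload the rules:
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload
```

If the rules look fine but loopback connections are still refused, it is also worth checking raw iptables REJECT rules, which can apply to 127.0.0.1 as well.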


(amilus) #15

Thank you! Yes, it was the firewall blocking Elasticsearch.


(system) #16

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.