Elasticsearch Kibana ECONNRESET/socket hang up

Hi,

I am seeing this error page and I am not sure what is causing it. Please let me know if you have any idea what the reason could be.

Kibana is pointing to the same machine that it is hosted on.

cat /etc/kibana/kibana.yml 

server.host: 10.50.30.150
elasticsearch.url: http://10.50.30.150:9200
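
In case it is useful, the connection can be sanity-checked from the Kibana host with a plain curl (assuming curl is available; host and port taken from the config above):

curl -s http://10.50.30.150:9200/
curl -s 'http://10.50.30.150:9200/_cluster/health?pretty'

If these also hang up or get reset, the problem sits between the host and Elasticsearch rather than in Kibana itself.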

Installed versions:
rpm -aq elasticsearch kibana xpack
elasticsearch-6.1.2-1.noarch
kibana-6.1.2-1.x86_64
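
x-pack is installed as an Elasticsearch/Kibana plugin rather than a separate rpm, so it does not show up in the rpm query above. If it helps, the plugin lists can be checked with something like this (paths assume the default rpm install locations):

/usr/share/elasticsearch/bin/elasticsearch-plugin list
/usr/share/kibana/bin/kibana-plugin list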

Here is some info about the node that hosts both Kibana and Elasticsearch.

# netstat -anp | awk '/tcp/ {print $6 $7}' | sort | uniq -c
      1 ESTABLISHED-
      1 ESTABLISHED1059/elasticsearch_
     28 ESTABLISHED11533/oauth2_proxy
      1 ESTABLISHED19069/sshd:
    190 ESTABLISHED22603/java
      6 ESTABLISHED22674/node
      4 ESTABLISHED22966/nginx:
      4 ESTABLISHED22967/nginx:
      5 ESTABLISHED22968/nginx:
      4 ESTABLISHED22969/nginx:
      2 ESTABLISHED563/node_exporter
      2 LISTEN1017/master
      1 LISTEN1059/elasticsearch_
      2 LISTEN1081/sshd
      1 LISTEN11533/oauth2_proxy
      2 LISTEN1/systemd
      4 LISTEN22603/java
      1 LISTEN22674/node
      1 LISTEN22965/nginx:
      1 LISTEN563/node_exporter
      3 LISTEN910/unbound
     58 TIME_WAIT-

Kibana log:
tailf /var/log/kibana/kibana.log

{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["error","elasticsearch","admin"],"pid":9897,"message":"Request complete with error\nGET http://10.50.30.150:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => socket hang up"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:xpack_main@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:reporting@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:searchprofiler@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:tilemap@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:logstash@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:elasticsearch@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from red to red - Error: socket hang up","prevState":"red","prevMsg":{"code":"ECONNRESET"}}

Elasticsearch log:

tailf /var/log/elasticsearch/elasticsearch.log
[2018-04-29T06:26:56,271][INFO ][o.e.c.s.ClusterApplierService] [ip-10-50-30-150] detected_master {ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300}, added {{ip-10-50-45-124}{76gR60gxSJCtiO69f5QITQ}{d6jF79r0TBa1yRIeQD8M8g}{10.50.45.124}{10.50.45.124:9300},{ip-10-50-40-233}{gWnBJuHxQT2J7C4OmrAgQQ}{K35r-ZkjQn-9iLWVgDsevA}{10.50.40.233}{10.50.40.233:9300},{ip-10-50-45-225}{hf9ZXod6SL27ZIFP5V0KCw}{BDiqgQA5SXSSOJAsdBjjaQ}{10.50.45.225}{10.50.45.225:9300},{ip-10-50-40-185}{1RpuRSi1QNmOf5X63ER6Sw}{Imexn5LDReyKrdSXHYy3_g}{10.50.40.185}{10.50.40.185:9300},{ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300},{ip-10-50-30-72}{Bix5ETL4S5KNyBFR2LgeKQ}{arNwhKlRT-erxk1RQVAKdQ}{10.50.30.72}{10.50.30.72:9300},{ip-10-50-30-106}{zIOpN_3XTxaVtps4sRsrag}{2DppOBqXTP2LbyEW6KsFkQ}{10.50.30.106}{10.50.30.106:9300},}, reason: apply cluster state (from master [master {ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300} committed version [19818]])
[2018-04-29T06:26:56,537][DEBUG][o.e.a.b.TransportBulkAction] [ip-10-50-30-150] failed to execute pipeline [xpack_monitoring_6] for document [.monitoring-es-6-2018.04.29/doc/null]
java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_6] does not exist
	at org.elasticsearch.ingest.PipelineExecutionService.getPipeline(PipelineExecutionService.java:194) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.ingest.PipelineExecutionService.access$100(PipelineExecutionService.java:42) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.ingest.PipelineExecutionService$2.doRun(PipelineExecutionService.java:94) [elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.2.jar:6.1.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) [?:?]
	at java.lang.Thread.run(Thread.java:844) [?:?]
[2018-04-29T06:26:57,038][INFO ][o.e.l.LicenseService     ] [ip-10-50-30-150] license [927ef48c-0c1a-491b-b514-f17bb3cdebf7] mode [basic] - valid
[2018-04-29T06:26:57,075][INFO ][o.e.h.n.Netty4HttpServerTransport] [ip-10-50-30-150] publish_address {10.50.30.150:9200}, bound_addresses {10.50.30.150:9200}, {[fe80::5c:6aff:fe2b:101e]:9200}
[2018-04-29T06:26:57,075][INFO ][o.e.n.Node               ] [ip-10-50-30-150] started
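
Since the exception above says the xpack_monitoring_6 ingest pipeline does not exist, it can be checked directly with the get-pipeline API, for example:

curl -s 'http://10.50.30.150:9200/_ingest/pipeline/xpack_monitoring_6?pretty'

A 404 here would match the IllegalArgumentException in the log.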

Let me know if you need any other info.
Thank you.

It looks like x-pack did not load properly because of the pipeline error. Try restarting ES and check whether x-pack is loaded properly. Also, do you see this error on the other ES nodes as well?
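
To see whether x-pack is loaded after the restart, the x-pack info API and the plugins listing should tell you (host and port taken from your kibana.yml):

curl -s 'http://10.50.30.150:9200/_xpack?pretty'
curl -s 'http://10.50.30.150:9200/_cat/plugins?v'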

Thank you for replying @pjanzen.

I restarted ES and I still see the same error, and yes, I see this x-pack error on both of my ingest nodes.

Kibana is pointing to these ingest nodes.

Please let me know what you suggest.

In all my experience so far, pipeline errors were due to configuration errors, usually in the Logstash filters. Logstash started but could never create a pipeline, which in turn caused errors on the ES machines.

  1. Do you use Logstash? If so, have you changed anything there recently? (A quick config check is sketched below.)
  2. What did you do prior to discovering this error? What made you look at the logs?
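
For point 1, a quick way to rule out a broken Logstash config is the built-in config test, roughly like this (paths assume a default rpm install):

/usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash

It should report that the configuration is OK if the pipeline config parses cleanly.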

Yes, I am using Logstash, and we made no changes there as far as I am aware.
After seeing this Elasticsearch ECONNRESET error, I checked the logs.

Even after I moved Kibana to another node and pointed it at another ES node in the cluster, I still saw the same error. :frowning:

Pff, this doesn't make it any easier :slight_smile:

Gut feeling: I would first restart Logstash on all hosts, which will re-initiate all pipelines toward ES. If the error persists, I would restart ES. Other than that, I am afraid I cannot help you further.
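
If it helps, on systemd-based hosts that would be something along the lines of:

systemctl restart logstash
systemctl restart elasticsearch
tailf /var/log/elasticsearch/elasticsearch.log

with the last command just to watch whether the pipeline error comes back after the restart.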

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.