Elasticsearch, Kibana ECONNRESET, socket hang up

Hello everyone,
It would be great if you could help me with this issue.

Kibana had been working fine until last week, but recently I have been getting an ECONNRESET / "socket hang up" error on the Kibana web interface.

After restarting Elasticsearch on the two nodes that Kibana is hosted on, the error goes away for a short period of time and then comes back. So to me the issue does not seem to be related to Kibana itself; it looks like it is somehow related to Elasticsearch.
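
For reference, this is roughly how I restart Elasticsearch on those nodes and confirm the node is reachable again (assuming a systemd package install; the service name is the default, the IP is the node from the logs below):

sudo systemctl restart elasticsearch
# wait a moment, then check that the node responds and look at cluster health
curl -s 'http://10.50.30.150:9200/_cluster/health?pretty'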

Kibana is hosted on the same two nodes as the ingest nodes: m4.xlarge instances with 16GB of memory and an Elasticsearch JVM heap size of 8GB.
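
In case the heap settings matter, this is roughly how they are configured (assuming the default jvm.options location from the package install):

grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options
# -Xms8g
# -Xmx8g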

Kibana log files:

tailf /var/log/kibana/kibana.log

{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["error","elasticsearch","admin"],"pid":9897,"message":"Request complete with error\nGET http://10.50.30.150:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => socket hang up"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:xpack_main@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:reporting@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:searchprofiler@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:tilemap@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:logstash@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from green to red - socket hang up","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2018-04-29T08:33:30Z","tags":["status","plugin:elasticsearch@6.1.2","error"],"pid":9897,"state":"red","message":"Status changed from red to red - Error: socket hang up","prevState":"red","prevMsg":{"code":"ECONNRESET"}}

Also, I noticed that the X-Pack Monitoring section no longer works properly: sometimes it works fine and sometimes it does not.
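
If it helps with diagnosing the monitoring issue, I can also share the state of the monitoring indices; I believe they can be listed with something like:

curl -s 'http://10.50.30.150:9200/_cat/indices/.monitoring-*?v'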


My Elasticsearch logs:

tailf /var/log/elasticsearch/elasticsearch.log
[2018-04-29T06:26:56,271][INFO ][o.e.c.s.ClusterApplierService] [ip-10-50-30-150] detected_master {ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300}, added {{ip-10-50-45-124}{76gR60gxSJCtiO69f5QITQ}{d6jF79r0TBa1yRIeQD8M8g}{10.50.45.124}{10.50.45.124:9300},{ip-10-50-40-233}{gWnBJuHxQT2J7C4OmrAgQQ}{K35r-ZkjQn-9iLWVgDsevA}{10.50.40.233}{10.50.40.233:9300},{ip-10-50-45-225}{hf9ZXod6SL27ZIFP5V0KCw}{BDiqgQA5SXSSOJAsdBjjaQ}{10.50.45.225}{10.50.45.225:9300},{ip-10-50-40-185}{1RpuRSi1QNmOf5X63ER6Sw}{Imexn5LDReyKrdSXHYy3_g}{10.50.40.185}{10.50.40.185:9300},{ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300},{ip-10-50-30-72}{Bix5ETL4S5KNyBFR2LgeKQ}{arNwhKlRT-erxk1RQVAKdQ}{10.50.30.72}{10.50.30.72:9300},{ip-10-50-30-106}{zIOpN_3XTxaVtps4sRsrag}{2DppOBqXTP2LbyEW6KsFkQ}{10.50.30.106}{10.50.30.106:9300},}, reason: apply cluster state (from master [master {ip-10-50-40-180}{yYSAp14mTxmRqkoIhQWDcA}{--PfgRhQTj6GRF4JVG66wg}{10.50.40.180}{10.50.40.180:9300} committed version [19818]])
[2018-04-29T06:26:56,537][DEBUG][o.e.a.b.TransportBulkAction] [ip-10-50-30-150] failed to execute pipeline [xpack_monitoring_6] for document [.monitoring-es-6-2018.04.29/doc/null]
java.lang.IllegalArgumentException: pipeline with id [xpack_monitoring_6] does not exist
	at org.elasticsearch.ingest.PipelineExecutionService.getPipeline(PipelineExecutionService.java:194) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.ingest.PipelineExecutionService.access$100(PipelineExecutionService.java:42) ~[elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.ingest.PipelineExecutionService$2.doRun(PipelineExecutionService.java:94) [elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.2.jar:6.1.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.2.jar:6.1.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) [?:?]
	at java.lang.Thread.run(Thread.java:844) [?:?]
[2018-04-29T06:26:57,038][INFO ][o.e.l.LicenseService     ] [ip-10-50-30-150] license [927ef48c-0c1a-491b-b514-f17bb3cdebf7] mode [basic] - valid
[2018-04-29T06:26:57,075][INFO ][o.e.h.n.Netty4HttpServerTransport] [ip-10-50-30-150] publish_address {10.50.30.150:9200}, bound_addresses {10.50.30.150:9200}, {[fe80::5c:6aff:fe2b:101e]:9200}
[2018-04-29T06:26:57,075][INFO ][o.e.n.Node               ] [ip-10-50-30-150] started
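
Regarding the "pipeline with id [xpack_monitoring_6] does not exist" DEBUG message above, I believe the ingest pipeline can be checked on the cluster like this (this is just my understanding of the ingest API; please correct me if it is not the right call):

curl -s 'http://10.50.30.150:9200/_ingest/pipeline/xpack_monitoring_6?pretty'
# returns the pipeline definition if it exists, or a 404 if it does not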

Looking forward to your opinions and answers.
Thank you.
