[ES 6.2.4] - Request Timeout after 3000ms

I'm new to Elasticsearch and I'm trying to deploy an Elasticsearch + Filebeat + Kibana stack, but I keep running into this error over and over. I've been reading related topics and trying to follow their instructions, but the result was always the same.

Both Kibana and ES are the -oss:6.2.4 Docker images, deployed in a Kubernetes cluster.

ES uses the default configuration of the Docker image, and the Kibana configuration is the following:

Environment variables:

            - name:          XPACK_MONITORING_ENABLED
              value:         "false"
            - name:          XPACK_SECURITY_ENABLED
              value:         "false"

kibana.yml:

        server.name:       logging
        server.host:       "0"
        elasticsearch.url: http://elasticsearch-logging

The ES service is exposed on port 80, mapped to port 9200 of the container. Both services are in the same namespace of the Kubernetes cluster, so the service name alone should be enough to connect to it.
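
For reference, the Service looks roughly like this (a minimal sketch; the selector label is an assumption, not my exact manifest):

        apiVersion: v1
        kind: Service
        metadata:
          name: elasticsearch-logging       # the name Kibana resolves
        spec:
          selector:
            app: elasticsearch-logging      # assumed pod label
          ports:
            - name: http
              port: 80                      # port exposed by the Service
              targetPort: 9200              # port the ES container listens on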

Any ideas what's happening with Kibana? Why can't it connect to, or get a response from, ES?

Thanks!

If you're using the -oss Docker image, you don't need to disable monitoring or security as they are not present.

From the Kibana container, can you ping elasticsearch-logging? What is in the Kibana/ES logs?
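
A direct HTTP check from the Kibana container would also help, something like this (assuming curl is available in the image):

        bash-4.2$ curl -sv http://elasticsearch-logging/        # port 80, where the Service listens
        bash-4.2$ curl -sv http://elasticsearch-logging:9200/   # port 9200, which the Service does not expose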

Ping resolves the service name as expected from the Kibana container:

bash-4.2$ ping elasticsearch-logging
PING elasticsearch-logging.latest.svc.cluster.local (100.68.201.39) 56(84) bytes of data.

Logs from Kibana when initializing (verbose mode):

....
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["status","plugin:console@6.2.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["plugins","debug"],"pid":1,"plugin":{"author":"Chris Cowan<chris@elastic.co>","name":"metrics","version":"kibana"},"message":"Initializing plugin metrics@kibana"}
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["status","plugin:metrics@6.2.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["plugins","debug"],"pid":1,"plugin":{"author":"Yuri Astrakhan<yuri@elastic.co>","name":"vega","version":"kibana"},"message":"Initializing plugin vega@kibana"}
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["server","uuid","uuid"],"pid":1,"message":"Setting new Kibana instance UUID: 75d2fdeb-eb03-4236-b3fd-415ff5fb0281"}
{"type":"log","@timestamp":"2018-05-17T15:23:56Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
{"type":"ops","@timestamp":"2018-05-17T15:23:57Z","tags":[],"pid":1,"os":{"load":[1.55859375,0.4931640625,0.22900390625],"mem":{"total":3949867008,"free":1301180416},"uptime":2357017},"proc":{"uptime":12.959,"mem":{"rss":141119488,"heapTotal":115765248,"heapUsed":80114224,"external":306681},"delay":1.4610481262207031},"load":{"requests":{},"concurrents":{"5601":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 76.4MB uptime: 0:00:13 load: [1.56 0.49 0.23] delay: 1.461"}
{"type":"log","@timestamp":"2018-05-17T15:23:59Z","tags":["status","plugin:elasticsearch@6.2.4","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

Logs from ES:

...
[2018-05-17T09:58:30,451][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [analysis-common]
[2018-05-17T09:58:30,451][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [ingest-common]
[2018-05-17T09:58:30,452][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [lang-expression]
[2018-05-17T09:58:30,452][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [lang-mustache]
[2018-05-17T09:58:30,452][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [lang-painless]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [mapper-extras]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [parent-join]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [percolator]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [rank-eval]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [reindex]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [repository-url]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [transport-netty4]
[2018-05-17T09:58:30,453][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded module [tribe]
[2018-05-17T09:58:30,454][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded plugin [ingest-geoip]
[2018-05-17T09:58:30,454][INFO ][o.e.p.PluginsService     ] [_58ZD3l] loaded plugin [ingest-user-agent]
[2018-05-17T09:58:45,645][INFO ][o.e.d.DiscoveryModule    ] [_58ZD3l] using discovery type [zen]
[2018-05-17T09:58:49,394][INFO ][o.e.n.Node               ] initialized
[2018-05-17T09:58:49,408][INFO ][o.e.n.Node               ] [_58ZD3l] starting ...
[2018-05-17T09:58:50,420][INFO ][o.e.t.TransportService   ] [_58ZD3l] publish_address {100.96.1.245:9300}, bound_addresses {[::]:9300}
[2018-05-17T09:58:50,476][INFO ][o.e.b.BootstrapChecks    ] [_58ZD3l] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-05-17T09:58:53,664][INFO ][o.e.c.s.MasterService    ] [_58ZD3l] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {_58ZD3l}{_58ZD3lbQA-m3z1tm5L2NQ}{0_tV69wfTNe7V0-qV9H-7A}{100.96.1.245}{100.96.1.245:9300}
[2018-05-17T09:58:53,684][INFO ][o.e.c.s.ClusterApplierService] [_58ZD3l] new_master {_58ZD3l}{_58ZD3lbQA-m3z1tm5L2NQ}{0_tV69wfTNe7V0-qV9H-7A}{100.96.1.245}{100.96.1.245:9300}, reason: apply cluster state (from master [master {_58ZD3l}{_58ZD3lbQA-m3z1tm5L2NQ}{0_tV69wfTNe7V0-qV9H-7A}{100.96.1.245}{100.96.1.245:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-05-17T09:58:53,764][INFO ][o.e.h.n.Netty4HttpServerTransport] [_58ZD3l] publish_address {100.96.1.245:9200}, bound_addresses {[::]:9200}
[2018-05-17T09:58:53,764][INFO ][o.e.n.Node               ] [_58ZD3l] started
[2018-05-17T09:58:55,746][INFO ][o.e.g.GatewayService     ] [_58ZD3l] recovered [2] indices into cluster_state
[2018-05-17T09:58:57,411][INFO ][o.e.c.r.a.AllocationService] [_58ZD3l] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.05.16][0]] ...]).

Solved.

The problem was that if the elasticsearch.url setting in kibana.yml doesn't explicitly specify a port, Kibana defaults to 9200. In my opinion that's a mistake: if no port is specified and the URL starts with http, it should default to port 80 (IMHO).

With this kibana.yml it works:

        server.name:       logging
        server.host:       "0"
        elasticsearch.url: http://elasticsearch-logging:80/
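
Equivalently, the URL can be passed through the container environment instead of kibana.yml (a sketch, assuming the official Kibana Docker image's env-var-to-setting mapping):

            - name:          ELASTICSEARCH_URL
              value:         "http://elasticsearch-logging:80"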

Added an issue about this on Kibana's GitHub:
