Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms

Hi, I have a problem connecting Kibana to Elasticsearch.
My OS is Manjaro, which is Arch-based.
I installed the latest versions of Elasticsearch and Kibana (7.8.0) and tried to resolve the error by changing the port and by deleting and reinstalling Kibana, but the error keeps driving me crazy!
At the moment the only change I made in the Kibana config file is:
elasticsearch.hosts: ["http://localhost:9200"]
but I still get this error:
log [10:16:23.513] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms

In your case, it looks like Kibana tried to create an index, and that request may have actually succeeded, but because of the timeout Kibana didn't know it succeeded. The node may then have fallen back into a polling loop, waiting for another Kibana instance to complete the migration, even though no other instance was running one.

Deleting all the Kibana indices, as you did, will let Kibana attempt the migration again:

curl -XDELETE http://localhost:9200/.kibana*

If the same error happens over and over, something in your ES cluster may be preventing Kibana from creating a new index within 30s. Can you share your ES and Kibana logs from after deleting all the .kibana* indices? When multiple Kibana instances are started at the same time they will all try to create this index, but some will fail (because another instance already created it). When that happens, the node goes into a polling loop waiting for the other node to finish the migration.
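If it helps, here is a rough diagnostic sequence to run before restarting Kibana. It assumes Elasticsearch is on localhost:9200 with no security enabled (adjust if your setup differs):

```shell
# Rough diagnostic sketch: assumes Elasticsearch on localhost:9200, no auth.
# 1. Check that the cluster can elect a master at all:
curl -s -m 10 "http://localhost:9200/_cluster/health?pretty" || echo "cluster unreachable"

# 2. List any leftover Kibana saved-objects indices:
curl -s -m 10 "http://localhost:9200/_cat/indices/.kibana*?v" || echo "cluster unreachable"

# 3. Delete them so Kibana can re-attempt the saved-objects migration:
curl -s -m 10 -XDELETE "http://localhost:9200/.kibana*" || echo "cluster unreachable"
```

If the health call itself times out or reports no elected master, deleting indices won't help; the cluster-level problem has to be fixed first.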

Hope this helps
Rashmi


Thanks for your help :))
I ran the curl -XDELETE http://localhost:9200/.kibana* command and got this error:
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
Then I ran bin/kibana and got the same error as before:
log [10:08:25.226] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
What should I do?

Which Elasticsearch version are you using? Look in the Elasticsearch logs, which should be located under /data/logs/elasticsearch based on your config above. Is there anything related to discovery or cluster formation there? Are there any firewall rules that could prevent the nodes from connecting to each other?
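As a sketch of those checks (the log path and ports here are assumptions based on the config mentioned above; adjust LOG_DIR to wherever path.logs points in your elasticsearch.yml):

```shell
# Hypothetical diagnostic checks; paths and ports are assumptions, not confirmed.
LOG_DIR=/data/logs/elasticsearch

# Look for cluster-formation / discovery problems in the ES logs:
grep -ril "master not discovered\|ClusterFormationFailure" "$LOG_DIR" 2>/dev/null \
  || echo "no discovery errors found (or log directory missing)"

# Confirm the HTTP (9200) and transport (9300) ports are actually listening:
ss -tln 2>/dev/null | grep -E '9200|9300' || echo "ports not listening"
```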

My Elasticsearch version is the latest (7.8.0).
Here are some parts of the logs from that file:
[2020-07-31T19:47:42,525][INFO ][o.e.n.Node ] [str-pc] version[7.8.0], pid[22773], build[default/tar/757314695644ea9a1dc2fecd26d1a43856725e65/2020-06-14T19:35:50.234439Z], OS[Linux/5.4.52-1-MANJARO/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/14.0.1/14.0.1+7]
[2020-07-31T19:47:42,563][INFO ][o.e.n.Node ] [str-pc] JVM home [/home/str/elasticsearch-7.8.0/jdk]
[2020-07-31T19:47:42,563][INFO ][o.e.n.Node ] [str-pc] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-5011581274156919851, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/home/str/elasticsearch-7.8.0, -Des.path.conf=/home/str/elasticsearch-7.8.0/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2020-07-31T19:47:49,543][INFO ][o.e.p.PluginsService ] [str-pc] loaded module [aggs-matrix-stats]
.....
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:598) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:636) [elasticsearch-7.8.0.jar:7.8.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-07-31T20:29:53,785][WARN ][o.e.c.c.ClusterFormationFailureHelper] [str-pc] master not discovered or elected yet, an election requires a node with id [FI4Pa31JQqqkgQ9TGmMI6Q], have discovered [{str-pc}{OCmNkUUaQJmL97FkYTEvww}{SBpbkeLcRw6qVFkF3B1bQw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=8203636736, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{str-pc}{OCmNkUUaQJmL97FkYTEvww}{SBpbkeLcRw6qVFkF3B1bQw}{127.0.0.1}{127.0.0.1:9300}{dilmrt}{ml.machine_memory=8203636736, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 22, last-accepted version 55 in term 22
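For what it's worth, that WARN line says the node is waiting for a previous master with a different node id than its own, which usually means the data directory still holds cluster state from an earlier bootstrap. If this is a single-node development box (an assumption; the thread doesn't confirm it), one common setting in elasticsearch.yml is:

```yaml
# config/elasticsearch.yml
# Assumed single-node development setup (not confirmed in this thread):
# skip master election entirely and form a one-node cluster.
discovery.type: single-node
```

Even with this set, a node that has already persisted stale cluster state may refuse to form a cluster; if the data is disposable, emptying the path.data directory forces a fresh cluster bootstrap (this deletes all indices).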