Kibana 5.0.1 Request Timeout after 3000ms

Hello everyone,

I'm new here, and basically a newbie with ELK and many other things.
I've been asked to prototype ELK to process the logs generated by the application we are developing, so that we can analyze them easily.

First, I installed Filebeat, Logstash, Elasticsearch, and Kibana v5.0.1 on a CentOS 7.2 virtual machine on VMware, and I got everything working. Then, since in our production environment Kibana will run on a different machine, I decided to try connecting a Kibana 5.0.1 on my host machine (Windows 7) to the Elasticsearch running on the VM. I changed kibana.yml to set elasticsearch.url: "http://VM_IP_address:9200", and when I launched Kibana on Windows I got the following error:

log [07:44:58.682] [error][status][plugin:elasticsearch@5.0.1] Status changed from yellow to red - Request Timeout after 3000ms
log [07:44:58.685] [error][status][ui settings] Status changed from yellow to red - Elasticsearch plugin is red

I can ping the VM_IP_address from the Windows host. I also disabled the kibana service on my VM, deleted the existing .kibana index from Elasticsearch (a mistake?), and rebooted the VM, all without any change in the error message I get.

I can't figure out whether the problem is a network issue or whether I should modify something in my ELK configuration.
I tried many different things on both fronts and nothing worked. Right now, the elasticsearch.yml configuration is the default one.

Could you please help me solve this problem? I've been stuck for two days trying everything I could find on forums, but nothing seems to work :worried: Please, Elastic community, you are my only hope!

If you need any other information I could give you that would help you help me, I'd be happy to oblige.

By default, Elasticsearch binds to 127.0.0.1 and will only be accessible from localhost.

You will need to update your elasticsearch.yml and set network.host to 0.0.0.0 or to your network adapter's IP address.
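For example, in config/elasticsearch.yml (the address below is the VM address used later in this thread; adjust it to your own setup):

```yaml
# elasticsearch.yml
# Bind HTTP (9200) and transport (9300) to all interfaces,
# or use the VM's own address, e.g. 192.168.121.130
network.host: 0.0.0.0
```

Note that once Elasticsearch 5.x binds to a non-loopback address, the bootstrap checks are enforced, so the node may refuse to start until those pass.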

Let me know if this helps.

Regards, Peter

Hello Peter,

Thank you for your help. When I try setting network.host to 0.0.0.0 and restarting Elasticsearch, it gives me this error:

...
[2016-11-24 14:52:59,688][INFO ][node ] [AdXhNzS] initialized
[2016-11-24 14:52:59,688][INFO ][node ] [AdXhNzS] starting ...
[2016-11-24 14:52:59,974][INFO ][transport ] [AdXhNzS] publish_address {192.168.121.130:9300}, bound_addresses {[::]:9300}
[2016-11-24 14:52:59,978][INFO ][bootstrap ] [AdXhNzS] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2016-11-24 14:52:59,980][ERROR][bootstrap ] [AdXhNzS] Exception
java.lang.RuntimeException: bootstrap checks failed
initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
please set [discovery.zen.minimum_master_nodes] to a majority of the number of master eligible nodes in your cluster
at org.elasticsearch.bootstrap.BootstrapCheck.check(BootstrapCheck.java:132)
...

Then it shuts down by itself :sob:

I get the same error if I use my VM's IP address instead. I can still ping this address from the Windows 7 host.

In your jvm.options, the initial and maximum heap sizes are not set to the same value, it seems.
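Concretely, the first bootstrap-check message above says the initial heap (268435456 bytes = 256 MB) differs from the maximum heap (2147483648 bytes = 2 GB). The fix is to set both values equal in jvm.options (1g here is just an example, roughly half of a 2 GB VM's RAM):

```
# jvm.options — Xms and Xmx must match for the bootstrap check to pass
-Xms1g
-Xmx1g
```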

Hello,
I set the max and initial heap size to 1g (my VM has 2 GB of RAM).

Then I tried setting network.host to 0.0.0.0, and then to my VM's IP address (the one I can ping from Windows), but I still get the same error when I start Kibana on Windows :sob:

Could you paste your jvm.options configuration?

Yes, of course :slight_smile:

Here is my jvm.options file:

## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# disable calls to System#gc
-XX:+DisableExplicitGC

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# force the server VM
-server

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# flag to explicitly tell Netty to not use unsafe
-Dio.netty.noUnsafe=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}

## GC logging

#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}

# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true

Thank you again for your time and your help Peter :slight_smile:

This seems correct. Can you check your Elasticsearch output to see whether another bootstrap check is failing?

I have no bootstrap failure:
[2016-12-01 11:19:28,662][INFO ][bootstrap ] [AdXhNzS] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks

and then :

[2016-12-01 11:19:59,074][INFO ][http ] [AdXhNzS] publish_address {192.168.121.130:9200}, bound_addresses {192.168.121.130:9200}
[2016-12-01 11:19:59,075][INFO ][node ] [AdXhNzS] started
[2016-12-01 11:47:06,539][WARN ][monitor.jvm ] [AdXhNzS] [gc][young][1568][9] duration [4.8s], collections [1]/[4.8s], total [4.8s]/[5.1s], memory [91.6mb]->[24.3mb]/[1015.6mb], all_pools {[young] [66.5mb]->[262.7kb]/[66.5mb]}{[survivor] [2.5mb]->[1.6mb]/[8.3mb]}{[old] [22.4mb]->[22.4mb]/[940.8mb]}
[2016-12-01 11:47:10,899][WARN ][monitor.jvm ] [AdXhNzS] [gc][1568] overhead, spent [4.8s] collecting in the last [4.8s]
~

It seems Elasticsearch is starting correctly now.
Can you try to ping 192.168.121.130 from your Kibana machine, or access 192.168.121.130:9200 from a browser (on the Kibana machine)?

Also, on the Elasticsearch machine, you can run netstat -natp to see whether something is actually listening on 9200.

I can ping this IP address from the Windows host machine with no problem.
When I try accessing 192.168.121.130:9200 from a browser, it times out and doesn't show anything.

On the Elasticsearch machine, the netstat -natp command gives me this :

tcp6 0 0 192.168.121.130:9200 :::* LISTEN 6799/java
tcp6 0 0 192.168.121.130:9300 :::* LISTEN 6799/java
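Since ping succeeds and netstat shows Elasticsearch listening, but HTTP from the host times out, the TCP connection itself is most likely being filtered somewhere between the two machines (the CentOS 7 firewall is one common suspect, though nothing in this thread confirms it). A small sketch for checking raw TCP reachability from the Windows machine, assuming the VM address 192.168.121.130 from this thread:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    ping tests ICMP, which firewalls often allow while still blocking TCP;
    this checks the actual TCP handshake on the Elasticsearch port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers timeouts, connection refused, and unreachable networks
        return False
```

From the Windows host, port_open("192.168.121.130", 9200) returning False while ping works would point at port filtering between the machines rather than at Elasticsearch itself.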

In my kibana.yml file on the Windows machine, the only line that is not commented is the following :

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.121.130:9200"

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.