Visualize Request Timeout after 30000ms

I am getting "Visualize: Request Timeout after 30000ms" while loading a dashboard in Kibana, so I increased the request timeout in kibana.yml:

elasticsearch.requestTimeout: 200000
After this change I no longer get the timeout error in Kibana, but the dashboard still fails to load, and I get the attached error: "Google Chrome ran out of memory while trying to display this webpage."


Please suggest a solution.

Hi there, I have a few questions to try to help me understand the problem better:

  1. Which version of Kibana are you using?
  2. Could you copy/paste the URL of the dashboard you're trying to load?
  3. Could you take a look at this issue and tell me if it seems related?


Hi CJ,

Thanks for taking a look at my issue.
1. I am using kibana-4.5.4-windows, elasticsearch-2.3.5, logstash-5.0.0, and filebeat-5.2.2-windows-x86_64.

3. I am looking into the issue you linked.
You may not be able to open my dashboard URL, because I am running ELK on the client's machine and our machine needs to hook into their environment.
Best regards,

1. I use Filebeat to read 3 different log files (defaultTrace_.trc, applications_.log, security_00..log) from 7 servers. The volume of data is very high.
2. I use Elasticsearch to store the data and Logstash to parse it.
3. I have created 3 types of dashboards (application adoption, operation management, security audit) for each server, with 14 visualizations in total.
4. The dashboards contain 4, 5, and 5 visualizations respectively.
5. There are four sources (K8P, K2P, K2Q, K8Q).
6. K8P and K8Q run on Linux.
7. K2P and K2Q run on Windows.
8. I have created a separate index for each of K8P, K2P, K2Q, and K8Q.
9. Data from the .trc and applications_.log files is collected into one common index.
10. Data from the .log file is collected into a separate index.
11. The same index-creation pattern is followed for all sources.
12. The volume of data from the K8P source is very high because it has 3 servers, and each server contains multiple nodes, so a large amount of data is collected into its index.
13. The format of the data is the same in all files.
14. Sample data:
#2.0#2017 03 05 08:12:31:974#0-500#Warning#/System/Security/Authentication#[HTTP Worker [@979253619],5,Dedicated_Application_Thread]#Plain##
User: N/A
IP Address:
Authentication Stack: ticket
Authentication Stack Properties:

Login Module Flag Initialize Login Commit Abort Details

  1. SUFFICIENT ok false true
    #1 trusteddn1 = CN=GTA,O=Proctor and Gamble,C=NA
    #2 trusteddn2 = CN=K2E,OU=J2EE
    #3 trusteddn3 = CN=K2X,OU=J2EE
    #4 trusteddn4 = OU=J2EE,CN=K2D
    #5 trusteddn5 = CN=KAQ
    #6 trusteddn6 = OU=J2EE,CN=GTD
    #7 trusteddn7 = CN=KAQ
    #8 trusteddn8 = CN=KAD
    #9 trustediss1 = CN=GTA,O=Proctor and Gamble,C=NA
    #10 trustediss2 = CN=K2E,OU=J2EE
    #11 trustediss3 = CN=K2X,OU=J2EE
    #12 trustediss4 = OU=J2EE,CN=K2D
    #13 trustediss5 = CN=KAQ
    #14 trustediss6 = OU=J2EE,CN=GTD
    #15 trustediss7 = CN=KAQ
    #16 trustediss8 = CN=KAD
    #17 trustedsys1 = GTA,400
    #18 trustedsys2 = K2E,000
    #19 trustedsys3 = K2X,000
    #20 trustedsys4 = K2D,000
    #21 trustedsys5 = KAQ,000
    #22 trustedsys6 = GTD,000
    #23 trustedsys7 = KAQ,400
    #24 trustedsys8 = KAD,400
    #25 = true
  2. OPTIONAL ok exception true Authorization header not found or malformed.
    #1 Mode = standalone
    #2 OutputUserNameFormat = user
    #3 ServiceName = HTTP/
  3. SUFFICIENT ok false true
    #1 = true
  4. OPTIONAL ok false true
    #1 ErrorWhenNoCreds = false
    #2 Mode = standalone
    #3 OutputUserNameFormat = user
  5. SUFFICIENT ok false true
    #1 = true
  6. REQUISITE ok false false
  7. OPTIONAL ok false true
    #1 = true
    No logon policy was applied#

15. I am trying to load the dashboard for 1 month of data. It will not load.
16. The particularly large field is ErrorMessage. It contains very large values, as shown in the sample.
17. Total fields

It looks like your dashboard contains 2 visualisations showing unique (cardinality query) users over a month of data, which can be quite heavy. How much data do you have in the cluster? How many indices/shards? What is the specification of your Elasticsearch cluster? Does the dashboard work if you query a shorter time period?
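As an aside: if the cardinality aggregations behind those unique-user visualisations turn out to be the memory hog, their precision can be traded for memory with `precision_threshold`. A minimal sketch against one of the indices from this thread (the `user` field name is an assumption, and this needs a live cluster on localhost:9200):

```
curl -XGET 'localhost:9200/sap-security-logs-k8p/_search?size=0' -d '
{
  "aggs": {
    "unique_users": {
      "cardinality": { "field": "user", "precision_threshold": 1000 }
    }
  }
}'
```

Lower `precision_threshold` values use less memory per bucket at the cost of some counting accuracy; the Elasticsearch default is 3000.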

Yes, the dashboard works for 1 week of data.

This is my yml file.

Then it looks like your Elasticsearch cluster is not powerful enough to serve that dashboard over such a long time period with the current settings. You may want to scale up and/or out, or potentially try to improve performance by optimising your sharding strategy.
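As a concrete illustration: on Elasticsearch 2.x an index template can pin down the shard and replica count for newly created indices, so a single-node cluster is not left with 5 primaries plus unassignable replicas per index. A sketch (the template name and index pattern are assumptions, not taken from your setup):

```
curl -XPUT 'localhost:9200/_template/sap-logs' -d '
{
  "template": "sap-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
```

Note this only affects indices created after the template is installed; existing indices keep their shard count, which can only be changed by reindexing.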

Could you please let me know what configuration is required for the sharding strategy?

That is why I asked these questions.

I tried to increase the number of replicas in Kibana, but I lost all the indexes and had to create a new index.

I didn't specify anything explicitly; it's the default one.

Why did you lose all the indices? What is the current status of the cluster? Still seeing the timeouts?
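For what it's worth, the replica count is a dynamic setting: it can normally be changed on a live index without deleting or re-creating anything. A sketch, assuming the cluster listens on localhost:9200:

```
curl -XPUT 'localhost:9200/sap-trace-app-logs-k8p/_settings' -d '
{
  "index": { "number_of_replicas": 0 }
}'
```

On a single-node cluster replicas can never be assigned anyway, so setting them to 0 turns yellow indices green without losing any data.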

No, now I don't see the timeout, but I am not able to load 1 week or 1 month of data in the dashboard.
I can fetch 15 minutes of data efficiently.
It gives me a waiting popup, and after that it says Google Chrome ran out of memory.
Please let me know how I can load 1 month of data quickly.


"cluster_name": "elasticsearch",
"status": "red",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 61,
"active_shards": 61,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 121,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 33.51648351648351


health status index pri rep docs.count docs.deleted store.size
yellow open sec-k8p-no-index 5 1 52597 0 88.9mb 88.9mb
yellow open security-timeout-k8q 5 1 223688 0 174.7mb 174.7mb
yellow open sap-security-logs-k2q 5 1 61754 0 80.1mb 80.1mb
yellow open sap-trace-app-logs-k8q 5 1 71539 0 166.5mb 166.5mb
yellow open sap-security-logs-k2p 5 1 15522 0 21.6mb 21.6mb
red open sap-trace-app-logs-nprd-qa 5 2
yellow open sap-trace-app-logs-k8p 5 1 19087974 0 34.8gb 34.8gb
red open sap-trace-app-logs-prd 5 2
yellow open .kibana 1 1 81 3 117.1kb 117.1kb
red open sap-security-logs-nprd-qa 5 2
yellow open sap-security-logs-k8q 5 1 11471 0 20.5mb 20.5mb
yellow open sap-security-logs-k8p 5 1 12023344 0 12.1gb 12.1gb
red open sap-security-logs-prd 5 2
yellow open k8p-no-index 5 1 306750 0 501.8mb 501.8mb
yellow open timeout-k8p 5 1 181864 0 300.4mb 300.4mb
yellow open sap-trace-app-logs-k2q 5 1 37595 1 79.1mb 79.1mb
yellow open sap-trace-app-logs-k2p 5 1 279188 0 729.3mb 729.3mb

Please suggest a solution for loading this heavy volume of data.
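Given that almost all of the data sits in two indices (sap-trace-app-logs-k8p at 34.8gb and sap-security-logs-k8p at 12.1gb), one common sharding strategy for logs is to write time-based indices from Logstash, so that a one-month dashboard only touches a month's worth of shards and older data can be dropped cheaply by deleting whole indices. A sketch of the elasticsearch output block for Logstash 5.x (the host and index prefix are assumptions):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sap-trace-app-logs-k8p-%{+YYYY.MM.dd}"
  }
}
```

The dashboards' index pattern in Kibana would then need to match the prefix (e.g. sap-trace-app-logs-k8p-*) instead of a single index name.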

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.