Error message for huge search in Kibana: "Data might be incomplete because your request timed out"

Hi everyone,
I have set up an ELK server; here is my configuration:

  • LXC container under Proxmox 6.0-7
  • OS: Debian 10 Buster
  • Elasticsearch / Logstash / Kibana release 7.4.0
  • RAM: 64 GB
  • JVM heap (Elasticsearch and Logstash jvm.options): 32 GB
  • Docs: about 82,000,000 per day
  • 5 indices with 1 shard / 0 replicas each

When I search logs over a time range longer than 24 hours, I get this error message: "Data might be incomplete because your request timed out".
I tried different settings, like increasing the Kibana timeout from 30000 to 120000 ms, but the timeout error appears before 30 seconds, so I don't think that's the problem.
I tried to increase the JVM resources; Elasticsearch uses more RAM, but I still get the same error when I search over more than 24 hours.

I've looked for people who hit the same issue, but each time it was on old versions.

Does anyone know why this timeout happens? Do I need to add more resources to my server?

Thanks for your help. Regards.

Hi @vdsm,

What type of timeouts have you tried to configure? We have a bunch of different elasticsearch.* timeout settings in Kibana.
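For reference, the main ones in kibana.yml look roughly like this (the values below should be the 7.x defaults, in milliseconds, if I remember correctly):

    # kibana.yml -- Elasticsearch timeout settings (defaults shown, in ms)
    elasticsearch.requestTimeout: 30000   # wait time for responses from Elasticsearch
    elasticsearch.shardTimeout: 30000     # wait time for shard responses (0 disables)
    elasticsearch.pingTimeout: 30000      # wait time for pings; defaults to requestTimeout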

What does your request look like? Do you use Kibana Query Language (KQL) or Lucene syntax? Have you tried issuing this request directly to Elasticsearch via Kibana Dev Tools or curl?
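For example, something like this in Dev Tools, with index-name and @timestamp as placeholders for your own index and time field:

    GET /index-name/_search
    {
      "size": 0,
      "query": {
        "range": {
          "@timestamp": { "gte": "now-2d", "lte": "now" }
        }
      }
    }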

Can you enable verbose logging in Kibana (logging.verbose: true) and share anything suspicious you see in the logs when this search request is executed?
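That is, add this line to kibana.yml and restart Kibana:

    # kibana.yml -- log everything, including debug output
    logging.verbose: true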

Best,
Oleg

Hi @azasypkin,
Thanks for your reply.
I tried to change the request timeout in the kibana.yml file on this line:
elasticsearch.requestTimeout: 120000

Usually I use KQL syntax, which is the default in my settings, but in this case I don't use a specific query; I just set a time range (the last 2 days or more). I'm not trying to find anything in particular for the moment, because I don't get all the docs.

If I query Elasticsearch by curl or by Dev Tools, it looks OK; I don't get any error message, and the response looks fine:

    GET /index-name/_count

    {
      "count" : 559250594,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      }
    }
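The curl equivalent gives the same result (Elasticsearch listens on localhost:9200 in my setup; adjust the host if yours differs):

    # same request via curl
    curl -s 'http://localhost:9200/index-name/_count?pretty'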

I absolutely don't get this count when I look for all docs of the index "index-name" through the Discover tab.

By the way, I just noticed I don't get the last 3 days if I search from now to 5 days ago.

I have enabled verbose logging; wow, it is very chatty!
I saw a few lines which can maybe help us:

{"type":"ops","@timestamp":"2019-10-15T14:30:10Z","tags":[],"pid":4264,"os":{"load":[3.09375,2.609375,2.7529296875],"mem":{"total":135053922304,"free":41916858368},"uptime":1293463},"proc":{"uptime":839.418,"mem":{"rss":452947968,"heapTotal":284925952,"heapUsed":227872224,"external":2880072},"delay":0.13965606689453125},"load":{"requests":{"5601":{"total":0,"disconnects":0,"statusCodes":{}}},"responseTimes":{"5601":{"avg":null,"max":0}},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 217.3MB uptime: 0:13:59 load: [3.09 2.61 2.75] delay: 0.140"}

{"type":"log","@timestamp":"2019-10-15T14:30:14Z","tags":["debug","monitoring","kibana-monitoring"],"pid":4264,"message":"Received Kibana Ops event data"}


{"type":"log","@timestamp":"2019-10-15T14:30:15Z","tags":["debug","stats-collection"],"pid":4264,"message":"not sending [kibana_settings] monitoring document because [undefined] is null or invalid."}


{"type":"log","@timestamp":"2019-10-15T14:30:15Z","tags":["debug","monitoring","kibana-monitoring"],"pid":4264,"message":"Resetting lastFetchWithUsage because uploading to the cluster was not successful."}

Just before the error:

{"type":"ops","@timestamp":"2019-10-15T14:30:30Z","tags":[],"pid":4264,"os":{"load":[3.3212890625,2.6982421875,2.77978515625],"mem":{"total":135053922304,"free":41656528896},"uptime":1293483},"proc":{"uptime":859.418,"mem":{"rss":453877760,"heapTotal":286498816,"heapUsed":228930600,"external":2898297},"delay":0.1255340576171875},"load":{"requests":{"5601":{"total":0,"disconnects":0,"statusCodes":{}}},"responseTimes":{"5601":{"avg":null,"max":0}},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 218.3MB uptime: 0:14:19 load: [3.32 2.70 2.78] delay: 0.126"}
{"type":"log","@timestamp":"2019-10-15T14:30:31Z","tags":["plugin","debug"],"pid":4264,"message":"Checking Elasticsearch version"}
{"type":"response","@timestamp":"2019-10-15T14:29:59Z","tags":[],"pid":4264,"method":"post","statusCode":200,"req":{"url":"/elasticsearch/_msearch?rest_total_hits_as_int=true&ignore_throttled=true","method":"post","headers":{"host":"IP-ELK:5601","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0","accept":"application/json, text/plain, */*","accept-language":"fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3","accept-encoding":"gzip, deflate","content-type":"application/x-ndjson","kbn-version":"7.4.0","content-length":"816","connection":"keep-alive","referer":"http://IP-ELK:5601/app/kibana"},"remoteAddress":"IP-browser","userAgent":"IP-browser","referer":"http://IP-ELK:5601/app/kibana"},"res":{"statusCode":200,"responseTime":32888,"contentLength":9},"message":"POST /elasticsearch/_msearch?rest_total_hits_as_int=true&ignore_throttled=true 200 32888ms - 9.0B"}

Do you see anything interesting?

Thanks for your help.

Hmm, nope. I expected to see log entries with error in "tags"; debug entries won't be helpful in this case. Do you see any of those?

I absolutely don't get this count when I look for all docs of the index "index-name" through the Discover tab.

Btw, can you try to decrease the value of the discover:sampleSize Advanced Setting, just to make sure you can get anything back for that time range?

Best,
Oleg

Hi @azasypkin
I don't have any error in tags during my timed-out request; I just see some "apollo-server-errors" sometimes.
I decreased the value of discover:sampleSize in the Advanced Settings to 100 or 50, but it still times out if my request spans more than 48 hours.

But I found something very strange... If I request the last 24 or 48 hours, it's OK.

If I go above that, for example 7 days, I get the error message and it displays days 1, 2 and 3 but nothing beyond.
It looks like it doesn't have enough time to display all the data, even though elasticsearch.requestTimeout is set to 120000.

About that: the default elasticsearch.requestTimeout is 30000 ms, and that's the exact time after which I get the error message... Is there another place where I can check this parameter?

I'm losing my mind over this! :sob:

Hi,
For information, I found what could be the problem...
I just increased the RAM available to the Elasticsearch JVM (/etc/elasticsearch/jvm.options). By default, it's set to 4 GB, which is absolutely not enough for querying several billion docs. I increased it to 48 GB. Now I can query over a few days; as long as it's no more than 7 days, I can complete my request without errors. If it can help someone...
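Concretely, the change in /etc/elasticsearch/jvm.options looks like this (Xms and Xmx should be set to the same value, per the Elasticsearch heap sizing recommendations):

    # /etc/elasticsearch/jvm.options -- heap size, min and max kept equal
    -Xms48g
    -Xmx48g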

Thanks for the answers.

Regards.
