RdrgPorto
(Rodrigo Porto)
February 5, 2019, 7:44am
1
Hi, everyone
I would like to know whether it is possible to set a limit in Elasticsearch or Kibana to prevent the service from going down.
I have seen that Kibana has the parameter elasticsearch.requestTimeout: 30000.
For instance, if someone runs a query over 60 days of data, will Elasticsearch cancel the query after thirty seconds?
Thanks in advance,
Rodrigo
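(For context: the setting mentioned above goes in kibana.yml, and Elasticsearch separately accepts a best-effort per-search timeout. A minimal sketch, assuming the index pattern `my-index-*` as a placeholder; note that elasticsearch.requestTimeout only makes Kibana stop waiting, and the search-body `timeout` is best-effort, returning partial results with `"timed_out": true` rather than hard-killing the query.)

```
# kibana.yml — how long Kibana waits for an Elasticsearch response (ms)
elasticsearch.requestTimeout: 30000

# Elasticsearch side (Dev Tools syntax): best-effort per-search timeout
GET my-index-*/_search
{
  "timeout": "30s",
  "query": { "match_all": {} }
}
```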
warkolm
(Mark Walkom)
February 5, 2019, 8:29am
2
There are a number of built-in protections already.
Are you having problems you're trying to mitigate?
RdrgPorto
(Rodrigo Porto)
February 5, 2019, 9:27am
3
Hi @warkolm ,
I have two weekly indices (1 shard, 1 replica, ~35 GB and 45M documents each). I also have three nodes, each with 5 GB of JVM heap and 10 cores. When I search in Discover to get 7 days of data, I get a Kibana timeout (I guess Elasticsearch uses circuit breakers to avoid going down; however, sometimes that is not enough).
Is there any way to improve this behavior? Increase the JVM heap? CPU cores?
Thanks in advance,
Rodrigo
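(A hedged aside: one quick way to see whether heap pressure or tripped circuit breakers are behind the timeouts is to check the standard node-stats APIs while the slow query runs.)

```
# Heap usage per node at a glance
GET _cat/nodes?v&h=name,heap.percent,heap.max,ram.percent

# Circuit-breaker state; "tripped" counts > 0 mean requests were rejected
GET _nodes/stats/breaker
```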
warkolm
(Mark Walkom)
February 5, 2019, 9:34am
4
What do your Elasticsearch logs show at that time?
What type of storage do you have? SSDs?
RdrgPorto
(Rodrigo Porto)
February 5, 2019, 9:57am
6
Hi, @warkolm and @Christian_Dahlqvist
Elasticsearch's logs don't show anything revealing; I don't see anything about GC or Java heap space.
Regarding the disk, it is SATA, and I use two data paths.
I attached a screenshot of my Elasticsearch cluster:
Thanks in advance,
Regards,
Rodrigo
Look at disk I/O and iowait, e.g. using iostat.
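(For illustration, using standard sysstat options: extended device stats sampled while the slow Discover query is executing. Sustained high %util and await values would point at the SATA disks as the bottleneck.)

```
# Extended stats, refreshed every 5 seconds, 3 samples
iostat -x 5 3
```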
RdrgPorto
(Rodrigo Porto)
February 5, 2019, 10:19am
8
Hi, @Christian_Dahlqvist
Linux 4.4.0-141-generic (elastic-01-0)  02/05/2019  _x86_64_  (10 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.84   0.01     1.30     1.48    0.00  94.37

Device:  tps     kB_read/s  kB_wrtn/s  kB_read     kB_wrtn
loop0    0.00    0.00       0.00       5           0
fd0      0.00    0.00       0.00       16          0
sda      56.94   554.19     1174.96    1201532947  2547411670
dm-0     126.40  554.14     1174.92    1201424574  2547324460
dm-1     0.00    0.00       0.01       3434        29432

Linux 4.4.0-141-generic (elastic-02-0)  02/05/2019  _x86_64_  (10 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.82   0.01     1.30     1.30    0.00  94.58

Device:  tps     kB_read/s  kB_wrtn/s  kB_read     kB_wrtn
loop0    0.00    0.00       0.00       8           0
fd0      0.00    0.00       0.00       16          0
sda      41.80   168.97     1112.70    366281546   2411984297
dm-0     102.25  168.92     1112.66    366175043   2411901384

Linux 4.4.0-141-generic (elastic-03-0)  02/05/2019  _x86_64_  (10 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           2.46   0.01     1.06     1.46    0.00  95.02

Device:  tps     kB_read/s  kB_wrtn/s  kB_read     kB_wrtn
loop0    0.00    0.00       0.00       5           0
fd0      0.00    0.00       0.00       4           0
sda      43.79   59.90      1166.23    113017602   2200556587
dm-0     109.14  59.87      1166.23    112971385   2200555716
dm-1     0.00    0.00       0.00       6077        852
Thanks in advance,
Regards
Is that taken while running a long-running query?
gavenkoa
(Oleksandr Gavenko)
February 12, 2019, 2:59pm
11
@RdrgPorto We experienced the same problem with the 30-second limit in Kibana at about 60-80 GB of indices. And now we are at 150 GB ))
My details:
Straight to the point: I suspect that the increased index count has made it impossible to complete some of Kibana's built-in queries.
ls
elasticsearch-5.6.14.deb kibana-5.6.14-amd64.deb
I suspect that it is impossible to query 144 GB of indices on HDD with a 4 GB ES heap + 3 GB of mmapped RAM:
GET /_nodes/stats/jvm

{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  ...
  "nodes": {
    "Tko1n5etQrmMvEm5viBBew": {
      "timestamp": 1549977948232,
      "name": "prod-es-1…
I can tell you one trick that sometimes helps: if your search query contains a very frequent stem, it won't work on large data.
For example, instead of "BLABLA has the problem",
to avoid the 30000 ms issue I search for: "BLABLA * problem"
system
(system)
Closed
March 12, 2019, 2:59pm
12
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.