Set limits on Elasticsearch & Kibana


(Rodrigo Porto) #1

Hi, everyone :slight_smile:

I would like to know whether it is possible to set a limit in Elasticsearch or Kibana to avoid taking the service down.

I have seen that Kibana has the parameter elasticsearch.requestTimeout: 30000.
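For reference, that setting lives in kibana.yml and is expressed in milliseconds, e.g.:

# kibana.yml
# How long Kibana waits for a response from Elasticsearch
# before giving up on the request; 30000 ms is the default.
elasticsearch.requestTimeout: 30000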

For instance, if someone runs a query over 60 days of data, does Elasticsearch reject the query after thirty seconds?

Thanks in advance :vulcan_salute: ,

Rodrigo


(Mark Walkom) #2

There are a number of built-in protections already.
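For example, the circuit breakers that guard heap usage can be inspected per node via the nodes stats API (assuming Elasticsearch is listening on localhost:9200):

# Show each node's circuit breakers: configured limit,
# estimated current usage, and how often each has tripped.
curl -s 'localhost:9200/_nodes/stats/breaker?pretty'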

Are you having problems you're trying to mitigate?


(Rodrigo Porto) #3

Hi @warkolm,

I have two weekly indices (1 shard, 1 replica, and 35 GB with 45M documents each). Moreover, I have three nodes, each with 5 GB of JVM heap and 10 cores. When I run a search in Discover to get 7 days of data, I get a Kibana timeout (I guess Elasticsearch uses circuit breakers to avoid going down; however, sometimes that is not enough).

Is there any way to improve this behavior? Increasing the JVM heap? More CPU cores?
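I have also seen the dynamic search.default_search_timeout cluster setting; would a sketch like this help (localhost:9200 is just my local node)?

# Cap every search at 30 s cluster-wide; a timed-out search
# returns partial results instead of running on indefinitely.
curl -s -XPUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"search.default_search_timeout": "30s"}}'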

Thanks in advance,

Rodrigo


(Mark Walkom) #4

What do your Elasticsearch logs show at that time?


(Christian Dahlqvist) #5

What type of storage do you have? SSDs?


(Rodrigo Porto) #6

Hi, @warkolm and @Christian_Dahlqvist

Elasticsearch's logs don't show anything revealing. I don't see anything about GC or Java heap space.

Regarding disk, it is SATA, and I use two data paths.
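For reference, the relevant part of my elasticsearch.yml looks roughly like this (the mount points below are placeholders):

# elasticsearch.yml -- two data paths on the same SATA disk
# (the paths shown are hypothetical placeholders)
path.data:
  - /mnt/data1
  - /mnt/data2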

I attached a screenshot of my Elasticsearch cluster:

Thanks in advance :slight_smile:,

Regards,

Rodrigo


(Christian Dahlqvist) #7

Look at disk I/O and iowait, e.g. using iostat.
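For example, extended statistics refreshed every few seconds while the slow query runs:

# -x: extended statistics (per-device await and %util)
# 5:  print a new sample every 5 seconds
iostat -x 5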


(Rodrigo Porto) #8

Hi, @Christian_Dahlqvist

Linux 4.4.0-141-generic (elastic-01-0)     02/05/2019      _x86_64_        (10 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.84    0.01    1.30    1.48    0.00   94.37

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          5          0
fd0               0.00         0.00         0.00         16          0
sda              56.94       554.19      1174.96 1201532947 2547411670
dm-0            126.40       554.14      1174.92 1201424574 2547324460
dm-1              0.00         0.00         0.01       3434      29432


Linux 4.4.0-141-generic (elastic-02-0)     02/05/2019      _x86_64_        (10 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.82    0.01    1.30    1.30    0.00   94.58

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          8          0
fd0               0.00         0.00         0.00         16          0
sda              41.80       168.97      1112.70  366281546 2411984297
dm-0            102.25       168.92      1112.66  366175043 2411901384

Linux 4.4.0-141-generic (elastic-03-0)     02/05/2019      _x86_64_        (10 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.46    0.01    1.06    1.46    0.00   95.02

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          5          0
fd0               0.00         0.00         0.00          4          0
sda              43.79        59.90      1166.23  113017602 2200556587
dm-0            109.14        59.87      1166.23  112971385 2200555716
dm-1              0.00         0.00         0.00       6077        852

Thanks in advance,

Regards


(Christian Dahlqvist) #9

Was that taken while a long-running query was executing?


(Rodrigo Porto) #10

Hi, @Christian_Dahlqvist

Yes, that's it.

Regards


(Oleksandr Gavenko) #11

@RdrgPorto We experienced the same problem with the 30-second limit in Kibana at around 60-80 GB of indices. And now we are at 150 GB ))

My details:

I can tell you one trick that sometimes helps. If your search query contains a very frequent stem, it won't complete on large data sets.

For example, instead of searching "BLABLA has the problem", I search "BLABLA * problem" to avoid the 30000 ms issue.
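Roughly, that is the same as sending this query_string search to Elasticsearch directly (localhost:9200 and the index pattern my-index-* are placeholders for your own setup):

# The same Lucene query string that Discover sends;
# host and index pattern below are made-up placeholders.
curl -s 'localhost:9200/my-index-*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"query_string": {"query": "\"BLABLA * problem\""}}}'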