Does Elasticsearch go to sleep when there are no requests for a couple of hours?

My development server is a Windows machine, and I leave Elasticsearch running at the end of the day... I have noticed that, every time I start working the next day, the first search request to Elasticsearch does not return any results... but if I send the same request a second time, it returns results.

In fact, I have added logic to my code to test the Elasticsearch result: if it's empty, my code waits 500ms and re-sends the same request. This only happens when Elasticsearch has been inactive for a couple of hours.
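The workaround described above can be sketched roughly as follows (a minimal Python sketch; `client` and its `search` method are stand-ins for whatever Elasticsearch client the site actually uses, not the real API):

```python
import time

def search_with_retry(client, query, delay_s=0.5, retries=1):
    """Re-issue a search if the first response comes back empty.

    `client` is assumed to expose a `search(query)` method returning a
    list of hits -- a placeholder for the real Elasticsearch client.
    """
    hits = client.search(query)
    for _ in range(retries):
        if hits:
            break
        time.sleep(delay_s)  # give the node a moment before retrying
        hits = client.search(query)
    return hits
```

This papers over the symptom but doesn't explain why the first query after a long idle period comes back empty.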

Is this normal? Is there any server configuration that could prevent this from happening?

It does not, no. That seems pretty abnormal.

What version are you on?

@warkolm: thanks. The Dev machine where this happens is 6.2.3 (Windows)

I have not noticed this problem on Test/Prod, which are 6.8 (Linux), but it's less likely that those servers are inactive for a couple of hours.

It's not Windows sleeping, is it? Because there's no concept of this in Elasticsearch.
What do the logs show?

Windows is definitely up and running. This morning, I ran my website in debug mode between 10 and 10:30 AM. The first request hit Elasticsearch and returned no results... then I sent the same request again and got results. This is what I see in the Elasticsearch console (I have added 2 divider lines around the time my first request was sent; I can't remember the exact time):

[2020-05-07T23:23:52,253][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][563764] overhead, spent [449ms] collecting in the last [1s]
[2020-05-07T23:45:56,264][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][565084] overhead, spent [318ms] collecting in the last [1.1s]
[2020-05-07T23:50:35,945][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][565363] overhead, spent [270ms] collecting in the last [1s]
[2020-05-07T23:56:35,893][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][565722] overhead, spent [307ms] collecting in the last [1s]
[2020-05-07T23:58:05,350][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][565811] overhead, spent [387ms] collecting in the last [1.2s]
[2020-05-08T00:20:36,093][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][567155] overhead, spent [475ms] collecting in the last [1.1s]
[2020-05-08T00:31:06,018][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][567783] overhead, spent [257ms] collecting in the last [1s]
[2020-05-08T00:49:56,952][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][568910] overhead, spent [309ms] collecting in the last [1s]
[2020-05-08T00:53:01,005][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2020-05-08T00:53:01,174][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Deleting expired data
[2020-05-08T00:53:01,684][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Completed deletion of expired data
[2020-05-08T00:53:01,748][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
[2020-05-08T00:57:40,094][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][569370] overhead, spent [273ms] collecting in the last [1s]
[2020-05-08T01:00:00,162][INFO ][o.e.x.m.e.l.LocalExporter] cleaning up [1] old indices
[2020-05-08T01:00:00,216][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-1] [.monitoring-es-6-2020.04.30/cuhzV4ziRDaYyVXLVoZKUw] deleting index
[2020-05-08T01:03:27,517][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][569716] overhead, spent [268ms] collecting in the last [1s]
[2020-05-08T01:40:19,351][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][571918] overhead, spent [287ms] collecting in the last [1s]
[2020-05-08T01:43:16,996][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][572095] overhead, spent [379ms] collecting in the last [1.3s]
[2020-05-08T01:51:37,708][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][572593] overhead, spent [322ms] collecting in the last [1.2s]
[2020-05-08T02:00:18,416][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][573111] overhead, spent [370ms] collecting in the last [1s]
-------------------------
[2020-05-08T10:04:06,568][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][574669][35677] duration [3.5s], collections [1]/[3.6s], total [3.5s]/[9.5m], memory [709.4mb]->[442.8mb]/[990.7mb], all_pools {[young] [266.2mb]->[77.5kb]/[266.2mb]}{[survivor] [2.4mb]->[1.8mb]/[33.2mb]}{[old] [440.8mb]->[440.9mb]/[691.2mb]}
[2020-05-08T10:04:06,964][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][574669] overhead, spent [3.5s] collecting in the last [3.6s]
[2020-05-08T10:04:13,348][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][574675] overhead, spent [436ms] collecting in the last [1.3s]
[2020-05-08T10:12:49,353][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575188] overhead, spent [563ms] collecting in the last [1.3s]
[2020-05-08T10:18:18,838][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575515] overhead, spent [298ms] collecting in the last [1s]
[2020-05-08T10:22:38,710][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575774] overhead, spent [370ms] collecting in the last [1s]
[2020-05-08T10:24:08,669][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575863] overhead, spent [549ms] collecting in the last [1s]
[2020-05-08T10:25:38,812][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575953] overhead, spent [253ms] collecting in the last [1s]
[2020-05-08T10:25:59,853][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][575974] overhead, spent [253ms] collecting in the last [1s]
[2020-05-08T10:29:58,500][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576211] overhead, spent [264ms] collecting in the last [1s]
------------------------
[2020-05-08T10:31:28,764][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][576300][35813] duration [864ms], collections [1]/[1.7s], total [864ms]/[9.6m], memory [662.3mb]->[451.3mb]/[990.7mb], all_pools {[young] [214mb]->[2.8mb]/[266.2mb]}{[survivor] [1.6mb]->[1.9mb]/[33.2mb]}{[old] [446.7mb]->[447mb]/[691.2mb]}
[2020-05-08T10:31:28,765][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576300] overhead, spent [864ms] collecting in the last [1.7s]
[2020-05-08T10:32:59,390][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][576389][35820] duration [1s], collections [1]/[2s], total [1s]/[9.6m], memory [623mb]->[463.5mb]/[990.7mb], all_pools {[young] [174.3mb]->[14.6mb]/[266.2mb]}{[survivor] [1.6mb]->[1.9mb]/[33.2mb]}{[old] [447mb]->[447.2mb]/[691.2mb]}
[2020-05-08T10:32:59,404][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576389] overhead, spent [1s] collecting in the last [2s]
[2020-05-08T10:34:31,649][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576480] overhead, spent [349ms] collecting in the last [1s]
[2020-05-08T10:35:58,854][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576566] overhead, spent [320ms] collecting in the last [1s]
[2020-05-08T10:41:56,287][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][576922] overhead, spent [311ms] collecting in the last [1s]
[2020-05-08T10:43:20,710][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][577006] overhead, spent [346ms] collecting in the last [1s]
[2020-05-08T10:47:21,959][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][577246] overhead, spent [365ms] collecting in the last [1.1s]
[2020-05-08T10:51:50,184][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][577513] overhead, spent [546ms] collecting in the last [1.2s]
[2020-05-08T10:53:19,519][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][577602] overhead, spent [396ms] collecting in the last [1s]
[2020-05-08T11:01:29,271][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][578090] overhead, spent [449ms] collecting in the last [1s]
[2020-05-08T11:14:49,509][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][578886] overhead, spent [316ms] collecting in the last [1s]
[2020-05-08T11:23:11,196][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][579386] overhead, spent [426ms] collecting in the last [1s]
[2020-05-08T11:43:19,963][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][580591] overhead, spent [307ms] collecting in the last [1.1s]
[2020-05-08T11:47:39,859][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][580850] overhead, spent [381ms] collecting in the last [1.3s]
[2020-05-08T11:49:00,103][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][580930] overhead, spent [398ms] collecting in the last [1s]

Is this the log that you are talking about?

Yep.

That's a little odd given most of your GCs are 1 second. Is there nothing between the 2020-05-08T02:00:18,416 and 2020-05-08T10:04:06,568 timestamps?

No, I copy-pasted everything... I just added the two divider lines to indicate the approximate time.

Given you've got a pretty big break in timing there, with no GC activity at all, it looks like something is happening at the OS level.

Can you try running a loop with curl to / on the host every minute and leave it overnight to see what happens?

Sure, could you give me a bit more detail on how to do it, please?
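The suggested once-a-minute probe could look something like this (a sketch in Python rather than a raw curl loop; it assumes the node listens on localhost:9200, so adjust `ES_URL` for your setup):

```python
import time
import urllib.request
from datetime import datetime

ES_URL = "http://localhost:9200/"  # assumed address; adjust for your node

def probe(url, timeout=10):
    """Return (HTTP status or error name, elapsed seconds) for one GET."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:  # connection refused, timeout, etc.
        status = type(exc).__name__
    return status, time.monotonic() - start

def probe_loop(url=ES_URL, interval_s=60, logfile="es_probe.log"):
    """Probe once a minute, logging timestamp, status, and latency.

    Leave this running overnight, then check the log for gaps or for
    slow/failed responses around the time of the first morning query.
    """
    with open(logfile, "a") as log:
        while True:
            status, elapsed = probe(url)
            log.write(f"{datetime.now().isoformat()} {status} {elapsed:.3f}s\n")
            log.flush()
            time.sleep(interval_s)
```

A gap in the log timestamps would point at the machine (or the JVM process) being suspended, while a slow or failed first response after the idle period would point at Elasticsearch itself.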

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.