Not all logs are being sent to ES, requests are timing out

Hi All,

We have been seeing gaps in the Metricbeat data being sent to Elasticsearch and viewed in Kibana.
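
To confirm the gaps, we have been checking the metricbeat-* indices for minutes with no documents. A rough sketch of that check is below; the host, credentials and index pattern are placeholders for our setup, and min_doc_count is set explicitly so empty buckets are returned:

```python
import requests

# Placeholder connection details -- adjust host, credentials and index
# pattern to match the actual cluster. "metricbeat-*" is the default
# Metricbeat index pattern.
ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-6h"}}},
    "aggs": {
        "per_minute": {
            "date_histogram": {
                "field": "@timestamp",
                "fixed_interval": "1m",
                "min_doc_count": 0,  # include empty buckets so gaps show up
            }
        }
    },
}

resp = requests.post(f"{ES_URL}/metricbeat-*/_search", json=query, auth=AUTH)
resp.raise_for_status()

# Any minute bucket with doc_count == 0 is a gap in the incoming Metricbeat data.
for bucket in resp.json()["aggregations"]["per_minute"]["buckets"]:
    if bucket["doc_count"] == 0:
        print("gap at", bucket["key_as_string"])
```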

When checking the Elasticsearch logs on node-1, the following warnings appear:

[2021-10-28T11:42:53,422][INFO ][o.e.x.s.a.AuthorizationService] [node-1] Took [55ms] to resolve [234] indices for action [indices:data/read/search[phase/query]] and user [elastic]
[2021-10-28T12:20:43,341][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve stats for node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][cluster:monitor/nodes/stats[n]] request_id [100345830] timed out after [14812ms]
[2021-10-28T12:20:43,342][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve shard stats from node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][indices:monitor/stats[n]] request_id [100345831] timed out after [14812ms]
[2021-10-28T12:20:47,342][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [18.8s/18820ms] ago, timed out [4s/4008ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100345830]
[2021-10-28T12:20:47,397][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [19s/19020ms] ago, timed out [4.2s/4208ms] ago, action [indices:monitor/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100345831]
[2021-10-28T12:23:00,158][INFO ][o.e.x.m.j.p.DataCountsReporter] [node-1] [source_ip_url_count_ecs] 800000 records written to autodetect; missingFieldCount=847412, invalidDateCount=0, outOfOrderCount=0
[2021-10-28T12:33:24,901][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve stats for node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][cluster:monitor/nodes/stats[n]] request_id [100458438] timed out after [15011ms]
[2021-10-28T12:33:24,902][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve shard stats from node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][indices:monitor/stats[n]] request_id [100458439] timed out after [15011ms]
[2021-10-28T12:33:46,625][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [36.8s/36828ms] ago, timed out [21.8s/21817ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100458438]
[2021-10-28T12:33:46,707][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [36.8s/36828ms] ago, timed out [21.8s/21817ms] ago, action [indices:monitor/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100458439]
[2021-10-28T12:34:09,904][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve stats for node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][cluster:monitor/nodes/stats[n]] request_id [100460922] timed out after [15031ms]
[2021-10-28T12:34:09,905][WARN ][o.e.c.InternalClusterInfoService] [node-1] failed to retrieve shard stats from node [1ETPDccKSG2smLl8DOV-eg]: [node-1][127.0.0.1:9300][indices:monitor/stats[n]] request_id [100460923] timed out after [15031ms]
[2021-10-28T12:34:10,631][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [15.8s/15832ms] ago, timed out [801ms/801ms] ago, action [cluster:monitor/nodes/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100460922]
[2021-10-28T12:34:10,686][WARN ][o.e.t.TransportService   ] [node-1] Received response for a request that has timed out, sent [15.8s/15832ms] ago, timed out [801ms/801ms] ago, action [indices:monitor/stats[n]], node [{node-1}{1ETPDccKSG2smLl8DOV-eg}{O3fwO2YWRvuirdjX_olr4w}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.machine_memory=16826281984, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=9663676416}], id [100460923]
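
The warnings point at the node stats and indices stats actions (cluster:monitor/nodes/stats and indices:monitor/stats) timing out, so we have also been timing the equivalent REST calls directly. A minimal sketch of that, again with placeholder connection details:

```python
import time
import requests

# Placeholder connection details -- adjust to the actual cluster.
ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

# These REST endpoints correspond to the actions shown timing out in the logs:
# /_nodes/stats -> cluster:monitor/nodes/stats, /_stats -> indices:monitor/stats.
for path in ("/_nodes/stats", "/_stats", "/_cluster/health"):
    start = time.monotonic()
    resp = requests.get(ES_URL + path, auth=AUTH, timeout=60)
    elapsed = time.monotonic() - start
    print(f"{path}: HTTP {resp.status_code} in {elapsed:.1f}s")
```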

Any help on this would be appreciated.
Thanks.
