I have a question about possible scheduled jobs in Elasticsearch. We monitor the application with cURL every 30 seconds and inspect the HTTPS response. Every night between 00:00 and 02:00 we receive an unavailable error response.
Is there a scheduled task that does something intensive enough to block incoming traffic?
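For context, the probe is essentially a curl health check along these lines (the host and endpoint here are placeholders rather than our exact setup):

```bash
#!/usr/bin/env bash
# Hypothetical 30-second health probe; host and endpoint are placeholders.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  "https://es-server01-prod.aws.company.domain:9200/_cluster/health")

if [ "$STATUS" != "200" ]; then
  echo "Elasticsearch unavailable: HTTP $STATUS"
fi
```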
There's nothing in Elasticsearch that would cause this, and if the logs are not showing excessive GC or node disconnections, then it's external to the cluster.
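A quick way to check that is to grep the logs for long GC pauses and for nodes leaving or disconnecting around the outage window; something along these lines (file names as in your setup, search terms are approximate):

```bash
# Long GC pauses in the JVM GC log (format depends on your JVM/GC settings).
grep -i "pause" gc.log | tail -n 50

# Nodes leaving, failing, or disconnecting around the outage window.
grep -iE "node-left|node-failed|disconnected|master" es-cluster.log
```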
Thanks. Let me download the log files and inspect them again. I'm using the following files:
gc.log
es-cluster.log
edit:
I have found the following lines:
```
[2020-06-16T02:00:01,373][INFO ][o.e.c.r.a.AllocationService] [es-server01-prod.aws.company.domain] updating number_of_replicas to [1] for indices [.monitoring-es-6-2020.06.16]
[2020-06-16T02:00:03,083][INFO ][o.e.c.r.a.AllocationService] [es-server01-prod.aws.company.domain] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-6-2020.06.16][0]] ...]).
[2020-06-16T02:00:07,539][INFO ][o.e.c.m.MetaDataMappingService] [es-server01-prod.aws.company.domain] [filebeat-2020.06.16/DeG8CocARomBTmxXp2LpXQ] update_mapping [doc]
[2020-06-16T02:00:08,451][INFO ][o.e.c.m.MetaDataCreateIndexService] [es-server01-prod.aws.company.domain] [.monitoring-kibana-6-2020.06.16] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [doc]
[2020-06-16T02:00:08,452][INFO ][o.e.c.r.a.AllocationService] [es-server01-prod.aws.company.domain] updating number_of_replicas to [1] for indices [.monitoring-kibana-6-2020.06.16]
[2020-06-16T02:00:10,216][INFO ][o.e.c.r.a.AllocationService] [es-server01-prod.aws.company.domain] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-6-2020.06.16][0]] ...]).
```
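To cross-check what gets created in that window, I'll also list that day's indices; something like this should work (host/port are placeholders):

```bash
# List the indices created for that day (hypothetical host/port).
curl -s "https://es-server01-prod.aws.company.domain:9200/_cat/indices/*2020.06.16?v"
```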