Consistently losing monitoring data for clusters

We have multiple clusters which log their monitoring data to their own monitoring clusters. All clusters consistently stop logging monitoring data on multiple, if not all, nodes after an extended period of time (days to weeks). The error logs from the nodes that stop logging contain the errors below. Restarting the affected nodes resolves the issue. I am still able to query /?filter_path=version.number on the monitoring cluster from the nodes experiencing the problem, and it returns the version info as expected.

[es5-node] # curl -v http://monitoring.cluster:9200/?filter_path=version.number

*   Trying ...
* Connected to monitoring.cluster () port 9200 (#0)
> GET /?filter_path=version.number HTTP/1.1
> Host: monitoring.cluster:9200
> User-Agent: curl/7.47.0
> Accept: */*
< HTTP/1.1 200 OK
< Date: Mon, 10 Jul 2017 16:27:56 GMT
< Content-Type: application/json; charset=UTF-8
< Content-Length: 47
< Connection: keep-alive
<
{
  "version" : {
    "number" : "5.2.0"
  }
}

[2017-07-10T17:09:22,594][INFO ][o.e.x.m.e.Exporters      ] [master-ip] skipping exporter [es5-monitoring] as it is not ready yet
[2017-07-10T17:09:37,689][WARN ][o.e.x.m.e.h.NodeFailureListener] connection failed to node at [http://monitoring.cluster:9200]
[2017-07-10T17:09:37,689][ERROR][o.e.x.m.e.h.VersionHttpResource] failed to verify minimum version [5.0.0-beta1] on the [xpack.monitoring.exporters.es5-monitoring] monitoring cluster No route to host
    at Method) ~[?:?]
    at ~[?:?]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent( ~[?:?]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents( ~[?:?]
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute( ~[?:?]
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute( ~[?:?]
    at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$ ~[?:?]
    at [?:1.8.0_111]

Hi @ryan.dyer

Is it possible that the DNS hostname's underlying IP address is changing after Elasticsearch starts? If so, the JVM isn't going to notice the change because of DNS caching. You can control this for Java itself via the networkaddress.cache.ttl setting in the $JAVA_HOME/lib/security/ file.
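As a rough sketch of the same idea, the TTL can also be set programmatically before any lookups happen (the value "60" here is just an example, not a recommendation):

```java
import java.security.Security;

public class DnsCacheTtl {
    public static void main(String[] args) {
        // Cache successful DNS lookups for at most 60 seconds (example value)
        // so that a changed IP behind the hostname is eventually picked up.
        Security.setProperty("networkaddress.cache.ttl", "60");

        // Confirm the security property took effect.
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

Note this only helps if it runs before the JVM performs (and caches) the first lookup of the hostname; editing the security file and restarting has the same effect.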

This documentation is for Elastic Cloud, but it holds true for any instance of Elasticsearch.

Hope that helps,

