Intermittent proxy errors in Kibana 7.9.3 stack monitoring through Apache httpd

We recently upgraded one of our clusters to 7.9.3 (from 6.8.12), and I'm now seeing frequent, but intermittent, proxy errors while in stack monitoring. For instance, here is a screenshot of a minute or so in the monitoring overview:

Sample httpd log from a request:

[Mon Nov 02 18:23:03.907127 2020] [proxy_http:error] [pid 112] (104)Connection reset by peer: [client ...:55770] AH01102: error reading status line from remote server kibanamon:5601, referer: https://.../kibanamon/app/monitoring
[Mon Nov 02 18:23:03.907160 2020] [proxy:error] [pid 112] [client ...:55770] AH00898: Error reading from remote server returned by /kibanamon/api/monitoring/v1/alert/5Hic1QrdQTmZwzbEPNUFLw/status, referer: https://.../kibanamon/app/monitoring
[02/Nov/2020:18:23:03 +0000] "POST /kibanamon/api/monitoring/v1/alert/5Hic1QrdQTmZwzbEPNUFLw/status HTTP/1.1" 502 506 "https://.../kibanamon/app/monitoring" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:82.0) Gecko/20100101 Firefox/82.0"

There are no related messages in the Kibana log, even with logging.verbose: true.

Kibana and httpd (2.4.6) are running in Docker containers on the same host, under docker-compose. We didn't see any issues like this with Kibana 6.8.12, even on the same host through the same httpd proxy.

httpd is running a basic reverse proxy config:

ProxyPass /kibanamon http://kibanamon:5601
ProxyPassReverse /kibanamon http://kibanamon:5601
<Location /kibanamon>
    AuthType Shibboleth
    [... auth settings]
</Location>

Seems like a bug somewhere in the monitoring server, but I'm reaching out here first in case anyone has some ideas; maybe we need to make some changes to the proxy settings?
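
For what it's worth, the only proxy-side idea I've turned up so far (untested, just a sketch based on the mod_proxy / mod_proxy_http docs) is that the reset could be a keep-alive race: httpd reusing a pooled backend connection that Kibana has already closed. If that's the cause, something like this might help:

# Documented mod_proxy_http workaround: don't reuse a pooled backend
# connection for the first request on a new client connection
SetEnv proxy-initial-not-pooled 1

# Or disable backend connection reuse for this proxy entirely
ProxyPass /kibanamon http://kibanamon:5601 disablereuse=On retry=0
ProxyPassReverse /kibanamon http://kibanamon:5601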

I searched the interweb for "AH01102: error reading status line from remote server", and it seems to be an httpd message that can happen occasionally, especially if the back-end response takes a long time.

You can search for yourself, but the basic guidance that might apply here is to try increasing the Timeout or ProxyTimeout directives.
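
For example, something along these lines in the httpd config (the values are just placeholders; tune them to your environment):

# Global connection/IO timeout in seconds
Timeout 300

# Timeout for proxied requests specifically; if unset, mod_proxy
# falls back to the global Timeout
ProxyTimeout 300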

Thanks for the suggestions, Tim. We don't have ProxyTimeout set, so as far as I know it will wait for the server to respond, and the global Timeout is 900 seconds, so we're not hitting that. In any case, if the proxy server were closing the connection, shouldn't Kibana log the failed request?
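
In the meantime, I'll try turning up httpd's proxy logging to see exactly what the proxy observes when this happens (per-module log levels are an httpd 2.4 feature, so they should work on our 2.4.6):

# Verbose tracing for the proxy modules only; keep everything else at warn
LogLevel warn proxy:trace5 proxy_http:trace5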
