Error: org.apache.http.ProtocolException: Not a valid protocol version: 0

  • We recently upgraded our development environment from Elasticsearch cluster version 8.6.2 to 8.15.1

  • We are using the High Level REST Client with apiCompatibilityMode=true (our client setup is sketched after this list)

  • After the upgrade, we started seeing sporadic 500 errors from Elasticsearch queries. We did not see these before the upgrade, even though apiCompatibilityMode was already set to true then.

  • When this happens, we see the following exception in the logs:

    • org.apache.http.ProtocolException: Not a valid protocol version: 0
        at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:209)
        at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:245)
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:121)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
        at java.base/java.lang.Thread.run(Thread.java:840)
      Caused by: org.apache.http.ParseException: Not a valid protocol version: 0
        at org.apache.http.message.BasicLineParser.parseProtocolVersion(BasicLineParser.java:134)
        at org.apache.http.message.BasicLineParser.parseStatusLine(BasicLineParser.java:366)
        at org.apache.http.impl.nio.codecs.DefaultHttpResponseParser.createMessage(DefaultHttpResponseParser.java:112)
        at org.apache.http.impl.nio.codecs.DefaultHttpResponseParser.createMessage(DefaultHttpResponseParser.java:50)
        at org.apache.http.impl.nio.codecs.AbstractMessageParser.parseHeadLine(AbstractMessageParser.java:156)
        at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:207)
  • Digging through the TRACE-level logs of the Elasticsearch client and the Apache HTTP client has yielded no results, and it is still unclear why this issue happens sporadically.

  • We have been unable to reproduce the issue through curl or Kibana.

  • We have tried a few configuration changes in the HTTP client, but have not been able to stabilize our dev environment so far.
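For reference, this is roughly how we build the client. It is a minimal sketch assuming the 7.17.x High Level REST Client artifact; the host name is a placeholder and authentication/SSL details are omitted:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.RestHighLevelClientBuilder;

public class EsClientFactory {
    public static RestHighLevelClient create() {
        RestClient lowLevelClient = RestClient
                .builder(new HttpHost("es.dev.example.com", 443, "https"))
                .build();
        // apiCompatibilityMode=true makes the 7.17 HLRC send compatibility
        // headers so it can talk to an 8.x cluster.
        return new RestHighLevelClientBuilder(lowLevelClient)
                .setApiCompatibilityMode(true)
                .build();
    }
}
```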

Please advise if anyone has run into this problem, and share any suggestions that might help resolve it.

org.apache.http.ProtocolException: Not a valid protocol version means that the server is not sending a proper HTTP response.

A common cause is that the client is configured to use HTTP while the server expects HTTPS, or the other way around.

Can you check your application's configuration and make sure it uses the correct protocol?
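For example, with the Java REST client the scheme is the third argument to HttpHost, and Apache's HttpHost defaults to plain http when the scheme is omitted. A minimal sketch, with the host name as a placeholder:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class SchemeCheck {
    public static RestClient httpsClient() {
        // Explicit "https" scheme; new HttpHost("host", 9200) without a scheme
        // defaults to "http", which causes exactly this kind of mismatch
        // against an HTTPS-only endpoint.
        return RestClient
                .builder(new HttpHost("my-cluster.example.com", 9200, "https"))
                .build();
    }
}
```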

Thanks for your quick response. Yes, our application makes SSL connections and our server endpoint is HTTPS.

One additional point we found interesting: when we set the ConnectionReuseStrategy on the RestClientBuilder (via setHttpClientConfigCallback) to NoConnectionReuseStrategy, the ProtocolException no longer appears. However, we do not want to go with this workaround, as it disables connection reuse entirely.
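For anyone who wants to reproduce the workaround, this is roughly the callback we used. It is a minimal sketch; the host name and the surrounding factory class are placeholders:

```java
import org.apache.http.HttpHost;
import org.apache.http.impl.NoConnectionReuseStrategy;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

public class NoReuseClientFactory {
    // Workaround only: disabling connection reuse makes the ProtocolException
    // go away, but every request then pays the cost of a new (TLS) connection.
    public static RestClientBuilder builderWithoutReuse() {
        return RestClient
                .builder(new HttpHost("es.dev.example.com", 443, "https"))
                .setHttpClientConfigCallback(httpClientBuilder ->
                        httpClientBuilder.setConnectionReuseStrategy(
                                NoConnectionReuseStrategy.INSTANCE));
    }
}
```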

Since I understand you're using the same HLRC version, this certainly comes from a change in the environment (including the cluster upgrade) and not the client library. The fact that the problem disappears with NoConnectionReuseStrategy seems to confirm that.

The 500 errors are also unrelated to the client and indicate an issue on the Elasticsearch server side, which you should see in the server logs.

Or is there a proxy, gateway, or load balancer between the application and the server that could cause the issue?

Thanks! Yes, our cluster is behind a load balancer.

So you should definitely investigate in this area.
