Port Exhaustion

Using NEST or ElasticsearchClient (1.9, 2) to output trace diagnostics from the .NET services I'm running, I'm running into port exhaustion problems.
netstat -ano | findstr 9200 | measure-object -line
yields well over 5,000 established connections to the server. This number can reach over 10,000.

I'm using both IndexAsync and BulkAsync - I don't care what the response is, only that the request was successfully submitted.

How can I better control this?

Hey @amccool What version of Elasticsearch are you running against?

You can use ConnectionLimit in NEST 2.x and 5.x to control the number of open TCP connections to an endpoint, but in NEST 1.x this is hardcoded to 10,000.
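In 2.x and 5.x it looks something like this (a sketch; the endpoint and index name are placeholders):

```csharp
using System;
using Nest;

class Example
{
    static void Main()
    {
        var uri = new Uri("http://localhost:9200");

        // NEST 2.x/5.x: ConnectionLimit caps the number of concurrent
        // TCP connections the client will open to this endpoint.
        var settings = new ConnectionSettings(uri)
            .DefaultIndex("default-index")
            .ConnectionLimit(80);

        var client = new ElasticClient(settings);
    }
}
```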

You can, however, provide your own implementation of IConnection that derives from HttpConnection and overrides the AlterServicePoint method to set ConnectionLimit to a more reasonable number, e.g. 80, as in 2.x and 5.x.
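A rough sketch of such a connection for 1.x follows. The constructor and AlterServicePoint parameter lists shown here are assumptions that vary between NEST 1.x releases, so mirror whatever your version's HttpConnection actually declares:

```csharp
using System.Net;
using Elasticsearch.Net.Connection;

// Sketch only: match the base constructor and the virtual
// AlterServicePoint signature in your NEST 1.x HttpConnection source.
public class CustomHttpConnectionWithLowerConnectionLimit : HttpConnection
{
    public CustomHttpConnectionWithLowerConnectionLimit(
        IConnectionConfigurationValues settings)
        : base(settings) { }

    // Assumed signature; adjust to the virtual method in your version.
    protected override void AlterServicePoint(
        ServicePoint requestServicePoint, ConnectionSettings settings)
    {
        base.AlterServicePoint(requestServicePoint, settings);

        // Cap concurrent connections per endpoint instead of the
        // 1.x hardcoded 10,000; 80 matches the 2.x/5.x default.
        requestServicePoint.ConnectionLimit = 80;
    }
}
```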

Then pass it to the ElasticClient constructor to use it:

var uri = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(uri, defaultIndex: "default-index");
var client = new ElasticClient(settings, new CustomHttpConnectionWithLowerConnectionLimit(settings));

The clients are executing against a single node of ES running 5.3.1.
Thanks for the IConnection pointer.

No worries, let me know if it helps with port exhaustion :thumbsup:

Fighting with my docker images. Can't wait to try the IConnection.

Quick question - by limiting the available connections, should I see connection re-use? And does that back up into some sort of queue?

TCP connection pooling is managed by the .NET framework, so connections should already be reused. On the full .NET Framework this is controlled by ServicePointManager, and the framework will queue a request until a connection becomes available.
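To illustrate (full .NET Framework only; the endpoint is a placeholder), you can inspect the ServicePoint for an endpoint to see the pooling limits in play:

```csharp
using System;
using System.Net;

class Inspect
{
    static void Main()
    {
        // Each distinct scheme/host/port gets one ServicePoint, which
        // owns the pooled TCP connections for that endpoint.
        var servicePoint = ServicePointManager.FindServicePoint(
            new Uri("http://localhost:9200"));

        // Requests beyond ConnectionLimit queue inside the framework
        // until a pooled connection frees up.
        Console.WriteLine(servicePoint.ConnectionLimit);
        Console.WriteLine(servicePoint.CurrentConnections);
    }
}
```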

Setting the limit to 1000, I'm getting 429 (Too Many Requests) back from the server (docker). I "think" this has more to do with the server than my code. Still looking.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.