TransportClient not closing properly with Tomcat/Spring

Hello,

I am currently running Elasticsearch 5.0.0 and using the Java transport client of the same version. I have created a DAO bean that connects to Elasticsearch like so:

```xml
<bean id="searchDAO" class="search.SearchDAO" init-method="init" destroy-method="destroy"/>
```
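For context, the DAO is shaped roughly like this (a trimmed sketch, limited to the members the snippets below actually use):

```java
package search;

import org.elasticsearch.client.transport.TransportClient;

public class SearchDAO {

    private TransportClient client;   // assigned in init(), closed in destroy()
    private String elasticSearchHost;
    private String clusterName;
    private long timeout;
    // plus a 'log' field from whatever logging facade is in use

    // Called by Spring after construction (init-method="init")
    public void init() { /* shown below */ }

    // Called by Spring on context shutdown (destroy-method="destroy")
    public void destroy() { /* shown below */ }
}
```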

In the init method, I have the following:

```java
try {
    Settings settings = Settings.builder()
            .put("client.transport.sniff", true)
            .put("client.transport.ping_timeout", timeout, TimeUnit.SECONDS)
            .put("cluster.name", this.clusterName)
            .build();

    TransportClient clientTemp = new PreBuiltTransportClient(settings);
    clientTemp.addTransportAddress(
            new InetSocketTransportAddress(InetAddress.getByName(elasticSearchHost), 9300));
    client = clientTemp;
}
catch (UnknownHostException e) {
    log.error("Invalid host, unable to connect: " + e.getMessage(), e);
}
```

and in the destroy method:

```java
if (client == null) {
    return;
}
log.info("Shutting down search dao");

client.threadPool().shutdownNow();
try {
    client.threadPool().awaitTermination(10, TimeUnit.SECONDS);
}
catch (InterruptedException e) {
    log.error("Unable to close the thread pool for elasticsearch");
}
finally {
    client.close();
    client = null;
}
```

The issue is that Tomcat is unable to shut down properly due to dangling threads created by the TransportClient. I get ugly messages about a possible memory leak in catalina.out, and through VisualVM I can see that Tomcat still has active Elasticsearch client threads:

```
SEVERE: The web application [] appears to have started a thread named [elasticsearch[client][transport_client_boss][T#14]] but has failed to stop it. This is very likely to create a memory leak.
Nov 30, 2016 10:32:40 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [elasticsearch[client][transport_client_boss][T#15]] but has failed to stop it. This is very likely to create a memory leak.
```

Is there something I am missing when closing the connection?

That is interesting; this should not be happening. When the transport client is closed, we close the transport service, which in turn closes the underlying transport that creates these threads, and all of this is done safely. Also, our test runner detects lingering threads and fails test suites where any lingering threads do not die within a certain timeout.

Would you please provide a very simple reproduction (as simple as you can possibly make it)?

Note that you do not need to stop the thread pool explicitly; we do that when the transport client is closed, and that's not where these threads come from anyway.
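In other words, the destroy method can be reduced to a plain close (a sketch based on your code above):

```java
public void destroy() {
    if (client == null) {
        return;
    }
    log.info("Shutting down search dao");
    // close() tears down the transport and the client's internal thread
    // pool, so no explicit threadPool().shutdownNow() is needed
    client.close();
    client = null;
}
```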


Hey,

I updated the code so that it no longer explicitly calls shutdownNow on the thread pool. On closer inspection, I saw that Spring was throwing an exception because it could not properly call the destroy method on the searchDAO.

It's really weird, but sometimes I see transport clients not closing properly, whereas other times it's fine. I will work on getting some sort of simple reproduction (a first attempt is sketched below), though I suspect that might be hard if something specific within our application is causing this.
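As a starting point, I am thinking along these lines, with Spring and Tomcat stripped out entirely (the host and cluster name are placeholders):

```java
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class TransportClientShutdownRepro {

    public static void main(String[] args) throws Exception {
        Settings settings = Settings.builder()
                .put("cluster.name", "elasticsearch") // placeholder
                .build();

        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("localhost"), 9300)); // placeholder

        client.close();

        // Give the client a moment, then report any of its threads that
        // are still alive after close() has returned.
        Thread.sleep(5000);
        Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().contains("elasticsearch"))
                .forEach(t -> System.out.println("still alive: " + t.getName()));
    }
}
```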

I am, however, still getting complaints from catalina.out about thread-local/memory leaks every time.

```
SEVERE: The web application [] appears to have started a thread named [elasticsearch[client][transport_client_boss][T#15]] but has failed to stop it. This is very likely to create a memory leak.
Dec 01, 2016 3:05:34 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [elasticsearch[client][generic][T#2]] but has failed to stop it. This is very likely to create a memory leak.
Dec 01, 2016 3:05:34 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
```

Thanks,
