Memory leak in Elasticsearch Java REST client version 5.5

Hi Team,

I am seeing a memory leak in the Elasticsearch Java REST client.

For the last 5-6 days I have been analysing it, and it looks like the leak is in REST client version 5.5.

My client code is the following:

    public void IndexComputeController(QueryIndexer computeIndex) {
        String filePath = computeIndex.getfilePath();
        String index    = computeIndex.getindex();
        Long   id       = computeIndex.getId();

        System.setProperty("io.netty.allocator.type", "unpooled");

        // Creating pipeline for ingest attachment plugin
        createPipeline();

        try {
            RestClient requester = RestClient.builder(new HttpHost("192.168.*.*", 9300),
                                                      new HttpHost("192.168.*.*", 9200)).build();
            HttpEntity body = new NStringEntity("{" +
                            "\"data\":\"" + encoder(filePath) + "\",\n" +
                            "\"title\":\"" + filePath + "\"" +
                            "}", ContentType.APPLICATION_JSON);
            Map<String, String> params = new HashMap<>();
            params.put("pipeline", "attachment");
            Response httpResponse = requester.performRequest("PUT", "/" + index + "/doc/" + id, params, body);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static boolean createPipeline() {
        RestClient requester = RestClient.builder(new HttpHost("192.168.*.*", 9300),
                                                  new HttpHost("192.168.*.*", 9200)).build();
        HttpEntity body = new NStringEntity("{ \"description\" : \"Extract attachment information\",\n" +
                "  \"processors\": [\n" +
                "    {\n" +
                "      \"attachment\": {\n" +
                "        \"field\": \"data\",\n" +
                "        \"indexed_chars\": -1\n" +
                "      }\n" +
                "    },\n" +
                "    {\n" +
                "      \"set\": {\n" +
                "        \"field\": \"attachment.title\",\n" +
                "        \"value\": \"{{title}}\"\n" +
                "      }\n" +
                "    }\n" +
                "  ]\n" +
                "}", ContentType.APPLICATION_JSON);
        try {
            requester.performRequest("PUT", "/_ingest/pipeline/attachment", Collections.<String, String>emptyMap(), body);
        } catch (IOException e) {
            return false;
        }
        return true;
    }

    public static String encoder(String path) throws Exception {
        File file = new File(path);

        if (!file.exists()) {
            throw new Exception("File does not exist");
        }

        // Reading and encoding files
        byte[] fileContent = Files.readAllBytes(file.toPath());
        return javax.xml.bind.DatatypeConverter.printBase64Binary(fileContent);
    }
As soon as I start uploading files (I am using the ingest attachment plugin), by the time the file count reaches 117 the application consumes around 5 GB of memory and says heap space is running out.

My ES server is running on a different machine, with the IP addresses mentioned in the client-side code.

Is anybody facing the same issue?

How can I get rid of this memory leak?

Thank you very much in advance :slight_smile:

It gives the following error in my Java app:

 exception is java.lang.OutOfMemoryError: Java heap space] with root cause
java.lang.OutOfMemoryError: Java heap space
	at javax.xml.bind.DatatypeConverterImpl._printBase64Binary(
	at javax.xml.bind.DatatypeConverterImpl._printBase64Binary(
	at javax.xml.bind.DatatypeConverterImpl.printBase64Binary(
	at javax.xml.bind.DatatypeConverter.printBase64Binary(
	at net.elastic.spring.recipe.ElastiSearchService.encoder(
	at net.elastic.spring.recipe.ElastiSearchService.IndexComputeController(
	at net.elastic.spring.controller.QueryIndexerRestController.createIndex(
	at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(
	at org.springframework.web.servlet.DispatcherServlet.doService(
	at org.springframework.web.servlet.FrameworkServlet.processRequest(
	at org.springframework.web.servlet.FrameworkServlet.doPost(

The stack trace suggests that the problem is the file you are loading into memory in the encoder method, not the rest client.

Hi Adrien,

Thank you very much for your reply.

I was thinking the same initially, looking at the stack trace.

I commented out the code that does the indexing operation (the performRequest call) and kept the encoder code uncommented. I saw there was no memory leak.

I was going through a few articles/blogs on the best method for Base64 encoding.

I am thinking the bug is most likely in the encoder...

I will update you with my findings once I have solved this issue.

Could you recommend an encoder that gives the best performance with Elasticsearch? I am using the ingest attachment plugin for indexing PDF, DOC, etc. files.

Thank you :slight_smile:
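(For reference, JDK 8+ ships java.util.Base64, which avoids the javax.xml.bind dependency; a minimal sketch, with "hello" standing in for real file contents:)

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Sketch {
    public static void main(String[] args) {
        // Stand-in for bytes read from a file
        byte[] fileContent = "hello".getBytes(StandardCharsets.UTF_8);

        // Encode bytes to a Base64 string, as the encoder method does
        String encoded = Base64.getEncoder().encodeToString(fileContent);
        System.out.println(encoded); // prints "aGVsbG8="
    }
}
```

Note that this still builds the whole encoded string in memory, so very large files will still put pressure on the heap regardless of which encoder is used.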

Hi @jpountz

I got it solved a few days back. The encoder was right, but the problem was that for every POST request I was building the ES Java REST client again and again.

I solved this issue by creating the ES REST client only once, at application startup, and closing it when the application shuts down :). Now, at initialization time the application creates the REST client and the pipeline, and all future POST requests reuse the same REST client.
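For anyone hitting the same problem, a minimal sketch of that pattern (the class name `ElasticClientHolder` and the host are illustrative, not from the original code):

```java
import java.io.IOException;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

// Holds a single RestClient for the whole application lifetime.
// Class name and host are placeholders for illustration.
public final class ElasticClientHolder {

    private static final RestClient CLIENT = RestClient.builder(
            new HttpHost("192.168.0.1", 9200, "http")).build();

    private ElasticClientHolder() {}

    // Every request handler reuses this client instead of building a new one per call.
    public static RestClient client() {
        return CLIENT;
    }

    // Call exactly once on application shutdown (e.g. from a shutdown hook or @PreDestroy).
    public static void close() throws IOException {
        CLIENT.close();
    }
}
```

One more detail worth noting: the low-level REST client speaks HTTP, so it should only point at the HTTP port (9200 by default); port 9300 is the transport protocol port and will not serve REST requests.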


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.