JVM hang, Eden space 100%

Hello,
We have two servers. We use Elasticsearch 2.3.2 and run two data nodes on servers with 32 GB RAM and a 32-core CPU. The Elasticsearch Java heap size is set to 24 GB, and the index is configured with 5 shards and 0 replicas.
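For reference, here is a minimal sketch of how an index with those settings could be created through the Java API. The helper class name, and the assumption that it reuses the same TransportClient built in the test case below, are mine; the original post does not show how the index was created.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class IndexSetup
{
    // Hypothetical helper: creates "default_index" with the settings described above
    // (5 primary shards, 0 replicas). "client" is assumed to be the same TransportClient
    // that the test case below builds.
    static void createIndex(Client client)
    {
        client.admin().indices().prepareCreate("default_index")
            .setSettings(Settings.builder()
                .put("index.number_of_shards", 5)
                .put("index.number_of_replicas", 0))
            .get();
    }
}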

Elasticsearch 2.3.2
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

After a period of time, the JVM hangs. The Eden space is 100% used, while the other node is normal.
Is my node overloaded? Do I need to add more heap to it, or add another node?

The jstat -gcutil output keeps repeating the same values while the node hangs:

S0 S1 E O M CCS YGC YGCT FGC FGCT GCT
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
72.71 95.32 100.00 48.53 97.96 96.41 11546 1248.249 46 24.405 1272.654
Test case:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class CreateIndex implements Runnable
{
    // Placeholder connection details; replace with the real node address and transport port.
    private static final String IP = "127.0.0.1";

    private static final int PORT = 9300;

    private Client client;

    private BulkProcessor bulkProcessor;

    public CreateIndex()
    {
        try
        {
            client = TransportClient.builder()
                .settings(Settings.builder()
                    .put("client.transport.sniff", true)
                    .put("cluster.name", "Elastic_2.3"))
                .build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(IP), PORT));
            this.bulkProcessor = initBulkProcessor();
        }
        catch (UnknownHostException e)
        {
            e.printStackTrace();
        }
    }

    private BulkProcessor initBulkProcessor()
    {
        return BulkProcessor.builder(this.client, new BulkProcessor.Listener()
        {
            @Override
            public void beforeBulk(long executionId, BulkRequest request)
            {
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure)
            {
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response)
            {
                if (response.hasFailures())
                {
                    System.out.println("There were failures while executing bulk: " + response.buildFailureMessage());
                    throw new RuntimeException(response.buildFailureMessage());
                }
            }
        }).setBulkActions(3000).setConcurrentRequests(2).build();
    }

    @Override
    public void run()
    {
        // Endless indexing loop: documents are generated and handed to the BulkProcessor as fast as possible.
        while (true)
        {
            Map<String, Object> map = new HashMap<String, Object>();
            // Generate data
            this.bulkProcessor.add(new IndexRequest("default_index", "default_index_type").source(map));
        }
    }
}
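For completeness, a sketch of how this Runnable might be driven during the test; the driver class name and the thread count of 4 are assumptions, since the original test harness is not shown:

public class CreateIndexDriver
{
    public static void main(String[] args)
    {
        // Start a few indexing threads, each running the endless bulk-indexing loop above.
        // The thread count is an assumed value for illustration.
        for (int i = 0; i < 4; i++)
        {
            new Thread(new CreateIndex()).start();
        }
    }
}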

We have two nodes: we use Elasticsearch 2.3.2 and run two data nodes on servers with 32 GB RAM and a 32-core CPU. The Elasticsearch Java heap size is set to 24 GB, and the index is configured with 5 shards and 0 replicas.

Can you clarify for me?

Are you saying you have a single server with 32 GB RAM, and you are running two ES nodes on that server, allocating 24 GB of heap to each?

I have two servers.