Thanks @Itay_Bittan, one thing that sticks out to me:
```
Containers:
  elasticsearch:
    Limits:
      cpu:     6900m
      memory:  59Gi
    Requests:
      cpu:     6900m
      memory:  59Gi
Volume Claims:
  Name:          elasticsearch-data
  StorageClass:  gp3-elasticsearch-aud
  Capacity:      500Gi
```
Those are quite heavy compute specs (especially the RAM) for ~500GB of data. While that isn't inherently bad, it does stand out to me.
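(For context on why the RAM stands out: with a 59Gi container, you'd typically cap the JVM heap at or below ~31GB to stay under the compressed-oops threshold, leaving the remainder for the filesystem cache. A rough sketch of what that might look like in the pod spec, assuming you set the heap explicitly via an env var; the values here are illustrative, not a recommendation:)

```yaml
# Illustrative sketch: pin the JVM heap well below the container memory limit
# so the remaining ~28Gi is available to the OS filesystem cache.
env:
  - name: ES_JAVA_OPTS
    value: "-Xms31g -Xmx31g"   # min = max heap, under the compressed-oops cutoff
resources:
  requests:
    memory: 59Gi
  limits:
    memory: 59Gi               # request == limit keeps the pod in the Guaranteed QoS class
```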
Would you be able to describe your use case a bit more in detail, and elaborate a bit more on:
> We are heavily indexing data (bulks)
I see you're using GP3 storage, could you provide the EC2 instance type you're using?
Also, would you be able to provide an example of your dedicated master node (or the entire config you're using to deploy the cluster)?
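(For reference, if you happen to be deploying via ECK, a dedicated master tier usually looks something like the sketch below; the name, count, and sizes are placeholder assumptions on my part, not a recommendation:)

```yaml
# Illustrative ECK nodeSet for dedicated masters (names/sizes are placeholders)
nodeSets:
  - name: master
    count: 3                      # an odd number of master-eligible nodes
    config:
      node.roles: ["master"]      # master-only; no data/ingest roles
    podTemplate:
      spec:
        containers:
          - name: elasticsearch
            resources:
              requests:
                cpu: 1
                memory: 4Gi
              limits:
                cpu: 1
                memory: 4Gi
```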
(There are some Kubernetes specific things you can do when deploying Elasticsearch to Kubernetes to help improve performance. I've covered them here in the past: Slower Perfomance with Elaticsearch cluster in kubernetes compared to Docker - #9 by BenB196)