Spark/ES on Kubernetes, co-location ok?

Hi all,

I have an architecture question. Has anyone experience deploying Elasticsearch and Spark together on a Kubernetes cluster?

My main concern is how the es-hadoop connector works in this situation with regard to data co-location (Architecture | Elasticsearch for Apache Hadoop [7.15] | Elastic).

Is the system able to start my Spark executor pods co-located with the ES pods, to minimize network traffic and get better performance?
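For context, something like the following pod-affinity sketch is what I have in mind for the executor pod template (the `app: elasticsearch-data` label is hypothetical, it would depend on how ES is actually deployed, e.g. via ECK or a Helm chart):

```yaml
# Hypothetical sketch: ask the scheduler to prefer placing Spark executor
# pods on the same nodes as Elasticsearch data pods. The label selector
# below is an assumption, not from a real deployment.
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: elasticsearch-data
          topologyKey: kubernetes.io/hostname
```

I understand I could feed such a template to Spark via `spark.kubernetes.executor.podTemplateFile`, but even then I'm not sure whether the connector's shard-aware task assignment would actually line up executors with the shards on the same node.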

Thanks