Enforcing a 1:1 mapping between pods and cluster nodes when deploying multiple Elasticsearch/Logstash replicas with local storage

Context

I'm deploying the ELK stack (using the eck-stack chart) on a K3s cluster with OpenEBS for local storage. When running multiple Elasticsearch or Logstash replicas on local storage (e.g., OpenEBS `ReadWriteOnce` volumes), each pod must always be rescheduled onto the node that holds its volume, otherwise it loses access to its local data.
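For reference, this is the kind of StorageClass I have in mind: a minimal sketch of an OpenEBS local hostpath class (names follow the common `openebs-hostpath` defaults; my actual class may differ). The important part is `volumeBindingMode: WaitForFirstConsumer`, which delays PV provisioning until a pod is scheduled, after which the PV's node affinity keeps the pod pinned to that node.

```yaml
# Sketch: OpenEBS local hostpath StorageClass (assumed defaults).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
# Defer binding until the pod is scheduled; the resulting PV then
# carries node affinity that pins the pod to that node on restarts.
volumeBindingMode: WaitForFirstConsumer
```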

Current workaround

I'm using the `nodeSets` feature of the eck-elasticsearch chart to enforce a 1:1 mapping between pods and nodes, with one single-pod nodeSet pinned to each node via a `nodeSelector`. Example structure:

eck-elasticsearch:
  enabled: true
  fullnameOverride: elasticsearch
  nodeSets:
  - name: es-set1
    count: 1
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        nodeSelector:
          kubernetes.io/hostname: node-1
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 100Gi

  - name: es-set2
    count: 1
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        nodeSelector:
          kubernetes.io/hostname: node-2
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 100Gi

Limitations

The issue with this workaround is that the Elastic operator creates a separate StatefulSet for each nodeSet, which is probably not optimal. Moreover, the eck-logstash chart does not support the `nodeSets` feature at all. Do you have any better idea for achieving this, given my setup?
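For Logstash, the closest alternative I can think of is a single spec with required pod anti-affinity, so that no two replicas can land on the same node, and relying on the PVC/PV node affinity to keep each replica on its node afterwards. A sketch of what I mean (I haven't validated this; the `logstash.k8s.elastic.co/name` label is my assumption based on how ECK labels Elasticsearch pods):

```yaml
# Sketch: spread Logstash replicas one per node via anti-affinity
# instead of per-node nodeSelectors (assumed label, not verified).
eck-logstash:
  enabled: true
  count: 2
  podTemplate:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                logstash.k8s.elastic.co/name: logstash
            topologyKey: kubernetes.io/hostname
```

This avoids hard-coding hostnames, but I'm not sure it gives the same guarantees as explicit `nodeSelector` pinning, which is why I'm asking.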