Data_stream.dataset: kubernetes.state_node produces empty fields

Hello - I've deployed ECK with a standalone Elastic Agent. In the Kubernetes integration configuration I've set kube-state-metrics.monitoring:8080 as the host.
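
The relevant part of my standalone agent policy looks roughly like this (trimmed sketch; only the hosts line differs from the integration defaults, the rest follows the standard kubernetes/metrics input layout):

inputs:
  - id: kubernetes/metrics-kube-state-metrics
    type: kubernetes/metrics
    use_output: default
    streams:
      - data_stream:
          dataset: kubernetes.state_node
          type: metrics
        metricsets:
          - state_node
        hosts:
          - 'kube-state-metrics.monitoring:8080'
        period: 10s
        add_metadata: true
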
When I curl that endpoint I can see the metrics exposed (output below). But in Kibana, with the filter data_stream.dataset: kubernetes.state_node applied, all the fields are empty in the state_node-ad-hoc data view that the Kubernetes dashboards use. What am I doing wrong, please?

$ curl kube-state-metrics.monitoring:8080/metrics -s | grep kube | head
# HELP kube_configmap_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_configmap_annotations gauge
# HELP kube_configmap_labels [STABLE] Kubernetes labels converted to Prometheus labels.
# TYPE kube_configmap_labels gauge
# HELP kube_configmap_info [STABLE] Information about configmap.
# TYPE kube_configmap_info gauge
kube_configmap_info{namespace="core-purge",configmap="cronjob-code-invigo"} 1
kube_configmap_info{namespace="core-purge",configmap="kube-root-ca.crt"} 1
kube_configmap_info{namespace="load-kafka",configmap="kube-root-ca.crt"} 1
kube_configmap_info{namespace="space-ext-netsf-exposed",configmap="schemas-volume-coordinator"} 1
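
The grep above only happens to match configmap metrics, so I assume the right sanity check for the series behind state_node would be something like this (looking for kube_node_info, kube_node_status_*, and so on):

$ curl kube-state-metrics.monitoring:8080/metrics -s | grep '^kube_node_' | head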

Then in the agent logs I can see entries like this one:

{"log.level":"info","@timestamp":"2024-07-18T06:05:56.884Z","message":"Non-zero metrics in the last 30s","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"kubernetes/metrics-default","type":"kubernetes/metrics"},"log":{"source":"kubernetes/metrics-default"},"monitoring":{"ecs.version":"1.6.0","metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":860893184}}}},"cpu":{"system":{"ticks":1164540,"time":{"ms":170}},"total":{"ticks":7452350,"time":{"ms":1170},"value":7452350},"user":{"ticks":6287810,"time":{"ms":1000}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":17},"info":{"ephemeral_id":"38bfd2ba-211c-4cdb-a8dc-231c9077a665","uptime":{"ms":227820061},"version":"8.14.1"},"memstats":{"gc_next":105680984,"memory_alloc":97010104,"memory_total":1665878525456,"rss":285343744},"runtime":{"goroutines":140}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":7}},"output":{"events":{"acked":2089,"active":0,"batches":2,"duplicates":8,"total":2097},"read":{"bytes":57938,"errors":2},"write":{"bytes":449999,"latency":{"histogram":{"count":22777,"max":539,"mean":119.232421875,"median":66,"min":25,"p75":208,"p95":257.75,"p99":463.25,"p999":538.8750000000001,"stddev":97.61532984473813}}}},"pipeline":{"clients":7,"events":{"active":170,"published":2266,"total":2266},"queue":{"acked":2097}}},"metricbeat":{"kubernetes":{"apiserver":{"events":1757,"success":1757},"container":{"events":132,"success":132},"event":{"events":5,"success":5},"node":{"events":3,"success":3},"pod":{"events":96,"success":96},"system":{"events":9,"success":9},"volume":{"events":264,"success":264}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.5,"15":0.55,"5":0.68,"norm":{"1":0.0039,"15":0.0043,"5":0.0053}}}}},"log.logger":"monitoring","log.origin":{"file.line":187,"file.name":"log/log.go","function":"github.com/elastic/beats/v7/libbeat/monitoring/report/log.(*reporter).logSnapshot"},"service.name":"metricbeat","ecs.version":"1.6.0"}
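
Is there anything else I should check? For example, would querying the data stream directly from Kibana Dev Tools tell me whether the documents themselves are missing the fields? Something like this (sketch, assuming the default namespace):

GET metrics-kubernetes.state_node-default/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }]
}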