Failed to encode event: unsupported float value

I'm doing a POC that uses a Metricbeat pod to scrape all metrics from a Prometheus pod in the same namespace of a Kubernetes cluster. The Metricbeat pod is able to connect to the Prometheus pod's /federate endpoint and gather results, but something in the result set is causing a JSON processing error. I found another post with a similar error where enabling the DEBUG logging level was recommended; I did that and harvested the errors and the associated JSON.
First, here is the configuration of the metricbeat pod:

#MetricBeat Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metricbeat-prometheus
  namespace: monitoring
  labels:
    app: metricbeat-prometheus
spec:
  template:
    metadata:
      labels:
        app: metricbeat-prometheus
    spec:
      serviceAccountName: metricbeat-prometheus
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat-prometheus
        image: docker.elastic.co/beats/metricbeat:7.3.1
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: setup.dashboards.enabled
          value: "true"
        - name: setup.kibana.host
          value: "kibana.logging:5601"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
        - name: data
          mountPath: "/data/metricbeat"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        persistentVolumeClaim:
          claimName: metricbeat-prometheus-pvc

Here are the two ConfigMaps the deployment YAML refers to. First, the modules ConfigMap:
#MetricBeat Modules ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: monitoring
  labels:
    app: metricbeat-prometheus
data:
  prometheus.yml: |-
    - module: prometheus
      period: 10s
      hosts: ["prometheus-k8s.monitoring:9090"]
      metrics_path: "/federate"
      query:
        'match[]': '{__name__!=""}'
      #username: "user"
      #password: "secret"

      # This can be used for service account based authorization:
      #  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #ssl.certificate_authorities:
      #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
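Since the federate result set is what the encoder chokes on, one way to narrow down the offending series is to pull the same data by hand and filter for sample values JSON cannot represent. A hedged sketch, assuming the service name and port from the module config above and that curl is reachable from inside the cluster:

```shell
# Fetch the same /federate output Metricbeat scrapes (service name taken
# from the module config above) and keep only samples whose value is NaN
# or +/-Inf -- values a JSON encoder cannot represent. Federate lines look
# like: metric{labels} value [timestamp]
curl -sG 'http://prometheus-k8s.monitoring:9090/federate' \
     --data-urlencode 'match[]={__name__!=""}' \
  | grep -E ' (NaN|[+-]Inf)( [0-9]+)?$'
```

Any lines this prints identify the series you would need to filter out (or whose exporter you would need to fix).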

Here is the other ConfigMap:

#MetricBeat Base ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: monitoring
  labels:
    app: metricbeat-prometheus
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted metricbeat-daemonset-modules configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true

    #logging.level: debug

    output.elasticsearch:
      #hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      hosts: '[elasticsearch-host-ip]:9200'
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

As mentioned above, I enabled the DEBUG logging level per the advice in the similar post, and I have data captured from a transaction, but it's very large. Assuming someone is willing to help, could I send the file directly instead of posting it here? I didn't see a way to attach a file.

@kaiyan-sheng do you have any thoughts on the correct way to handle this?