Not able to run Heartbeat with monitorType browser in Kubernetes

I want to use the Synthetic Monitoring feature from Elastic.
I have deployed Heartbeat on Kubernetes (manifest: https://raw.githubusercontent.com/elastic/beats/7.15/deploy/kubernetes/heartbeat-kubernetes.yaml) and added heartbeat monitors.
I tried it on a Kubernetes 1.19 cluster on IBM Cloud as well as on Minikube (also Kubernetes 1.19), and I get the same error message in both environments.
The http Heartbeat monitors are working fine, but as soon as I add a heartbeat.monitors entry with type: browser, I get the following error:

2021-10-15T09:28:23.348Z	INFO	browser/browser.go:35	Synthetic browser monitor detected! Please note synthetic monitors are a beta feature!
2021-10-15T09:28:23.464Z	INFO	[monitoring]	log/log.go:153	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":20000}},"id":"fe32ac0c04ec259a6c372e35ac4a45f5915b1092308878bf52d33644ab3c9268"},"cpuacct":{"id":"fe32ac0c04ec259a6c372e35ac4a45f5915b1092308878bf52d33644ab3c9268","total":{"ns":310777387}},"memory":{"id":"fe32ac0c04ec259a6c372e35ac4a45f5915b1092308878bf52d33644ab3c9268","mem":{"limit":{"bytes":209715200},"usage":{"bytes":33722368}}}},"cpu":{"system":{"ticks":60,"time":{"ms":79}},"total":{"ticks":220,"time":{"ms":262},"value":220},"user":{"ticks":160,"time":{"ms":183}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":24},"info":{"ephemeral_id":"d5c57419-8096-4e73-8b71-0076bfc5dd63","uptime":{"ms":197},"version":"7.14.1"},"memstats":{"gc_next":8783904,"memory_alloc":5096184,"memory_sys":74793992,"memory_total":16609168,"rss":86163456},"runtime":{"goroutines":50}},"heartbeat":{"browser":{"monitor_stops":1},"http":{"endpoint_starts":10,"monitor_starts":8},"scheduler":{"jobs":{"active":7},"tasks":{"active":7}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"elasticsearch"},"pipeline":{"clients":10,"events":{"active":0},"queue":{"max_events":4096}}},"system":{"cpu":{"cores":4},"load":{"1":13.94,"15":13.07,"5":13.7,"norm":{"1":3.485,"15":3.2675,"5":3.425}}}}}}
2021-10-15T09:28:23.466Z	INFO	[monitoring]	log/log.go:154	Uptime: 199.519761ms
2021-10-15T09:28:23.467Z	INFO	[monitoring]	log/log.go:131	Stopping metrics logging.
2021-10-15T09:28:23.468Z	INFO	instance/beat.go:479	heartbeat stopped.
2021-10-15T09:28:23.468Z	ERROR	instance/beat.go:989	Exiting: could not create monitor: job err script monitors cannot be run as root! Current UID is 0
Exiting: could not create monitor: job err script monitors cannot be run as root! Current UID is 0

I have used the example from Elastic (Quickstart: Synthetic monitoring via Docker | Observability Guide [7.15] | Elastic):

heartbeat.monitors:
    - type: browser
      id: synthetic-inline-suites
      name: Elastic website
      schedule: '@every 1m'
      source:
        inline:
          script: |-
            step("load homepage", async () => {
              await page.goto('https://www.elastic.co');
            });
            step("hover over products menu", async () => {
              await page.hover('css=[data-nav-item=products]');
            });

The Kubernetes YAML for Heartbeat runs the container as the root user, but if I don't run it as root, then the files Heartbeat needs, like heartbeat.yml, cannot be loaded.
Am I missing something, is there a way to fix this, or does this just not work on Kubernetes yet?

Thanks and Regards
Ben Stucke

Apologies for the confusion @Ben_Stucke, this is something we're working on, and in fact improvements have already been merged into our 7.16 branch: in our official Docker containers we now automatically setuid to a regular user even when started as root.

It seems that we should update the k8s file you linked as well.

The workaround in your case would be to:

  1. Run Heartbeat as a non-root user (say, heartbeat); see the sketch below.
  2. Ensure that all config files are owned by that user.
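
For the first step, something along these lines in the Deployment's pod spec should do it (untested sketch; I'm assuming the non-root heartbeat user baked into the official image is uid/gid 1000, adjust the IDs if yours differ):

spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000   # assumed uid of the image's built-in heartbeat user
        runAsGroup: 1000  # assumed gid of that user

For the second step, the important part is that the mounted heartbeat.yml (and anything else Heartbeat reads) ends up readable by that uid rather than only by root.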

@Andrew_Cholakian1 Can I already download a Docker image for 7.16.0? That image does not seem to exist yet. I have found the image "docker.elastic.co/beats/heartbeat:7.16.0-a907c0d5-SNAPSHOT", but it does not seem to contain this new fix.

Do you know how I can make the config files owned by that user (a link to the documentation would be fine too)? I'm having trouble figuring out how to do it via the Kubernetes files; my rough guess is at the bottom of this post, below my current manifest.

Currently my kubernetes.yaml file looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: heartbeat-deployment-config
  namespace: chatbot-monitoring
  labels:
    k8s-app: heartbeat
data:
  heartbeat.yml: |-
    heartbeat.monitors:
    - type: browser
      id: synthetic-inline-suites
      name: Elastic website
      schedule: '@every 1m'
      source:
        inline:
          script: |-
            step("load homepage", async () => {
              await page.goto('https://www.elastic.co');
            });
            step("hover over products menu", async () => {
              await page.hover('css=[data-nav-item=products]');
            });

    #heartbeat.autodiscover:
    #  # Autodiscover pods
    #  providers:
    #    - type: kubernetes
    #      resource: pod
    #      scope: cluster
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #
    #  # Autodiscover services
    #  providers:
    #    - type: kubernetes
    #      resource: service
    #      scope: cluster
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #
    #  # Autodiscover nodes
    #  providers:
    #    - type: kubernetes
    #      resource: node
    #      node: ${NODE_NAME}
    #      scope: cluster
    #      templates:
    #        # Example, check SSH port of all cluster nodes:
    #        - condition: ~
    #          config:
    #            - hosts:
    #                - ${data.host}:22
    #              name: ${data.kubernetes.node.name}
    #              schedule: '@every 10s'
    #              timeout: 5s
    #              type: tcp

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heartbeat
  namespace: chatbot-monitoring
  labels:
    k8s-app: heartbeat
spec:
  selector:
    matchLabels:
      k8s-app: heartbeat
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:7.15.0
        args: [
          "-c", "/etc/heartbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTIC_CLOUD_ID
          valueFrom:
            secretKeyRef:
              name: elastic
              key: cloudId
        - name: ELASTIC_CLOUD_AUTH
          valueFrom:
            secretKeyRef:
              name: elastic
              key: cloudAuth
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 1
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
        - name: data
          mountPath: /usr/share/heartbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-deployment-config
      - name: data
        hostPath:
          path: /var/lib/heartbeat-data
          type: DirectoryOrCreate

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heartbeat
subjects:
- kind: ServiceAccount
  name: heartbeat
  namespace: chatbot-monitoring
roleRef:
  kind: ClusterRole
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat
  namespace: chatbot-monitoring
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: chatbot-monitoring
roleRef:
  kind: Role
  name: heartbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: heartbeat-kubeadm-config
  namespace: chatbot-monitoring
subjects:
  - kind: ServiceAccount
    name: heartbeat
    namespace: chatbot-monitoring
roleRef:
  kind: Role
  name: heartbeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heartbeat
  labels:
    k8s-app: heartbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - pods
  - services
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat
  # should be the namespace where heartbeat is running
  namespace: chatbot-monitoring
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: heartbeat-kubeadm-config
  namespace: chatbot-monitoring
  labels:
    k8s-app: heartbeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heartbeat
  namespace: chatbot-monitoring
  labels:
    k8s-app: heartbeat
---
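
My rough guess for the ownership part (untested, and I'm not sure the IDs are right) would be to drop the container-level runAsUser: 1 and instead set a pod-level securityContext, plus make the ConfigMap group-readable, roughly like this:

spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000    # guessing this matches the heartbeat user in the image
        runAsGroup: 1000
        fsGroup: 1000      # so the mounted config volume gets this group
      # containers: as above, just without the runAsUser: 1 securityContext
      volumes:
      - name: config
        configMap:
          defaultMode: 0640   # group-readable instead of 0600
          name: heartbeat-deployment-config

I guess the hostPath data directory would also need to be made writable by that user separately, since fsGroup does not seem to apply to hostPath volumes.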

This topic was automatically closed 24 days after the last reply. New replies are no longer allowed.