Filebeat Auditd module: Failed to Parse field [error] of type [keyword]

Hello! I have Filebeat running in Kubernetes with the auditd module configured as follows:

Loading and configuring auditd module

filebeat.modules:
  - module: auditd
    log:
      enabled: true
      var.paths: ["/var/log/audit/audit.log*"]

However, the Filebeat pod constantly logs the warning below, and it looks like the audit logs are not being parsed properly. How can I fix this?

Error Message

WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc00062d41b517c67, ext:40117709164, loc:(*time.Location)(0x60807a0)}, Meta:{"pipeline":"filebeat-7.9.3-auditd-log-pipeline"}, Fields:{"agent":{"ephemeral_id":"d34786a7-ac0c-4472-8183-91cf50148272","hostname":"ip-x-x-x-x.x-x-1.compute.internal","id":"64c35ba3-62ce-4d48-a486-700dad774694","name":"ip-x-x-x-x.x-x-1.compute.internal","type":"filebeat","version":"7.9.3"},"apiVersion":"audit.k8s.io/v1","auditID":"74cfb2e9-9ab7-4716-9c1b-90e8c9da19ba","ecs":{"version":"1.5.0"},"event":{"dataset":"auditd.log","module":"auditd"},"fileset":{"name":"log"},"host":{"name":"ip-x-x-x-x.x-x-1.compute.internal"},"input":{"type":"log"},"kind":"Event","level":"Metadata","log":{"file":{"path":"/var/log/audit/audit.log.2"},"offset":8567725},"requestReceivedTimestamp":"2021-02-08T10:34:56.335421Z","requestURI":"/api/v1/namespaces/default/pods?limit=500","responseStatus":{"code":200,"metadata":{}},"service":{"type":"auditd"},"sourceIPs":["x.x.x.x"],"stage":"ResponseComplete","stageTimestamp":"2021-02-08T10:34:56.339951Z","user":{"groups":["oidc:x-x-x-x","oidc:x","oidc:x-x"],"username":"x"},"userAgent":"kubectl/v1.20.2 (linux/amd64) kubernetes/faecb19","verb":"get"}, Private:file.State{Id:"native::2213825-51713", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc0007e88f0), Source:"/var/log/audit/audit.log.2", Offset:8568284, Timestamp:time.Time{wall:0xc00062d41ae5c4ec, ext:40110649610, loc:(*time.Location)(0x60807a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x21c7c1, Device:0xca01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [error] of type [keyword] in document with id '8TA1gXcBjB55RMTVZnF5'. Preview of field's value: '{message=field [message] not present as part of path [message]}'","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:340"}}

Environment

  • Kubernetes 1.15.7
  • filebeat 7.9.3

Hi!

Can you provide the full k8s manifests you use to deploy Filebeat?
Also, two things to check:

  1. Check whether there is any leftover template in your Elasticsearch from a previous Filebeat version that could cause field conflicts. What you are seeing is a field-mapping issue, but we don't know the reason for it yet; the sketch after this list shows one way to look for leftover templates.
  2. Check Elasticsearch's logs for anything related to this. Usually, when this kind of issue occurs in Filebeat, you will see something relevant in the logs of the Elasticsearch node that failed to ingest the event. Let's see if there is something useful there.
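
For the first check, something like the following should work (assuming Elasticsearch is reachable on localhost:9200; adjust the host for your cluster):

curl -X GET "http://localhost:9200/_cat/templates/filebeat*?v"
curl -X GET "http://localhost:9200/_index_template/filebeat*?pretty"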

C.

The full Filebeat manifest is below. I will check both of those places and post what I find as well.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: monitor
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]  # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: monitor
  labels:
    k8s-app: filebeat
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: monitor
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
      name: filebeat
    spec:
      serviceAccountName: filebeat
      tolerations:
      - operator: Exists
      containers:
      - name: filebeat
        image: registry.x.x/beats/filebeat:7.9.3
        imagePullPolicy: IfNotPresent
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 256Mi
            cpu: "2"
          requests:
            cpu: "1"
            memory: 256Mi
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
      - name: beats-exporter
        image: registry.x.x/beats/beat-exporter:0.3.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 32Mi
          requests:
            memory: 32Mi
        ports:
        - containerPort: 9479
          hostPort: 9479
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat
          items:
          - key: filebeat.yml
            path: filebeat.yml
          - key: ilm_policy.json
            path: ilm_policy.json
          - key: fields.yml
            path: fields.yml
      - name: data
        # avoid data resubmission and store the registry on a hostPath
        hostPath:
          path: /var/run/filebeat
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilm-policy
  namespace: monitor
data:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-fields
  namespace: monitor
  labels:
    app: filebeat
data:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat
  namespace: monitor
data:
  filebeat.yml: |+
    filebeat.modules:
      - module: auditd
        log:
          enabled: true
          var.paths: ["/var/log/audit/audit.log*"]

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/syslog
          - /var/log/messages
        include_lines: ["kubelet:", "kernel:"]
        fields_under_root: true
      - type: container
        paths:
          - /var/lib/docker/containers/*/*.log
        fields_under_root: true
        multiline:
          # Java exceptions
          pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:|^\+'
          negate: false
          match: after
        processors:
          - add_kubernetes_metadata:
              labels.dedot: true
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/lib/docker/containers/"
          - drop_event:
              when.or:
                - equals: { kubernetes.labels.app: jenkins }
                - equals: { kubernetes.labels.app: sonarqube }

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          labels.dedot: true
          templates:
            - condition.or:
                - equals: { kubernetes.labels.app: jenkins }
                - equals: { kubernetes.labels.app: sonarqube }
              config:
                - type: container
                  paths:
                    - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
                  fields_under_root: true
                  multiline:
                    # Jenkins|sonarqube logs parser
                    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}[[:space:]]+[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+.*\[id=[0-9]+\]|^[0-9]{4}.[0-9]{2}.[0-9]{2}\s[0-9]{2}:[0-9]{2}:[0-9]{2}.*time=.*ms'
                    negate: true
                    match: after

    processors:
      - decode_json_fields:
          when.regexp.message: "^{"
          fields: ["message"]
          target: ""
          overwrite_keys: true
      - drop_event:
          when.or:
            - equals: { level: DEBUG }
            - equals: { kubernetes.labels.app: eoschecker }
            - and:
                - equals: { kubernetes.labels.app_kubernetes_io/name: ingress-nginx }
                - regexp: { message: "^.*Error obtaining Endpoints for Service.*$" }
      - drop_fields:
          when.regexp.message: "^{"
          fields: ["message", "timestamp"]
      - drop_fields:
          when.equals: {input.type: container}
          fields:
            - agent.ephemeral_id
            - agent.hostname
            - agent.id
            - agent.name
            - agent.type
            - agent.version
            - container.image.name
            - container.runtime
            - ecs.version
            - log.file.path
            - log.offset
      - dissect:
          when:
            has_fields: ["kubernetes.labels.country"]
          tokenizer: "%{namespace}"
          field: "kubernetes.labels.country"
          target_prefix: ""
      - dissect:
          tokenizer: "%{namespace}"
          field: "kubernetes.namespace"
          target_prefix: ""
      - rename:
          fields:
            - from: header.x-request-id
              to: x-request-id
            - from: headers.x-request-id
              to: x-request-id
            - from: headers.X-Request-ID
              to: x-request-id
            - from: headers.X-cus
              to: x-cus
            - from: headers.x-cus
              to: x-cus
            - from: headers.X-Cus
              to: x-cus
            - from: headers.X-CUS
              to: x-cus
            - from: headers.X-dev
              to: x-dev
            - from: headers.X-Dev
              to: x-dev
            - from: headers.x-dev
              to: x-dev
            - from: headers.x-session-id
              to: x-session-id
            - from: headers.X-session-id
              to: x-session-id
            - from: headers.X-Session-Id
              to: x-session-id
            - from: header.X-cus
              to: x-cus
            - from: header.X-Cus
              to: x-cus
            - from: header.X-dev
              to: x-dev
            - from: header.X-Dev
              to: x-dev
            - from: mdc.x-request-id
              to: x-request-id
            - from: mdc.customerId
              to: x-cus
            - from: mdc.x-cus
              to: x-cus
            - from: mdc.x-dev
              to: x-dev
            - from: mdc.transaction.id
              to: transaction.id
            - from: mdc.trace.id
              to: trace.id
            - from: mdc.error.id
              to: error.id
          ignore_missing: true
          fail_on_error: false
      - truncate_fields:
          fields:
            - exception
          max_bytes: 10000
          fail_on_error: false
          ignore_missing: true

    queue.mem:
      events: 1024
      flush.min_events: 512

    output.elasticsearch:
      hosts: ["logs-es-http.elastic-system.svc:9200"]

    setup.ilm:
      policy_file: "/etc/filebeat/ilm_policy.json"
      overwrite: true
      rollover_alias: "filebeat-custom-%{[agent.version]}"

    setup.template:
      enabled: true
      overwrite: true
      fields: "/etc/filebeat/fields.yml"
      settings:
        index.number_of_shards: 6
        index.number_of_replicas: 4

    logging.level: warning
    # Beat-exporter setting for exposing the (file|metrics|*)beat http endpoint at the given port.
    http:
      enabled: true
      host: localhost
      port: 5066

  fields.yml: |-
    - key: custom
      title: TEST
      fields:
      - name: '@timestamp'
        level: core
        required: true
        type: date
      - name: message
        level: core
        type: text

  ilm_policy.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {
              "rollover": {
                "max_age": "3d"
              },
              "set_priority": {
                "priority": 50
              }
            }
          },
          "warm": {
            "actions": {
              "allocate": {
                "number_of_replicas": 1,
                "include": {},
                "exclude": {},
                "require": {
                  "data": "warm"
                }
              },
              "forcemerge": {
                "max_num_segments": 1
              },
              "set_priority": {
                "priority": 25
              },
              "shrink": {
                "number_of_shards": 2
              }
            }
          },
          "cold": {
            "min_age": "5d",
            "actions": {
              "allocate": {
                "require": {
                  "data": "warm"
                }
              },
              "freeze": {},
              "set_priority": {
                "priority": 5
              }
            }
          },
          "delete": {
            "min_age": "30d",
            "actions": {
              "delete": {
                "delete_searchable_snapshot": true
              }
            }
          }
        }
      }
    }

Here is what I found when searching the index templates:

curl -X GET "http://logs-es-http.elastic-system.svc:9200/_index_template/?pretty"
{
  "index_templates" : [
    {
      "name" : "metrics",
      "index_template" : {
        "index_patterns" : [
          "metrics-*-*"
        ],
        "composed_of" : [
          "metrics-mappings",
          "metrics-settings"
        ],
        "priority" : 100,
        "version" : 0,
        "_meta" : {
          "description" : "default metrics template installed by x-pack",
          "managed" : true
        },
        "data_stream" : { }
      }
    },
    {
      "name" : "logs",
      "index_template" : {
        "index_patterns" : [
          "logs-*-*"
        ],
        "composed_of" : [
          "logs-mappings",
          "logs-settings"
        ],
        "priority" : 100,
        "version" : 0,
        "_meta" : {
          "description" : "default logs template installed by x-pack",
          "managed" : true
        },
        "data_stream" : { }
      }
    }
  ]
}
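
One caveat: _index_template only returns composable templates, and as far as I know Filebeat 7.9 still installs its template through the legacy API, so it would not appear in the output above. Checking the legacy endpoint and the actual mapping of the top-level error field might look like this (host taken from the output config above; the calls are a sketch):

curl -X GET "http://logs-es-http.elastic-system.svc:9200/_template/filebeat*?pretty"
curl -X GET "http://logs-es-http.elastic-system.svc:9200/filebeat-*/_mapping/field/error?pretty"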

I also spent some time looking through the Elasticsearch logs and unfortunately found nothing related to this problem.

I should mention that we are sending Kubernetes audit logs to a hostPath so they can be collected by Filebeat, and these seem to be the ones that Filebeat is generally having trouble with for some reason.
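
Worth noting: the failing event in the warning above looks like a Kubernetes API-server audit event rather than a Linux auditd record (it carries apiVersion: audit.k8s.io/v1, auditID and requestURI), yet its metadata shows it being routed through filebeat-7.9.3-auditd-log-pipeline. If that is the cause, one option would be to keep those JSON files out of the auditd module's var.paths and collect them with a dedicated input instead, roughly like this (the path is hypothetical; point it at wherever the API server writes its audit log):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/kubernetes/kube-apiserver-audit.log  # hypothetical path
    json.keys_under_root: true
    json.add_error_key: true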

EDIT
If it's helpful, here are some mappings relevant to auditd:

curl -X GET "localhost:9200/_mapping?pretty" 

 "auditd" : {
          "properties" : {
            "log" : {
              "properties" : {
                "a0" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "a1" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "a2" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "a3" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "entries" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "family" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "grantors" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "items" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "old-auid" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "old-ses" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "op" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "proctitle" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "sequence" : {
                  "type" : "long"
                },
                "ses" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "success" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "syscall" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "table" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "tty" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "unit" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                }
              }
            }
          }
        },

Am I correct in thinking there should be a mapping for "error" that is not there, or am I misunderstanding?

@adrisr does this ring any bells for you?

Looks like your mapping for "error" (at the top level, not inside the "auditd" object) is wrong.

Filebeat is trying to index a document with "error": {"message": "..."}, but ES expects "error" to be a keyword, which is probably caused by an index mapping that is missing the ECS fields.
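
For reference, in a template that includes the ECS fields, error is an object; based on the ECS 1.5 definitions its subfields would map roughly like this (an illustrative excerpt, not taken from this cluster):

"error" : {
  "properties" : {
    "code" : { "type" : "keyword", "ignore_above" : 1024 },
    "id" : { "type" : "keyword", "ignore_above" : 1024 },
    "message" : { "type" : "text" },
    "type" : { "type" : "keyword", "ignore_above" : 1024 }
  }
}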

Did you set up your index templates by running filebeat setup before ingesting data?
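
If not, one way to run it manually from inside one of the Filebeat pods would be something like this (the flags exist in Filebeat 7.x; the exact invocation is a sketch):

filebeat setup --index-management -c /etc/filebeat/filebeat.yml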

Since we are running in Kubernetes, I believe the setup.template section of the manifest above should do that?
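
One thing that may matter here: setup.template points at the custom fields.yml shown above, which defines only @timestamp and message, so the template generated from it will not contain ECS fields such as error.*. A sketch of extending it with the ECS 1.5 error subfields (the layout is an assumption, not a confirmed fix):

- key: custom
  title: TEST
  fields:
  - name: '@timestamp'
    level: core
    required: true
    type: date
  - name: message
    level: core
    type: text
  - name: error
    type: group
    fields:
    - name: code
      type: keyword
    - name: id
      type: keyword
    - name: message
      type: text
    - name: type
      type: keyword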

OK, I ended up just adding a new index for the audit logs, and it seems to work now.
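
The exact change was not posted; a minimal sketch of routing auditd events to a dedicated index via output.elasticsearch.indices (the index name and condition are assumptions) could look like:

output.elasticsearch:
  hosts: ["logs-es-http.elastic-system.svc:9200"]
  indices:
    - index: "filebeat-audit-%{[agent.version]}"
      when.equals:
        event.module: "auditd"

Note that, if I recall correctly, Filebeat 7.x ignores the index/indices settings while ILM is enabled, so the setup.ilm section above may need a matching adjustment; treat this as a starting point rather than a drop-in fix.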

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.