In Logstash, I want to save logs to different outputs for each k8s namespace

Hello everyone!!

I've recently been struggling with Filebeat and Logstash log handling in a Kubernetes environment.

My problem is that when a log like the one below arrives in Logstash, I want to send it to a separate Elasticsearch for each namespace, but I don't know how to configure the filter.
Please, I sincerely hope for your help.

Filebeat configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}*.log
              # - /var/log/containers/nginx-deployment*.log
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    output.logstash:
      hosts: ${LOGSTASH_URL}

A message like the one below is logged in Logstash, and I want to send the log to a different output for each namespace.

{"input":{"type":"container"},"stream":"stdout","host":{"ip":["192.168.0.127","fe80::f1:dcff:fe4c:a59e","192.168.0.165","fe80::d5:c3ff:fed5:4182","fe80::c8db:4cff:fed4:63cb","fe80::f0ad:cfff:feed:6c82","fe80::8cea:7ff:fe6f:eb8e","fe80::bc62:fbff:fe06:bb56","fe80::22:82ff:fea2:c348","192.168.0.202","fe80::d8:12ff:fe34:a856","fe80::94dc:35ff:fe83:ebbd","fe80::2003:65ff:fe89:f679"],"id":"3c2fb2b6af196dc020fd58ec005618e7","hostname":"ip-192-168-0-127.ap-northeast-2.compute.internal","architecture":"x86_64","mac":["02:f1:dc:4c:a5:9e","02:d5:c3:d5:41:82","ca:db:4c:d4:63:cb","f2:ad:cf:ed:6c:82","8e:ea:07:6f:eb:8e","be:62:fb:06:bb:56","02:22:82:a2:c3:48","02:d8:12:34:a8:56","96:dc:35:83:eb:bd","22:03:65:89:f6:79"],"os":{"version":"7 (Core)","codename":"Core","type":"linux","family":"redhat","kernel":"5.4.129-63.229.amzn2.x86_64","name":"CentOS Linux","platform":"centos"},"name":"ip-192-168-0-127.ap-northeast-2.compute.internal","containerized":true},"cloud":{"account":{"id":"239234376445"},"machine":{"type":"t3.medium"},"region":"ap-northeast-2","image":{"id":"ami-068bff1bf20952787"},"provider":"aws","instance":{"id":"i-056d30ea89dc17889"},"availability_zone":"ap-northeast-2a","service":{"name":"EC2"}},"agent":{"version":"7.14.0","ephemeral_id":"5b34cc4e-55a0-45b1-a43d-72d57ed81897","type":"filebeat","hostname":"ip-192-168-0-127.ap-northeast-2.compute.internal","id":"7b92a065-9bec-4f12-b6cd-1c7cfc2ee33b","name":"ip-192-168-0-127.ap-northeast-2.compute.internal"},"@timestamp":"2021-08-30T09:27:02.512Z","@version":"1","ecs":{"version":"1.10.0"},"kubernetes":{"namespace_uid":"aabe02a7-ca3d-44d8-ab52-47814079fbff","labels":{"app_kubernetes_io/name":"aws-efs-csi-driver","controller-revision-hash":"788687fd8b","app_kubernetes_io/instance":"aws-efs-csi-driver","pod-template-generation":"1","app":"efs-csi-node"},"namespace_labels":{"kubernetes_io/metadata_name":"kube-system"},"container":{"name":"liveness-probe"},"node":{"uid":"d7e6d010-d4b7-474b-8bb8-9a147f765c3c","hostname":"ip-192-168-0-127.ap-northeast-2.compute.internal","name":"ip-192-168-0-127.ap-northeast-2.compute.internal","labels":{"node_kubernetes_io/instance-type":"t3.medium","eks_amazonaws_com/nodegroup":"kube-practice","beta_kubernetes_io/os":"linux","topology_kubernetes_io/region":"ap-northeast-2","beta_kubernetes_io/arch":"amd64","failure-domain_beta_kubernetes_io/region":"ap-northeast-2","eks_amazonaws_com/capacityType":"SPOT","kubernetes_io/arch":"amd64","topology_kubernetes_io/zone":"ap-northeast-2a","kubernetes_io/os":"linux","failure-domain_beta_kubernetes_io/zone":"ap-northeast-2a","beta_kubernetes_io/instance-type":"t3.medium","kubernetes_io/hostname":"ip-192-168-0-127.ap-northeast-2.compute.internal","eks_amazonaws_com/nodegroup-image":"ami-068bff1bf20952787"}},"namespace":"kube-system","pod":{"ip":"192.168.0.127","uid":"33e05bda-9d24-4204-98ef-0ae41de45916","name":"efs-csi-node-md8zv"}},"message":"192.168.0.127 - - [30/Aug/2021:09:27:02 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.76.1\" \"-\"","tags":["beats_input_codec_plain_applied"],"container":{"id":"ef0343c19a2146f8f730daf336c9563445e2f2d5e05d1a651cae7cf9a21e493b","image":{"name":"public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.2.0-eks-1-18-2"},"runtime":"docker"},"log":{"offset":183765,"file":{"path":"/var/log/containers/nginx-deployment-66b6c48dd5-sggfh_nm-5_nginx-d9431bf2be03a1fe2eadd99992ec650ab1d972c8847dae87fbdf0f991aa178a4.log"}}}

Assuming that you parse that JSON with a json filter or codec, you can reference the namespace_uid in the elasticsearch output:

index => "%{[kubernetes][namespace_uid]}"
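
As a minimal sketch (the host is a placeholder, and the json filter is only needed if the whole event arrives as a raw JSON string in [message]; the beats input normally delivers it already decoded):

filter {
  # Only needed if the event is still a JSON string in [message].
  json {
    source => "message"
    skip_on_invalid_json => true
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]          # placeholder host
    index => "logs-%{[kubernetes][namespace_uid]}"  # one index per namespace UID
  }
}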

BUT, if that results in a large number of small indices you may have performance problems. See this blog post from Elastic to understand why; it recommends against doing exactly what you want to do.

It is probably going to be more efficient to put everything into one index and add a query clause that filters on the namespace_uid, as in the example below.
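
For instance, a query like this (a sketch: the index pattern is a placeholder, the UID is taken from your sample event, and it assumes the field is mapped as a keyword):

GET logs-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "kubernetes.namespace_uid": "aabe02a7-ca3d-44d8-ab52-47814079fbff" } }
      ]
    }
  }
}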


Thanks for the reply.

My goal is to store the logs in a different Elasticsearch cluster for each k8s namespace.
This is very important for security reasons.

I don't want to separate namespaces by index within a single Elasticsearch cluster.
If there is something I misunderstood, please let me know.

Do you want different Elasticsearch indices, or different Elasticsearch instances? I already showed you how to do indices.
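
If it is separate instances, the usual approach is conditionals on the namespace in the output section. A minimal sketch (the hostnames are placeholders; the namespace names are taken from your sample event):

output {
  if [kubernetes][namespace] == "kube-system" {
    elasticsearch {
      hosts => ["http://es-kube-system:9200"]   # placeholder cluster
      index => "logs-kube-system"
    }
  } else if [kubernetes][namespace] == "nm-5" {
    elasticsearch {
      hosts => ["http://es-nm-5:9200"]          # placeholder cluster
      index => "logs-nm-5"
    }
  } else {
    elasticsearch {
      hosts => ["http://es-default:9200"]       # placeholder cluster
      index => "logs-default"
    }
  }
}

Each namespace you care about gets its own elasticsearch output block pointing at its own cluster; anything unmatched falls through to the default.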

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.