Why doesn't my Kubernetes StatefulSet distribute traffic evenly across 3 pods?

I deployed Logstash as a StatefulSet with 3 replicas in Kubernetes, and I'm using Filebeat to send data to it.

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-nginx
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.0
        resources:
          limits:
            memory: 2Gi
        ports:
          - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
        command: ["/bin/sh","-c"]
        args:
          - bin/logstash -f /usr/share/logstash/pipeline/logstash.conf;
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf

Logstash's Service:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
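
A side note: a StatefulSet's serviceName is expected to point at a headless Service (one with clusterIP: None), which makes the Service DNS name resolve to all pod IPs instead of a single virtual IP. A headless variant of the Service above would look like this (just a sketch, everything else unchanged):

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  clusterIP: None # headless: DNS returns the individual pod IPs
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash

With a headless Service, each pod also gets a stable DNS name such as logstash-nginx-0.logstash.default.svc.cluster.local, which clients like Filebeat can target directly.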

Filebeat's DaemonSet ConfigMap:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    ...

    output.logstash:
      hosts: ["logstash.default.svc.cluster.local:5044"]
      loadbalance: true
      bulk_max_size: 1024
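
With loadbalance: true, Filebeat only balances across the entries listed in hosts, and a single Service hostname that resolves to one ClusterIP still looks like a single endpoint to it. If the Service is made headless as sketched above, one option is to list the per-pod hostnames explicitly (a sketch; the names below assume the StatefulSet logstash-nginx and a headless Service logstash in the default namespace):

output.logstash:
  hosts:
    - "logstash-nginx-0.logstash.default.svc.cluster.local:5044"
    - "logstash-nginx-1.logstash.default.svc.cluster.local:5044"
    - "logstash-nginx-2.logstash.default.svc.cluster.local:5044"
  loadbalance: true
  bulk_max_size: 1024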

When running with real data, most of it went to the second Logstash pod. Occasionally some data reached the first and third pods, but only very little.

Is this a Kubernetes usage issue? How can I get traffic spread evenly across these 3 pods in Kubernetes? Would an internal load balancer be a good option? (Running on GKE.)

apiVersion: v1
kind: Service
metadata:
  name: logstash
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: logstash
spec:
  type: LoadBalancer
  selector:
    app: logstash
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044
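
On newer GKE versions, the internal load balancer is requested with the networking.gke.io/load-balancer-type: "Internal" annotation rather than the older cloud.google.com/load-balancer-type one. Also note that GCP's internal TCP load balancer is a passthrough balancer that distributes connections, not individual events, so long-lived Beats connections could still stick to one pod. A sketch with the newer annotation:

apiVersion: v1
kind: Service
metadata:
  name: logstash
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app: logstash
spec:
  type: LoadBalancer
  selector:
    app: logstash
  ports:
    - name: "5044"
      port: 5044
      targetPort: 5044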

Or is it a Filebeat (or Logstash) issue? Can they check which node's resources are free and then send data there?

I'm not quite sure if there is a problem with Beats. It looks like your k8s setup needs some tweaking.

Yes, thank you for your reply. It seems Kubernetes' default ClusterIP Service type doesn't use round-robin: kube-proxy picks a backend more or less at random per connection, and since Beats holds long-lived connections open, most traffic sticks to whichever pod those connections happened to land on.
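
If you want to keep the plain ClusterIP Service, one mitigation (a sketch based on the Beats docs, not tested here) is to have Filebeat periodically close and re-open its Logstash connections with the ttl option, so new connections get re-balanced across pods; per the docs, ttl only takes effect when pipelining is disabled:

output.logstash:
  hosts: ["logstash.default.svc.cluster.local:5044"]
  loadbalance: true
  ttl: 60s        # re-establish connections every 60s so they can land on different pods
  pipelining: 0   # ttl is not supported with pipelining enabled
  bulk_max_size: 1024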

No issue with Filebeat.
