Request Entity Too Large error in Filebeat

I have Filebeat installed on Kubernetes, and Elasticsearch is also installed on Kubernetes.
I am getting the error below and I do not see any logs in Elasticsearch. Could you please tell me how I can solve this problem:

[elasticsearch]	elasticsearch/client.go:223	failed to perform any bulk index operations: 413 Request Entity Too Large: <html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.17.8</center>
</body>
</html>

What is your filebeat configuration?

Do you mean this:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: 
  namespace: {{ .Values.namespace }}
  labels:
    k8s-app: 
data:
  filebeat.yml: |-
    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          #var.paths: ["/var/log/syslog"]
        auth:
          enabled: true
          #var.paths: ["/var/log/authlog"]

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.namespace: {{ .Values.namespace }}
              config:
                - type: docker
                  fields:
                    installation: ${INSTALLATION_NAME}
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline.pattern: '^\{'
                  multiline.negate: true
                  multiline.match: after  
                  processors:
                    - decode_json_fields:
                        fields: ["message"]
                        target: ""
                        overwrite_keys: true
                    - drop_event:
                        when.and:
                          - or:
                            - regexp:
                                "kubernetes.container.name": 
                            - regexp:
                                "kubernetes.container.name": 
                            - regexp:
                                "kubernetes.container.name": 
                          - or:
                            - equals:
                                logger_name: 
                            - equals:
                                logger_name: 
                            - equals:
                                logger_name: 
                            - equals:
                                logger_name: 
                            - equals:
                                logger_name: 
                    - drop_fields:
                        fields: 
                         
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    output.elasticsearch:
      hosts: {{ .Values.elastic.hosts }}
      username: {{ .Values.elastic.user }}
      password: ${ELASTICSEARCH_PASSWORD}

    logging.level: info

Yes, I meant the filebeat.yml configuration that you provided.

Is there a way to run filebeat in debug mode?

See

I have logging.level: info set, which already logs quite a lot according to the documentation:
info - Logs informational messages, including the number of events that are published. Also logs any warnings, errors, or critical errors.

Could you tell me which logging level you suggest I use?

debug
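
For example (just a sketch based on the ConfigMap you posted), you could change the logging section at the bottom of your filebeat.yml to something like:

    logging.level: debug
    # Optionally restrict the debug output to specific subsystems instead of
    # everything; the [elasticsearch] tag in your error line is such a selector:
    #logging.selectors: ["elasticsearch"]

Keep in mind that debug is very verbose, so switch back to info once you have collected what you need.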

After setting up debug mode I had to restart my Filebeat.
The problem is gone now and I can see messages in Elasticsearch, but I had the same situation yesterday, so the problem will probably come back.
Could you please tell me whether the bulk_max_size option is related to my problem?
The default value is 50, so maybe I should increase the value?

But perhaps this would be better? -> http.max_content_length

I'd decrease it instead.

An HTTP request should not exceed 100mb (the default limit).
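
If you want to try lowering the batch size, a minimal sketch of the change in the output section of your filebeat.yml could look like this (the value 10 is only an illustration, not a tuned recommendation):

    output.elasticsearch:
      hosts: {{ .Values.elastic.hosts }}
      username: {{ .Values.elastic.user }}
      password: ${ELASTICSEARCH_PASSWORD}
      # Fewer events per bulk request means a smaller request body, which helps
      # it stay under whatever size limit the nginx in front of Elasticsearch
      # enforces.
      bulk_max_size: 10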


This looks like it is coming from the nginx in front of Elasticsearch in Kubernetes, not from Elasticsearch itself. Is there anything in the Elasticsearch logs?
