Fluent-bit microservice restarting frequently

Hi Team,

We have created a ConfigMap as below. The fluent-bit microservice keeps terminating and restarting, and I can see it throwing the error below in its logs. After a restart it indexes fine, and there are no buffered *.flb files in the corresponding buffer path. From what I have read, the rejection can be caused by a malformed _bulk request, but is there a way to control or overcome this? We can't modify the logs every time; we expect the logs to be picked up and indexed without any manual intervention.

Error log:

[error] [output:es:es.0] HTTP status=400 URI=/_bulk, response:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Malformed action/metadata line [65], expected START_OBJECT or END_OBJECT but found [VALUE_STRING]"}],"type":"illegal_argument_exception","reason":"Malformed action/metadata line [65], expected START_OBJECT or END_OBJECT but found [VALUE_STRING]"},"status":400}

Configmap:
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kubernetes.*
        Kube_URL            https://XXXX
        Kube_CA_File        <path>
        Kube_Token_File     <path>
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020
        storage.path  /var/log/flb/
        storage.sync  normal
        storage.checksum          off
        storage.backlog.mem_limit 5M
        storage.max_chunks_up   512
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kubernetes.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Skip_Long_Lines   Off
        Refresh_Interval  5
        storage.type      filesystem
        Buffer_Chunk_Size 32KB
        Buffer_Max_Size   128KB
        Read_from_Head    True
  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        HTTP_User       fluentd
        HTTP_Passwd     fluentd
        tls             On
        tls.verify      Off
        Include_Tag_Key true
        Tag_Key         tag
        Logstash_Format On
        Replace_Dots    On
        Retry_Limit     5
        Buffer_Size    128KB
        storage.total_limit_size  2G
        workers        3
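
One suggestion for narrowing this down (not from the original thread, and only if your Fluent Bit version supports it): the es output plugin has a Trace_Error option that prints the Elasticsearch API calls that return an error, so you can see which record produced the malformed bulk line. A minimal sketch, reusing the output above:

    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        # ... existing settings unchanged ...
        # print the Elasticsearch API calls that return an error, for
        # diagnostics only; verbose, so disable it once the offending
        # record has been identified
        Trace_Error     On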

You'd need to chat to the fluentbit developers about that; it's not something we can help with.

That suggests you're not passing Elasticsearch an object when it expects one in its mapping. That's something that is coming from your data or from fluentbit, which we can't help with.
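
For what it's worth, the kind of conflict described there (my own illustration, not taken from this thread) looks like the same field arriving as a plain value in one event and as an object in another, for example after Merge_Log expands a JSON log line:

    # first event: "response" is a string, so the index maps it as text
    {"log_processed":{"response":"OK"}}

    # later event: "response" is an object, which Elasticsearch rejects
    # because the existing mapping expects a concrete value, not an object
    {"log_processed":{"response":{"code":200,"message":"OK"}}}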
