Docker logs include unreadable characters in Kibana

Hi,
I am using Filebeat 7.17.3 on Ubuntu 18.04.6 LTS to ship Docker logs, with Filebeat installed directly on the host OS (I am not using a Filebeat container).
I noticed a couple of things that I would like to fix, and I couldn't find an answer for them in the documentation.
My filebeat.yml is:

setup.ilm.enabled: auto
setup.template.name: "myindex"
setup.template.pattern: "myindex-*"
setup.ilm.pattern: "{now/d}-000001"
setup.ilm.rollover_alias: "myindex"
# =======================Containers Settings=================================

filebeat.autodiscover:
  providers:
      - type: docker
        hints.enabled: true
        hints.default_config:
          type: container
          paths:
            - /var/lib/docker/*/containers/*/*.log
        exclude_lines: ["^\\s+[\\-`('.|_]"]
        json.key_under_root: true
        json.ignore_decoding_error: true  
        json.add_error_key: true
        json.message_key: log

filebeat.inputs:
- type: log
  enabled: true
  paths:
   - /home/username/documents/projects/filebeat/*.log
processors:
  - add_docker_metadata: ~

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.kibana:
output.elasticsearch:
  hosts: ["REMOVED"]
  index: "unity-%{+yyyy.MM.dd}"
  ssl.certificate_authorities: ["REMOVED"]
processors:
- decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: true
- rename:
      fields:
        - from: "source"
          to: "msource"
      ignore_missing: true
      fail_on_error: false

cloud.id: "${ES_USER}" 
cloud.auth: "${ES_PASS}" 

While the logs are shipped to Kibana, they are not readable because they include terminal colour codes. For example, when I run ls -la inside a container, the log entries look like “ [1;34mbin[m [1;34mctmp[m [1;34mdev[m [1;34metc[m [1;34mhome[m [1;34mlib[m [1;34mmedia[m [1;34mmnt[m [1;34mopt[m [1;34mproc[m [1;34mroot[m [1;34mrun[m [1;34msbin[m [1;34msrv[m [1;34msys[m [1;34mtmp[m [1;34musr[m [1;34mvar[m”.

How can I show only the text?
I also don't see any logs about containers starting or stopping. Is this expected behaviour?

Thanks

Hello @dev9

Indeed, this is the actual content of the log entry. You either have to disable coloured output in the Docker container, or add a script processor (Script Processor | Filebeat Reference [7.17] | Elastic) to strip the control characters.

I found a regex that seems to do the job at How to remove ^[, and all of the ANSI escape sequences in a file using linux shell scripting - Stack Overflow:

printf '\e[31m%s\e[0m' "this is in red"|sed 's/\x1B\[[0-9;]*[JKmsu]//g'

You have to port it to a script processor.
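A rough, untested sketch of such a processor (it assumes the colour codes arrive in the message field, as in your decode_json_fields setup, and reuses the same pattern as the sed command above):

processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          // Assumption: the raw log line is in the "message" field.
          var msg = event.Get('message');
          if (msg) {
            // Same pattern as the sed example: ESC, '[', parameters, final byte.
            event.Put('message', msg.replace(/\x1B\[[0-9;]*[JKmsu]/g, ''));
          }
          return event;
        }

The if (msg) guard simply skips events that do not have a message field; adjust the field name to whatever your configuration actually produces.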

Hi Andrea,
Is this the only option I have to sanitize the logs, even though I am using the official provider?
Any idea why I only see logs when I exec inside a container? And how can I reduce the number of documents added to our index? For example, 24 records get added when I run ls -la inside a container, because Filebeat sends every line as its own record.

I managed to remove the ANSI escape characters and clean up the Kibana entries using the script processor. In case anyone else has the same issue, the solution is:

setup.ilm.enabled: auto
setup.template.name: "myindex"
setup.template.pattern: "myindex-*"
setup.ilm.pattern: "{now/d}-000001"
setup.ilm.rollover_alias: "myindex_ILM"
# =======================Containers Settings=================================

filebeat.autodiscover:
  providers:
      - type: docker
        hints.enabled: true
        hints.default_config:
          type: container
          paths:
            - /var/lib/docker/*/containers/*/*.log
        exclude_lines: ["^\\s+[\\-`('.|_]"]
        json.key_under_root: true
        json.ignore_decoding_error: true  
        json.add_error_key: true
        json.message_key: log

filebeat.inputs:
- type: log
  enabled: true
  paths:
   - /home/USER/documents/projects/filebeat/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.kibana:
output.elasticsearch:
  hosts: ["REMOVED"]
  index: "myindex-%{+yyyy.MM.dd}"
  ssl.certificate_authorities: [REMOVED]
processors:
- decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: true
- rename:
      fields:
        - from: "source"
          to: "msource"
      ignore_missing: true
      fail_on_error: false
- script:
    lang: javascript
    source: >
      function process(event) {
        // Strip ANSI escape sequences (colour codes, cursor control) from the message
        var regex = new RegExp('\x1B(?:[@-Z\\-_]|\\[[0-?]*[ -/]*[@-~])', 'g');
        var clean = event.Get('message');
        clean = clean.replace(regex, '');
        event.Put('message', clean);
        return event;
      }
    
cloud.id: "${ES_USER}" 
cloud.auth: "${ES_PASS}" 

However, I still only see events logged when I exec inside the container. None of the container state changes (start/stop) are logged.

Hello @dev9,

Filebeat will collect the logs generated by the containers, not the state of the containers.
