Filebeat fails to index events into a dynamic index name built from Docker metadata

Hi,

We have a Docker Swarm running a bunch of microservices.
To monitor their logs we have:

  • Filebeat reading Docker logs based on a prospector configuration, enriching those logs with Docker metadata, and pushing them to Logstash

  • Logstash receiving logs from the Filebeat instances, dynamically building the Elasticsearch index name from the Docker metadata received from Filebeat, and indexing those logs into Elasticsearch.

That configuration worked well, but we decided to change it for two main reasons:

  1. Filebeat prospectors will be deprecated in Filebeat 7.0
  2. Logstash was consuming cluster resources for no real added value.

So we decided to change the Filebeat configuration to use a filebeat.input of type docker and to ship logs directly from Filebeat to Elasticsearch.

Unfortunately, Filebeat has trouble generating the index name dynamically.

Here is our initial Filebeat.yml configuration:

filebeat.prospectors:
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  tail_files: true
  processors:
  - add_docker_metadata: ~
output.logstash:
  hosts: ["logstash:5044"]
logging.level: warning
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["https://elastic-proxy:9200"]
    username: "beats_system"
    password: "@@@beats_system_password@@@"
    ssl.certificate_authorities: ["/usr/share/filebeat/config/our_cert.crt"]

And here is our Logstash initial configuration:

input {
  beats {
    port => 5044
    type => "docker_logs"
  }
}

filter {
  mutate {
    replace => ["host", "%{[beat][hostname]}"]
  }
}

output {
  if [type] == "docker_logs" {
    elasticsearch {
      ssl => true
      cacert => "/usr/share/logstash/config/our_cert.crt"
      user => "logstash_internal"
      password => "@@@logstash_internal_password@@@"
      hosts => ["https://elastic-proxy:9200"]
      index => "logs-%{[docker][container][labels][com][docker][swarm][service][name]}-%{[docker][container][labels][it][environment]}-%{+YYYY.MM}"
      template => "/usr/share/logstash/pipeline/logs_index_template.json"
      template_overwrite => true
    }
  }
}

And now, here is our new Filebeat configuration, using an input and replacing Logstash:

filebeat.inputs:
- type: docker
  containers:
    ids:
      - "*"
  encoding: us-ascii
  json.message_key: log
  json.keys_under_root: true
  json.ignore_decoding_error: true
  processors:
  - add_docker_metadata: ~
  - rename:
      fields:
        - from: "message"
          to: "log"
output:
  elasticsearch:
    hosts: ["https://elastic-proxy:9200"]
    index: "logs-%{[docker.container.labels.com.docker.swarm.service.name]}-%{[docker.container.labels.it.environment]}-%{+YYYY.MM}"
    ssl:
      certificate_authorities: ["/usr/share/filebeat/config/our_cert.crt"]
    username: "logstash_internal"
    password: "@@@logstash_internal_password@@@"
setup:
  template:
    enabled: true
    name: "logs"
    pattern: "logs-*"
    json:
      name: "logs"
      enabled: true
      path: "/usr/share/filebeat/config/logs_index_template.json"
xpack.monitoring:
  enabled: true
  elasticsearch:
    username: "beats_system"
    password: "@@@beats_system_password@@@"
    ssl.certificate_authorities: ["/usr/share/filebeat/config/our_cert.crt"]
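For reference (this is our understanding of the two syntaxes, not something we have verified across versions): Logstash sprintf references address nested fields with one bracket pair per level, while Filebeat format strings expect the whole dotted path inside a single bracket pair. That is why the two index lines are written differently:

```yaml
# Filebeat format string: the full dotted field path goes in ONE bracket pair
output.elasticsearch:
  index: "logs-%{[docker.container.labels.com.docker.swarm.service.name]}-%{+YYYY.MM}"

# Logstash sprintf (for comparison): one bracket pair per nesting level
#   index => "logs-%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
```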

When Filebeat runs with this last configuration, I find the following in Filebeat's output:

(status=404): {"type":"index_not_found_exception","reason":"no such index and [action.auto_create_index] ([.security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,logs-*,assets,metricbeat-*]) doesn't match","index_uuid":"_na_","index":""}
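One thing we noticed while debugging (an assumption on our side, not a confirmed cause): the empty "index":"" in the error suggests the fields in the format string never resolve, so the whole index name collapses. If add_docker_metadata de-dots label keys (its labels.dedot option replaces dots with underscores in some versions), a dotted path like com.docker.swarm.service.name would no longer match anything. A minimal Python sketch of that lookup failure; the event layout below is illustrative, not a captured event:

```python
def resolve(event, dotted_key):
    # Walk a nested dict using a dotted key path; return None if any segment is missing.
    node = event
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

# Hypothetical event after label de-dotting: dots in the label key became underscores.
event = {"docker": {"container": {"labels": {"com_docker_swarm_service_name": "web"}}}}

# The original dotted path no longer matches once label keys are de-dotted:
assert resolve(event, "docker.container.labels.com.docker.swarm.service.name") is None
# The de-dotted key resolves fine:
assert resolve(event, "docker.container.labels.com_docker_swarm_service_name") == "web"
```

If that is what happens inside Filebeat, an unresolved field in the index format string would explain the empty index name in the 404.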

Thanks for the help!
