I just upgraded the Elastic Stack from 7.10.1 to 7.14.1, and now Filebeat no longer works.
The Filebeat log keeps repeating the following lines:
filebeat | 2021-09-22T21:57:46.145Z INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
filebeat | 2021-09-22T21:57:46.278Z INFO template/load.go:111 Template "log" already exists and will not be overwritten.
filebeat | 2021-09-22T21:57:46.278Z INFO [index-management] idxmgmt/std.go:297 Loaded index template.
filebeat | 2021-09-22T21:57:46.280Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
filebeat | 2021-09-22T21:57:46.332Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
filebeat | 2021-09-22T21:57:46.332Z INFO [publisher] pipeline/retry.go:223 done
filebeat | 2021-09-22T21:57:47.527Z ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: temporary bulk send failure
filebeat | 2021-09-22T21:57:47.528Z INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(http://elasticsearch:9200))
filebeat | 2021-09-22T21:57:47.528Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
filebeat | 2021-09-22T21:57:47.528Z INFO [publisher] pipeline/retry.go:223 done
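The `temporary bulk send failure` line hides the per-event rejection reason that Elasticsearch returned. A sketch of the logging settings that should surface it (assuming the Filebeat 7.x `logging.*` options; the `elasticsearch` selector is the one that logs rejected bulk items at debug level):

```yaml
# filebeat.yml — temporary, to see why bulk requests are rejected
logging.level: debug
logging.selectors: ["elasticsearch"]
```

With this enabled, the ERROR line should be preceded by the actual per-item error from the bulk response.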
filebeat.yml:
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "log-%{[docker][container][labels][app]}"
  pipeline: pipeline_logs
  ilm.enabled: false

queue.mem:
  events: 1024
  flush.min_events: 256
  flush.timeout: 1s

setup.ilm.enabled: false
setup.template.name: "log"
setup.template.pattern: "log-*"
setup.kibana.host: http://kibana:5601

filebeat.modules:
  - module: system
    syslog:
      enabled: true
      var.paths: ["/hostfs/var/log/syslog*"]

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: swag
          config:
            - module: nginx
              access:
                enabled: true
                var.paths: ["/config/log/nginx/access.log*"]
              error:
                enabled: true
                var.paths: ["/config/log/nginx/error.log*"]
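Since the output routes everything through `pipeline: pipeline_logs`, a missing or broken ingest pipeline after the upgrade would make every bulk item fail. A quick check worth ruling out (assuming the cluster is reachable under the same hostname as in the config):

```shell
# Should return the pipeline definition; a 404 here would explain the rejected bulk items
curl -s 'http://elasticsearch:9200/_ingest/pipeline/pipeline_logs?pretty'
```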
The filebeat service from my docker-compose.yml:
filebeat:
  image: store/elastic/filebeat:7.14.1
  container_name: filebeat
  mem_limit: 128m
  entrypoint: /usr/share/filebeat/config/init/entry.sh
  volumes:
    - ./config/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - ./config/filebeat/init:/usr/share/filebeat/config/init:ro
    - ./config/filebeat/pipelines:/usr/share/filebeat/config/pipelines:ro
    - ./config/filebeat/index:/usr/share/filebeat/config/index:ro
    - /var/lib/docker/containers/:/var/lib/docker/containers/:ro # read logs from containers
    - /var/run/docker.sock:/var/run/docker.sock:ro # docker daemon listener
    - filebeat-data:/usr/share/filebeat/data
    - entry-data:/config:ro
    - /var/log:/hostfs/var/log:ro
  user: root
  labels:
    app: filebeat
  logging:
    driver: 'json-file'
    options:
      max-size: '20m'
      max-file: '5'
      compress: 'true'
The Elasticsearch log shows a warning related to Filebeat:
docker logs elasticsearch --tail 1000 | grep filebeat
{
  "type": "server",
  "timestamp": "2021-09-22T21:08:34,295Z",
  "level": "WARN",
  "component": "o.e.c.m.MetadataIndexTemplateService",
  "cluster.name": "docker-cluster",
  "node.name": "036312a73667",
  "message": "index template [log-filebeat-template] has index patterns [log-filebeat] matching patterns from existing older templates [log] with patterns (log => [log-*]); this template [log-filebeat-template] will take precedence during new index creation",
  "cluster.uuid": "Uvr-gj7UQx-CGv-lBaMXUw",
  "node.id": "6ego0coRQdS96SIFGP7mMQ"
}
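The warning names two overlapping templates, so comparing them directly might reveal a mapping conflict introduced by the upgrade. A sketch (assuming `log` is a legacy template and `log-filebeat-template` a composable one, as the `MetadataIndexTemplateService` message suggests):

```shell
# Legacy template created by Filebeat's setup.template.* settings
curl -s 'http://elasticsearch:9200/_template/log?pretty'
# Composable template that the warning says takes precedence
curl -s 'http://elasticsearch:9200/_index_template/log-filebeat-template?pretty'
```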
There are no ERRORs in the Elasticsearch log.
Note that Metricbeat works as expected: metrics are stored and can be retrieved.