Filebeat modules config: failed to publish events: temporary bulk send failure

I just upgraded the Elastic Stack from 7.10.1 to 7.14.1, and now Filebeat doesn't work.
Filebeat's log repeats the following lines:

filebeat         | 2021-09-22T21:57:46.145Z     INFO    [esclientleg]   eslegclient/connection.go:273   Attempting to connect to Elasticsearch version 7.14.1
filebeat         | 2021-09-22T21:57:46.278Z     INFO    template/load.go:111    Template "log" already exists and will not be overwritten.
filebeat         | 2021-09-22T21:57:46.278Z     INFO    [index-management]      idxmgmt/std.go:297      Loaded index template.
filebeat         | 2021-09-22T21:57:46.280Z     INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
filebeat         | 2021-09-22T21:57:46.332Z     INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
filebeat         | 2021-09-22T21:57:46.332Z     INFO    [publisher]     pipeline/retry.go:223     done
filebeat         | 2021-09-22T21:57:47.527Z     ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: temporary bulk send failure
filebeat         | 2021-09-22T21:57:47.528Z     INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(http://elasticsearch:9200))
filebeat         | 2021-09-22T21:57:47.528Z     INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
filebeat         | 2021-09-22T21:57:47.528Z     INFO    [publisher]     pipeline/retry.go:223     done

filebeat.yml:

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "log-%{[docker][container][labels][app]}"
  pipeline: pipeline_logs
  ilm.enabled: false

queue.mem:
  events: 1024
  flush.min_events: 256
  flush.timeout: 1s

setup.ilm.enabled: false
setup.template.name: "log"
setup.template.pattern: "log-*"
setup.kibana.host: http://kibana:5601

filebeat.modules:
  - module: system
    syslog:
      enabled: true
      var.paths: ["/hostfs/var/log/syslog*"]

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: swag
          config:
            - module: nginx
              access:
                enabled: true
                var.paths: [ "/config/log/nginx/access.log*" ]
              error:
                enabled: true
                var.paths: [ "/config/log/nginx/error.log*" ]
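Since the output routes every event through pipeline: pipeline_logs, one thing worth verifying (my assumption, not confirmed) is that this ingest pipeline still exists on the upgraded cluster: if a bulk request references a missing pipeline, Elasticsearch rejects the events and Filebeat only reports the generic "temporary bulk send failure". A quick check, assuming the same elasticsearch:9200 host as in the config above:

```shell
# Fetch the custom pipeline referenced by output.elasticsearch.pipeline;
# a 404 here would mean every event is rejected at ingest time.
curl -XGET 'http://elasticsearch:9200/_ingest/pipeline/pipeline_logs?pretty'

# The module pipelines loaded by "filebeat setup" can be listed the same way:
curl -XGET 'http://elasticsearch:9200/_ingest/pipeline/filebeat-*?pretty'
```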

filebeat docker-compose service:

  filebeat:
    image: store/elastic/filebeat:7.14.1
    container_name: filebeat
    mem_limit: 128m
    entrypoint: /usr/share/filebeat/config/init/entry.sh
    volumes:
      - ./config/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./config/filebeat/init:/usr/share/filebeat/config/init:ro
      - ./config/filebeat/pipelines:/usr/share/filebeat/config/pipelines:ro
      - ./config/filebeat/index:/usr/share/filebeat/config/index:ro
      - /var/lib/docker/containers/:/var/lib/docker/containers/:ro #read logs from containers
      - /var/run/docker.sock:/var/run/docker.sock:ro #docker daemon listener
      - filebeat-data:/usr/share/filebeat/data
      - entry-data:/config:ro
      - /var/log:/hostfs/var/log:ro
    user: root
    labels:
      app: filebeat
    logging:
      driver: 'json-file'
      options:
        max-size: '20m'
        max-file: '5'
        compress: 'true'

The Elasticsearch logs show a warning related to Filebeat:

docker logs Elasticsearch --tail 1000 | grep filebeat

{
    "type": "server",
    "timestamp": "2021-09-22T21:08:34,295Z",
    "level": "WARN",
    "component": "o.e.c.m.MetadataIndexTemplateService",
    "cluster.name": "docker-cluster",
    "node.name": "036312a73667",
    "message": "index template [log-filebeat-template] has index patterns [log-filebeat] matching patterns from existing older templates [log] with patterns (log => [log-*]); this template [log-filebeat-template] will take precedence during new index creation",
    "cluster.uuid": "Uvr-gj7UQx-CGv-lBaMXUw",
    "node.id": "6ego0coRQdS96SIFGP7mMQ"
}
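
That warning suggests a legacy "log" template (left over from the old stack version) and a newer "log-filebeat-template" both match the same indices. Both kinds of template can be inspected directly, which might show which mappings actually apply:

```shell
# Legacy (v1) templates, where a pre-upgrade "log" template would live:
curl -XGET 'http://elasticsearch:9200/_template/log*?pretty'

# Composable (v2) templates, introduced in 7.8:
curl -XGET 'http://elasticsearch:9200/_index_template/log*?pretty'

# Compact overview of all matching templates:
curl -XGET 'http://elasticsearch:9200/_cat/templates/log*?v'
```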

There are no ERROR entries in the Elasticsearch logs.

Note that Metricbeat works as expected: metrics are stored and can be retrieved.

It seems the log index template was created by the old version and Filebeat will not change it.
I deleted the template and changed filebeat.yml to include the Filebeat version:

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "log-%{[agent.version]}-%{[docker][container][labels][app]}"
 ....
setup.template.name: "log-%{[agent.version]}"
setup.template.pattern: "log-%{[agent.version]}-*"

Now I get the same issue; only the logging changed, to Template "log-7.14.1":

filebeat         | 2021-09-22T23:01:48.746Z     INFO    [esclientleg]   eslegclient/connection.go:273   Attempting to connect to Elasticsearch version 7.14.1
filebeat         | 2021-09-22T23:01:48.784Z     INFO    template/load.go:111    Template "log-7.14.1" already exists and will not be overwritten.
filebeat         | 2021-09-22T23:01:48.784Z     INFO    [index-management]      idxmgmt/std.go:297      Loaded index template.
filebeat         | 2021-09-22T23:01:48.814Z     INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
filebeat         | 2021-09-22T23:01:48.830Z     INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
filebeat         | 2021-09-22T23:01:48.830Z     INFO    [publisher]     pipeline/retry.go:223     done
filebeat         | 2021-09-22T23:01:50.039Z     ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: temporary bulk send failure
filebeat         | 2021-09-22T23:01:50.039Z     INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(http://elasticsearch:9200))
filebeat         | 2021-09-22T23:01:50.039Z     INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
filebeat         | 2021-09-22T23:01:50.040Z     INFO    [publisher]     pipeline/retry.go:223     done

I also removed the index template altogether with setup.template.enabled: false, and got the same problem:

filebeat         | 2021-09-22T23:15:46.441Z     INFO    [esclientleg]   eslegclient/connection.go:273   Attempting to connect to Elasticsearch version 7.14.1
filebeat         | 2021-09-22T23:15:46.540Z     INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
filebeat         | 2021-09-22T23:15:46.540Z     INFO    [publisher]     pipeline/retry.go:213   retryer: send wait signal to consumer
filebeat         | 2021-09-22T23:15:46.540Z     INFO    [publisher]     pipeline/retry.go:217     done
filebeat         | 2021-09-22T23:15:47.748Z     ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: temporary bulk send failure
filebeat         | 2021-09-22T23:15:47.749Z     INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(http://elasticsearch:9200))
filebeat         | 2021-09-22T23:15:47.749Z     INFO    [publisher]     pipeline/retry.go:213   retryer: send wait signal to consumer
filebeat         | 2021-09-22T23:15:47.749Z     INFO    [publisher]     pipeline/retry.go:217     done

Elasticsearch is accessible from the Filebeat container:

curl -XGET http://elasticsearch:9200/_cluster/health
{"cluster_name":"docker-cluster","status":"green","...

Following the troubleshooting guide, I ran Filebeat with filebeat -e -d "publisher", but no new logs were written.
Running it as filebeat -e -d "*" is unusable; there are far too many logs in the console.
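A middle ground between "publisher" and "*" might be the elasticsearch debug selector; my understanding (an assumption, not verified against the Beats source) is that the Elasticsearch output logs its bulk requests and responses under that selector, which should expose the per-event rejection reason that the generic error message hides:

```shell
# Assumption: the Elasticsearch output logs under the "elasticsearch" selector,
# so this shows bulk request/response details without the noise of -d "*".
filebeat -e -d "elasticsearch"
```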

I tried to replicate this upgrade issue in a dev environment, but there it worked fine.
The same issue happened during a previous update, but I didn't have much data then, so I just deleted all the old data to get Filebeat working.

Any pointers on how to investigate this problem? I don't want to lose data after every update!

With the version update I also added a filebeat.modules: section to the configuration, and that seems to be what causes the problem.
If I remove the filebeat.modules: section from my configuration, Filebeat works.
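Given that, my guess (unverified) is that the system module's ingest pipelines were never loaded for 7.14.1, so module events reference a pipeline that doesn't exist on the cluster. The module pipelines can be loaded explicitly with the standard setup subcommand:

```shell
# Load (or reload) the ingest pipelines for the given modules;
# --pipelines and --modules are standard "filebeat setup" flags.
filebeat setup --pipelines --modules system
```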
I opened a GitHub issue: https://github.com/elastic/beats/issues/28100