PostgreSQL Metricbeat module, possible issue with database connections

Hi, I started using the PostgreSQL Metricbeat module on Kubernetes.
I installed the latest version of Metricbeat via Helm (elastic/metricbeat chart).
It runs 3 Metricbeat pods, which show no errors, and I can see the metrics in the generated Kibana
dashboard.
I am running my Postgres database as a service using a third-party tool called KubeDB.
This is my Metricbeat configuration:

output.elasticsearch:
  hosts: ["elasticsearch-master:9200"]
setup.kibana.host: "kibana-kibana:5601"
setup.dashboards.enabled: true
setup.template.enabled: true
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.service.name: "postgres"
        - condition:
            equals:
              kubernetes.namespace: dev
          config:
            - module: postgresql
              enabled: true
              metricsets:
                - activity
                - bgwriter
                - database
                - statement
              period: 30s
              hosts: ["postgres://postgres:5432?sslmode=disable"]
              username: xxxx
              password: xxxx
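
On re-reading the config, I notice that the two condition entries are separate items in the templates list, and only the second one actually carries a config. In case that matters for the answer: below is how I believe the two conditions would look combined into a single template with and (same module settings as above; I have not verified that this is what autodiscover expects):

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            and:
              - equals:
                  kubernetes.service.name: "postgres"
              - equals:
                  kubernetes.namespace: dev
          config:
            - module: postgresql
              enabled: true
              metricsets: ["activity", "bgwriter", "database", "statement"]
              period: 30s
              hosts: ["postgres://postgres:5432?sslmode=disable"]
              username: xxxx
              password: xxxx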

After Metricbeat installs, when I run 'SELECT * FROM pg_stat_activity', I can see that there are about 60-70 database connections from Metricbeat in the idle state. The number of connections also varies: sometimes it goes down, but it tends to keep growing until the database starts throwing the 'FATAL: sorry, too many clients already' error. I tried increasing max_connections to 700 in postgresql.conf, and after some time Metricbeat creates enough connections to trigger the error again.
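
For what it's worth, with 3 Metricbeat pods, 4 metricsets, and a 30s period I would only expect a handful of connections per pod, so 60-70 idle sessions seem high to me. A grouped version of the pg_stat_activity query makes the pattern easier to see (standard columns only, so it should work on any recent Postgres version):

-- Count sessions per application, client address, and state;
-- the Metricbeat connections show up as one large idle group.
SELECT application_name, client_addr, state, count(*) AS connections
FROM pg_stat_activity
GROUP BY application_name, client_addr, state
ORDER BY connections DESC;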

This is a shortened list of the connections (the entries repeat like this):

I was just wondering: is this normal behavior, or am I doing something wrong?
If it is normal, can someone please explain the rationale behind the design? I would assume that it's better to reuse connections, or at least close idle ones in a timely manner.
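
In the meantime, as a stopgap on the database side, I am considering letting the server reap idle sessions itself. If I read the docs correctly, PostgreSQL 14 and newer support idle_session_timeout, so something like the following should at least keep the count bounded (assuming Postgres 14+; note that it terminates idle sessions from every client, not just Metricbeat):

-- PostgreSQL 14+ only: close any session that stays idle longer than 5 minutes.
-- Caveat: this applies to all clients, not only Metricbeat.
ALTER SYSTEM SET idle_session_timeout = '5min';
SELECT pg_reload_conf();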

Thank you in advance