Filebeat PostgreSQL module not parsing message

We are using Filebeat 7.11.0 with the postgresql module. However, for some reason the message field is never parsed, even though the module does add some metadata to the event.

Our log lines appear to be in the same format as the ones used in the module's tests, but they don't produce the expected parsed output.

The logs are sent from Filebeat to our Logstash instance, where the message arrives exactly as it appears in the Filebeat debug log.

According to the Filebeat logs, it finds the postgresql.yml module configuration and picks up new lines appended to the PostgreSQL log file, but the event is published without a parsed message.

For example, this is the line in the log file (names of tables etc. masked):

2021-03-16 13:35:12.231 UTC [37] LOG:  statement: SELECT "x"."y", "x"."z", "x"."expire_date" FROM "x" WHERE ("x"."expire_date" > '2021-03-16T13:35:12.190372+00:00'::timestamptz AND "x"."y" = 'xyz') LIMIT 21

And this is the result if I turn debug logging on in filebeat:

2021-03-16T15:02:31.269Z        DEBUG   [processors]    processing/processors.go:203    Publish event: {
  "@timestamp": "2021-03-16T15:02:31.268Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.11.0",
    "pipeline": "filebeat-7.11.0-postgresql-log-pipeline"
  },
  "event": {
    "module": "postgresql",
    "dataset": "postgresql.log"
  },
  "fileset": {
    "name": "log"
  },
  "deployment": "staging",
  "message": "2021-03-16 13:35:12.231 UTC [37] LOG:  statement: SELECT \"x\".\"y\", \"x\".\"z\", \"x\".\"expire_date\" FROM \"x\" WHERE (\"x\".\"expire_date\" > '2021-03-16T13:35:12.19
0372+00:00'::timestamptz AND \"x\".\"y\" = 'xyz') LIMIT 21",
  "service": {
    "type": "postgresql"
  }
}
We run Filebeat in Docker; the entry in our docker-compose.yml looks like this:
  filebeat:
    restart: unless-stopped
    image: docker.elastic.co/beats/filebeat:7.11.0
    user: root
    environment:
      - DEPLOYMENT_NAME
    volumes:
      - db_logs:/var/log/postgresql:ro
      - ./services/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./services/filebeat/modules.d:/usr/share/filebeat/modules.d:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: >
      filebeat -strict.perms=false
      -E output.logstash.hosts=["${LOGSTASH_HOST}"]
      -E output.logstash.ssl.verification_mode=full
Our filebeat.yml:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config.enabled: false

processors:
  - drop_fields:
      fields:
        - agent
        - docker
        - ecs
        - host
        - input
        - log
        - stream
        - timestamp
      ignore_missing: true
  - add_fields:
      target: ''
      fields:
        deployment: ${DEPLOYMENT_NAME:n/a}

logging.metrics.enabled: false

monitoring.enabled: false

Just to make sure it's not related to our processors, I tried removing them, but this led to the same result.

Our modules.d/postgresql.yml:
- module: postgresql
  log:
    enabled: true
    var.paths: ["/var/log/postgresql/postgresql-*.log"]

Since the full filebeat log is too long for this post, I uploaded it here: 2021-03-16T15:02:30.839Z INFO instance/beat.go:660 Home path: [/usr - Pastebin.com

Hi @martinfrancois, welcome to the Elastic community forums!

The postgresql Filebeat module parses the logs using Elasticsearch ingest pipelines. No parsing is done in Filebeat itself. This is why you see the message field as-is, with no additional parsed fields, both in the Filebeat output and in Logstash.
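
For illustration, the pipeline that Filebeat selects for this event is already visible under @metadata.pipeline in your debug output above. Once it has been loaded into Elasticsearch, you can inspect it there with the ingest API (the pipeline name below is taken from your debug output):

GET _ingest/pipeline/filebeat-7.11.0-postgresql-log-pipeline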

You have two ways to solve this parsing problem:

  • either send these logs directly from Filebeat to Elasticsearch, in which case the appropriate ingest pipelines will be automatically loaded into Elasticsearch for you and parsing will happen before documents are indexed.
  • or keep sending the logs from Filebeat to Logstash, but then use the elasticsearch output in Logstash to send the logs to Elasticsearch. Also, make sure to manually load the necessary ingest pipelines (a sketch of both steps follows below).
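
For the second option, here is a minimal sketch of the Logstash output configuration, following the documented pattern of passing the module's ingest pipeline name through [@metadata][pipeline] (the Elasticsearch host is a placeholder, not taken from this thread):

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["https://your-elasticsearch-host:9200"]   # placeholder host
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      # Run the ingest pipeline chosen by the Filebeat module,
      # e.g. filebeat-7.11.0-postgresql-log-pipeline
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["https://your-elasticsearch-host:9200"]   # placeholder host
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}

The ingest pipelines themselves can be loaded from the Filebeat container, for example (the Elasticsearch host is again a placeholder):

filebeat setup --pipelines --modules postgresql \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["your-elasticsearch-host:9200"]'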

Hope this helps,

Shaunak

Hi @shaunak, thanks for the welcome!

Thanks for your answer! Using your instructions, I was able to set it up and get it working properly.
