PostgreSQL module fails to parse AWS RDS PostgreSQL logs

I wrote a script that downloads AWS RDS PostgreSQL log files and lets Filebeat read them with the PostgreSQL module, but parsing fails when I look at the Kibana dashboard.

There was a similar post about this before.

AWS RDS PostgreSQL does not allow users to change the log_line_prefix parameter, so the logs always conform to this prefix:
%t:%r:%u@%d:[%p]:
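
For reference, in that prefix %t is the timestamp, %r is the remote host and port, %u is the user name, %d is the database name, and %p is the process ID. Background processes such as the checkpointer leave %r and %u@%d empty, which is roughly how the sample line further below breaks down:

2018-06-14 19:11:26 UTC::@:[3707]:LOG:  checkpoint starting: time
  %t    = 2018-06-14 19:11:26 UTC
  %r    = (empty, no client connection)
  %u@%d = @ (empty user and database)
  %p    = 3707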

When I use the PostgreSQL module with its default settings, I see JSON documents like the following on the "[Filebeat PostgreSQL] Overview" dashboard page:

{
  "_index": "filebeat-6.3.0-2018.06.17",
  "_type": "doc",
  "_id": "8RDYDWQBmFa0p5c-iVek",
  "_version": 1,
  "_score": null,
  "_source": {
    "offset": 738,
    "prospector": {
      "type": "log"
    },
    "source": "/var/log/rds_postgres/test/2018-06-14/postgresql.log.2018-06-14-19",
    "message": "2018-06-14 19:11:26 UTC::@:[3707]:LOG:  checkpoint starting: time",
    "fileset": {
      "module": "postgresql",
      "name": "log"
    },
    "error": {
      "message": "Provided Grok expressions do not match field value: [2018-06-14 19:11:26 UTC::@:[3707]:LOG:  checkpoint starting: time]"
    },
    "input": {
      "type": "log"
    },
    "@timestamp": "2018-06-17T13:04:11.151Z",
    "beat": {
      "hostname": "ip-172-31-24-211",
      "name": "ip-172-31-24-211",
      "version": "6.3.0"
    },
    "host": {
      "name": "ip-172-31-24-211"
    }
  },
  "fields": {
    "@timestamp": [
      "2018-06-17T13:04:11.151Z"
    ]
  },
  "highlight": {
    "fileset.module": [
      "@kibana-highlighted-field@postgresql@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1529240651151
  ]
}

As the error message says, the default grok expressions do not match this format.
So I think the grok patterns should look something like this (I sketch a way to test one of them right after the list):

        "%{LOCALDATETIME:postgresql.log.timestamp} %{WORD:postgresql.log.timezone}:%{IPORHOST:postgresql.log.clientip}:%{USERNAME:postgresql.log.user}@%{HOSTNAME:postgresql.log.database}:\\[%{INT:postgresql.log.process_id}\\]:%{WORD:postgresql.log.level}: %{GREEDYDATA:log_message}",
        "%{LOCALDATETIME:postgresql.log.timestamp} %{WORD:postgresql.log.timezone}:%{IPORHOST:postgresql.log.clientip}:@:\\[%{INT:postgresql.log.process_id}\\]:%{WORD:postgresql.log.level}: %{GREEDYDATA:log_message}",
        "%{LOCALDATETIME:postgresql.log.timestamp} %{WORD:postgresql.log.timezone}:%{IPORHOST:postgresql.log.clientip}:%{USERNAME:postgresql.log.user}@%{HOSTNAME:postgresql.log.database}:\\[%{INT:postgresql.log.process_id}\\]:%{WORD:postgresql.log.level}: duration: %{NUMBER:postgresql.log.duration} ms  statement: %{MULTILINEQUERY:postgresql.log.query}",
        "%{LOCALDATETIME:postgresql.log.timestamp} %{WORD:postgresql.log.timezone}:%{IPORHOST:postgresql.log.clientip}:@:\\[%{INT:postgresql.log.process_id}\\]:%{WORD:postgresql.log.level}: duration: %{NUMBER:postgresql.log.duration} ms  statement: %{MULTILINEQUERY:postgresql.log.query}"

But I don't know where I can override the grok pattern settings of the PostgreSQL module.
Can anyone help me?

Hi,

You can create a new ingest pipeline that matches your log format and reference it in the filebeat.yml file with the pipeline option, as below.

  # The Ingest Node pipeline ID associated with this prospector. If this is set, it
  # overwrites the pipeline option from the Elasticsearch output.
  pipeline:
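
For example, here is a minimal sketch (the pipeline name, the path glob, and the single grok pattern are my own assumptions, not the module's shipped pipeline). First create the pipeline in Elasticsearch, e.g. from the Kibana Dev Tools console:

PUT _ingest/pipeline/postgresql-rds-log
{
  "description": "Parse AWS RDS PostgreSQL log lines (%t:%r:%u@%d:[%p]: prefix)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "pattern_definitions": {
          "LOCALDATETIME": "[-0-9]+ %{TIME}"
        },
        "patterns": [
          "%{LOCALDATETIME:postgresql.log.timestamp} %{WORD:postgresql.log.timezone}:(%{IPORHOST:postgresql.log.clientip})?:(%{USERNAME:postgresql.log.user})?@(%{HOSTNAME:postgresql.log.database})?:\\[%{INT:postgresql.log.process_id}\\]:%{WORD:postgresql.log.level}:%{SPACE}%{GREEDYDATA:postgresql.log.message}"
        ]
      }
    }
  ]
}

Then point a plain log input (instead of the module) at your downloaded files and set the pipeline option; note it only takes effect with the Elasticsearch output:

filebeat.inputs:
- type: log
  paths:
    - /var/log/rds_postgres/*/*/postgresql.log.*
  pipeline: postgresql-rds-log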

Please check the link below for more details.

https://www.elastic.co/guide/en/beats/filebeat/master/configuring-ingest-node.html

Regards,


Thanks! I'll carefully read that document and implement it!
