6.5.4 Filebeat w enabled postgresql module doesn't create proper index - no postgresql.*.* fields

#1

Hi guys,
running ELK stack 6.5.4 on RHEL 7.3 with PostgreSQL 11.2 (I had the same issue with PostgreSQL 9.6).
PostgreSQL has been configured to write to syslog and csvlog, so Filebeat with the postgresql module enabled should be able to parse the /var/log/postgresql log, ingest it into an ES index with the expected structure, and let me visualize it in Kibana, right?
I went through the example in Kibana -> Home / Add Data / PostgreSQL logs, following all the instructions I could find in the documentation on elastic.co on how to configure filebeat.yml, the modules, etc.
Getting Started

Step 1: Download and install Filebeat

First time using Filebeat? See the Getting Started Guide.


curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-darwin-x86_64.tar.gz
tar xzvf filebeat-6.5.4-darwin-x86_64.tar.gz
cd filebeat-6.5.4-darwin-x86_64/

Step 2: Edit the configuration

Modify filebeat.yml to set the connection information:


output.elasticsearch:
  hosts: ["<es_url>"]
  username: "elastic"
  password: "<password>"
setup.kibana:
  host: "<kibana_url>"

Where <password> is the password of the elastic user, <es_url> is the URL of Elasticsearch, and <kibana_url> is the URL of Kibana.

Step 3: Enable and configure the postgresql module

From the installation directory, run:


./filebeat modules enable postgresql

Modify the settings in the modules.d/postgresql.yml file.

Step 4: Start Filebeat

The setup command loads the Kibana dashboards. If the dashboards are already set up, omit this command.


./filebeat setup
./filebeat -e
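Before starting Filebeat for real, the configuration and output connectivity can be verified first (a sketch assuming a standard 6.x tarball install; `test` is a built-in Filebeat subcommand):

```
# Validate filebeat.yml syntax and settings
./filebeat test config -c filebeat.yml

# Verify Filebeat can reach the configured output (ES or Logstash)
./filebeat test output
```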

Module status (incomplete step): check that data is received from the Filebeat postgresql module.
When all steps are complete, you're ready to explore your data.

PostgreSQL logs dashboard

It does not work. Filebeat does scan the postgresql log and sends it to ES, but the index being created does not contain any of the postgresql.*.* fields that the postgresql module was supposed to provide.
The whole postgresql syslog record lands in the "message" field, exactly as if no postgresql module were enabled.
As a result, the prebuilt "PostgreSQL logs dashboard" shipped with Kibana as an example cannot display any data. I was wondering how it was created and tested in the first place.
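For reference, in 6.x the module's parsing is done by an Elasticsearch ingest pipeline that Filebeat loads when it connects with the Elasticsearch output enabled. One way to check whether it was ever installed (pipeline ID assumed from the usual filebeat-&lt;version&gt;-&lt;module&gt;-&lt;fileset&gt;-pipeline naming; `<es_url>` is a placeholder):

```
# List the postgresql ingest pipeline, if it was loaded into Elasticsearch
curl -s '<es_url>/_ingest/pipeline/filebeat-6.5.4-postgresql-log-pipeline'
```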

What am I doing wrong?
Has anyone dealt with this? Has anyone implemented a similar setup, and did it work?

I would really appreciate an example of a filebeat.yml that produces a proper ES index with postgresql.*.* fields, as well as any ideas, suggestions, pointers, or references to material published elsewhere.

Thanks in advance.
Yuri

(Shaunak Kashyap) #2

Could you post your filebeat.yml and modules.d/postgresql.yml files here, please? Remember to mask any sensitive information.

Also, are you seeing any error or warning messages in the Filebeat logs?

#3

Sure, here it goes. Thank you for looking into this.

filebeat.yml:

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true
#  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /u01/pgsql/11/mqlog/physIO_Stats.csv
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
#  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
#  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#  multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: /u01/ELK/filebeat/modules.d/*.yml
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  host: "kibanahost:5605"

  #================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["host1.com:7200","host2.com:7200"]
#  pipeline: "posgresqlcsvlog"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["logstachhost:5044"]

  #================================ Processors =====================================

modules.d/postgresql.yml:
- module: postgresql
  # All logs
  log:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
#    var.paths: /var/log/postgresql*
    var.paths:
#####################################

Logstash pipeline config (a pipeline .conf file, not logstash.yml):

input {
  beats {
    port => 5044
  }
}

filter {
        csv {
            columns => [
                        "qry_timestamp", "schemaname",
                        "heap_read", "heap_hit", "idx_read", "idx_hit", "toast_read",
                        "toast_hit", "tidx_read", "tidx_hit"
            ]

           separator => ","
           remove_field => ["message"]

        }
        date {
#               target => "@timestamp"
                match => ["qry_timestamp", "ISO8601"]
        }
        mutate {
                convert => {
                        "heap_read" => "integer"
                        "heap_hit" => "integer"
                        "idx_read" => "integer"
                        "idx_hit" => "integer"
                        "toast_read" => "integer"
                        "toast_hit" => "integer"
                        "tidx_read" => "integer"
                        "tidx_hit" => "integer"
                }
        }
}
output {
        elasticsearch {
                hosts => [ "eshost:7200" ]
                action => "index"
                index => "statio_index-%{+YYYY.MM.dd}"
#               document_type => ["csv"]
#               document_id => "%{id}"
        }

        # output dots while processing
        #stdout { codec => rubydebug }
        stdout { codec => "dots" }
        # if date parse failure, dump it to screen for review
        if "_dateparsefailure" in [tags] {
                stdout {codec => "rubydebug"}
        }
}
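As an offline sanity check of the csv filter above, a sample stats line (hypothetical values; column names taken from the `columns` list) should split into exactly 10 comma-separated fields:

```shell
# Count comma-separated fields in a hypothetical physIO_Stats.csv line;
# the count should match the 10 columns declared in the Logstash csv filter.
sample='2019-03-04T10:15:30.123,public,10,200,5,300,0,0,0,0'
n=$(echo "$sample" | awk -F',' '{print NF}')
echo "$n columns"
```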
(Shaunak Kashyap) #4

What is the path to your postgresql logs folder on your system? That should be set as the value of var.paths in your modules.d/postgresql.yml file.

Additionally, please restore the filebeat.inputs section of your filebeat.yml to its default contents.
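For example, if the logs lived under /var/log/postgresql/ (a hypothetical path; point it at the real log directory), modules.d/postgresql.yml might look like:

```yaml
- module: postgresql
  log:
    enabled: true
    # var.paths takes a list of globs; this path is an assumption --
    # adjust it to where the postgresql log files actually live.
    var.paths:
      - /var/log/postgresql/*.log
```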

#5

I am sorry, I sent you the wrong yml file. I am having multiple issues with Filebeat. Please ignore the Logstash configuration file as well. Here is what I should have sent you:
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true
  #enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/postgresql
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  multiline.pattern: '^[a-zA-Z]{3} [0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: /u01/elk/filebeat/modules.d/*.yml
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
  reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  host: "kibanahost:5605"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["host1.com:7200","host2.com:7200"]
  pipeline: "posgresqlcsvlog"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["lghost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
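The active multiline pattern in the config above can be sanity-checked offline against a syslog-style line (the sample line is hypothetical; the regex is copied from multiline.pattern):

```shell
# Check that a syslog-style timestamp line matches the multiline pattern,
# i.e. it would start a new event rather than be appended to the previous one.
sample='Mar 04 10:15:30 dbhost postgres[1234]: LOG:  connection received'
pattern='^[a-zA-Z]{3} [0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
echo "$sample" | grep -Eq "$pattern" && echo "pattern matches"
```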

(system) closed #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.