Packetbeat postgresql

Hello,

I am running into some odd errors with Packetbeat and the PostgreSQL module.
First of all, I can see data being sent to Elasticsearch, and I can view it in Kibana.
Secondly, I think this is more of a bug than a configuration error. PostgreSQL handles about 250,000 queries per second, but only a few hundred lines per second make it into my Elasticsearch. Even averaged over 15 minutes, millions of lines are missing.

How can I tackle this? Does anybody have an idea what I can turn off or on in Packetbeat to see why it is failing? I can find the following lines in /var/log/packetbeat/packetbeat:

2018-04-11T08:28:20.572+0200 WARN pgsql/parse.go:501 Pgsql parser expected data message, but received command of type 110
2018-04-11T08:28:30.193+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=13, i=8
2018-04-11T08:28:30.262+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=28, i=8
2018-04-11T08:28:32.742+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=138, i=3
2018-04-11T08:28:33.424+0200 WARN pgsql/parse.go:501 Pgsql parser expected data message, but received command of type 110
2018-04-11T08:28:33.425+0200 WARN pgsql/parse.go:501 Pgsql parser expected data message, but received command of type 110
2018-04-11T08:28:34.192+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=150, i=3
2018-04-11T08:28:36.205+0200 WARN pgsql/parse.go:501 Pgsql parser expected data message, but received command of type 110
2018-04-11T08:28:39.486+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=22, i=4
2018-04-11T08:28:39.488+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=60, i=4
2018-04-11T08:28:39.491+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=60, i=4
2018-04-11T08:28:39.492+0200 ERROR pgsql/parse.go:531 Pgsql invalid column_length=4294967295, buffer_length=60, i=4

There's probably some packet loss affecting Packetbeat's ability to reconstruct the PostgreSQL data stream. (A column_length of 4294967295 is 0xFFFFFFFF, i.e. -1 read as an unsigned integer, which suggests the parser has lost its position in the stream after missing packets.) Packetbeat logs a metric at 30-second intervals that indicates how many packets have been dropped. Drops happen when more data is coming in than can be processed.
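To check for drops, you can look at those periodic metrics lines in the Packetbeat log. The snippet below filters for them; the sample line and its JSON payload are hypothetical (modeled on Beats 6.x output, where a summary is logged every 30 seconds), so on a real host you would grep the actual log file as shown in the comment:

```shell
# On a real host, filter the Packetbeat log for the periodic metrics summary:
#   grep "Non-zero metrics" /var/log/packetbeat/packetbeat
# Non-zero drop counters in its payload point to packet loss.
# Here we run the same filter over a hypothetical sample line so the
# snippet is self-contained:
sample='2018-04-11T08:28:50.572+0200 INFO Non-zero metrics in the last 30s: {"libbeat":{...}}'
printf '%s\n' "$sample" | grep -o "Non-zero metrics in the last 30s"
```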

What are your settings in Packetbeat's config? If you are on Linux, try using af_packet and tuning a few settings:

packetbeat.interfaces.device: eth0            # Don't use 'any'. Listen specifically on one interface.
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100

Disable flows if you aren't using them.

packetbeat.flows.enabled: false 

And disable any other protocols that you are not using. This limits the number of packets Packetbeat has to handle, so it can spend its resources on PostgreSQL traffic only.
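Putting the suggestions above together, a minimal sketch of the relevant packetbeat.yml sections might look like the following. The interface name, buffer size, and port are examples; adjust them for your host:

```yaml
packetbeat.interfaces.device: eth0         # listen on one specific interface, not 'any'
packetbeat.interfaces.type: af_packet      # memory-mapped capture, faster than libpcap on Linux
packetbeat.interfaces.buffer_size_mb: 100  # kernel capture buffer; increase if drops persist
packetbeat.interfaces.snaplen: 1514

packetbeat.flows.enabled: false            # flows disabled, as suggested above

# Enable only the protocol you actually need.
packetbeat.protocols:
- type: pgsql
  ports: [5432]
```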

Thanks for the information about buffer_size_mb, that really helped.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.