WARN Invalid response: Data too small to contain a valid length

What could be the cause of this warning? This Packetbeat instance is deployed on a database server that is constantly processing queries.

Have you checked the Packetbeat logs for packet loss? Which sniffer type are you using? (af_packet seems to be the most stable sniffer to date.)
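For illustration, the sniffer type is selected in packetbeat.yml; a minimal sketch (the device name eth0 is just an example, substitute your own interface):

```yaml
# packetbeat.yml (sketch)
packetbeat.interfaces.device: eth0   # example device name; use your actual interface
packetbeat.interfaces.type: af_packet
```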

The analyzer has to synchronize with the network traffic. This requires trying to parse and retracting on failure. That is, parsing failures or missing requests/responses on Packetbeat startup are not uncommon.

On packet loss, the TCP stream cannot be fully reconstructed. The parser and transaction state are simply dropped, and the analyzer needs to resynchronize with the network traffic (just as on startup).

Hi

I tried af_packet but got the following error:
CRIT Exiting: Initializing sniffer failed: Error creating sniffer: setsockopt packet_rx_ring: cannot allocate memory

I ran it with sudo, but it still fails with the same error.

The af_packet sniffer is based on kernel features. When initializing af_packet, the kernel tries to allocate one contiguous block of memory to share with the user process. The error indicates there is not enough unfragmented physical memory available.

The amount of shared memory (that is, the packet buffer size) is configurable via buffer_size_mb.

Also add this line to your packetbeat.yml file: packetbeat.interfaces.snaplen: 1514

On long-running servers with many processes, possibly already dipping into swap, af_packet can have trouble allocating memory. Starting Packetbeat right after boot, before any other services, might reduce memory pressure at Packetbeat startup.
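Putting the suggestions above together, a sketch of the relevant packetbeat.yml settings (the 30 MB value is an example, not a recommendation; a smaller buffer makes the contiguous allocation easier for the kernel):

```yaml
# packetbeat.yml (sketch)
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 30   # example size; smaller values ease the contiguous allocation
packetbeat.interfaces.snaplen: 1514        # capture at most one Ethernet frame per packet
```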

I applied your recommendations and it seems to have started. Can I pick your brain on a few more things, please?

  1. I know my DB server is heavily used, with some long-running processes, but Packetbeat seems to be dropping or misreporting events. Is there a MySQL-specific configuration that allows listening to long-running queries? And how can I verify that Packetbeat is doing what it is supposed to do?
  2. Are there any buffer or wait-time settings that could help me?
  • Packetbeat has a quite low default transaction timeout. The timeout is required to clean up state in case of missing packets/traffic.
  • The mysql module currently does not support prepared statements. Message/transaction types the parser does not support cannot be reported (see ticket).
  • You can check the Packetbeat logs for error messages (parsing failures, unknown message types). In addition, Packetbeat publishes some metrics to the log file every 30s.
  • Note: because Packetbeat has to keep state, it will drop transactions if the output buffer is full (queue_size setting).
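As a sketch of where the transaction timeout lives, the mysql protocol section of packetbeat.yml can be tuned; the 30s value is an example for long-running queries, and exact option names and defaults may vary between Packetbeat versions, so check the reference for your release:

```yaml
# packetbeat.yml (sketch; option availability depends on the Packetbeat version)
packetbeat.protocols.mysql:
  ports: [3306]
  transaction_timeout: 30s   # example: raise the per-transaction timeout for long-running queries
```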

Hey
Thanks for the info
So if the MySQL plugin still has some things left to implement, is my assumption correct that the flows type would be my safest bet if I have to monitor the data flowing in and out on port 3306?

Currently, flows is basically packet and byte counts on TCP connections. If all you want is to know about active connections and the data being pushed back and forth, then it's good to have.
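Enabling flow monitoring is a small addition to packetbeat.yml; a sketch (the interval values are examples):

```yaml
# packetbeat.yml (sketch)
packetbeat.flows:
  timeout: 30s   # example: a flow is considered finished after 30s of inactivity
  period: 10s    # example: report the state of active flows every 10s
```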
