Squid Module: Splitting multiple logs which are combined as a single message in Kibana

We have the problem that all logs sent from our Squid server through Filebeat with the Squid module are combined into single messages in Kibana.
Each message contains 5 to 15 entries which are in fact individual log lines from the Squid server.
The Squid server is using the default log output format, but we tried different formats without success.

Logformat squid:

> %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt

Example of log, IP changed to 0.0.0.0:

> 1607961779.062 7 0.0.0.0 TCP_MISS/206 5004 GET http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/xxxx? - HIER_DIRECT/0.0.0.0 application/octet-stream
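To make the field layout concrete, here is a small Python sketch that splits one such line according to the format codes above. The field names are my own labels for illustration, not anything the Squid module itself uses:

```python
# Split one default-format ("logformat squid") access.log line into fields.
# Comments map each field to its code in:
#   %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
line = ("1607961779.062 7 0.0.0.0 TCP_MISS/206 5004 GET "
        "http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/xxxx? "
        "- HIER_DIRECT/0.0.0.0 application/octet-stream")

fields = line.split()
record = {
    "timestamp": float(fields[0]),   # %ts.%03tu - seconds.millis since epoch
    "duration_ms": int(fields[1]),   # %6tr      - elapsed request time in ms
    "client_address": fields[2],     # %>a
    "result_code": fields[3],        # %Ss/%03>Hs - Squid status / HTTP status
    "bytes": int(fields[4]),         # %<st
    "request_method": fields[5],     # %rm
    "url": fields[6],                # %ru
    "user": fields[7],               # %[un      - "-" if unauthenticated
    "hierarchy": fields[8],          # %Sh/%<a   - hierarchy code / peer address
    "content_type": fields[9],       # %mt
}
print(record["result_code"])  # TCP_MISS/206
```

Each log line is exactly one such record, which is why several of them fused into one Kibana message is clearly wrong.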

We are NOT using any "multiline" options, just the default Filebeat configuration with the Squid module, which ships everything directly to Elasticsearch.
We also tried shipping the logs from Squid to Logstash but didn't find a solution for the multiple entries there either, and we want to end up using the Squid module anyway.
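For context, a minimal sketch of the setup described above. Host names and the port are placeholders, and the variable names follow the pattern of Filebeat's other syslog-based modules; check the `modules.d/squid.yml.disabled` shipped with your install for the exact names:

```yaml
# modules.d/squid.yml -- sketch of the stock module configuration
- module: squid
  log:
    enabled: true
    var.input: udp            # the module listens for Squid's UDP log stream
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9514     # placeholder port

# filebeat.yml -- output goes straight to Elasticsearch, no Logstash
# output.elasticsearch:
#   hosts: ["https://elasticsearch.example.org:9200"]
```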

Is there a more precise guide for the Squid settings than this one:

A similar question was asked here:

Moreover, the Filebeat syslog is full of this type of error message:

> filebeat[38211]: 2020-12-14T16:57:15.980+0100#011ERROR#011[processor.javascript]#011console/console.go:54#011extract_page failed for 'www.google.com:443'

Which [processor.javascript] extract_page process causes this?
These errors in fact come from the Squid module; they of course don't appear when using Logstash.

Example of the multiple logs per entry:

This smells like a bug and reproduces the other community user's case. Would you mind opening a GitHub issue for Filebeat and copying a couple of faulty log lines into it?

I have an update after reading the Squid documentation (again and again):

> ...being UDP this module may drop packets when the network is under load or congested.

So we changed the Squid log output from UDP to TCP, and now there is only 1 message per entry instead of 5 to 20. As an indication of the load: we have around 15,000 to 20,000 entries per 5 minutes.
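For anyone following along, the change on the Squid side amounts to a one-line edit in squid.conf, swapping the logging module in the `access_log` directive. Host and port below are placeholders:

```
# squid.conf -- host and port are placeholders
# before: UDP logging (datagrams can be dropped or batched under load)
#access_log udp://logs.example.org:9514 squid

# after: TCP logging (a reliable byte stream, delivered line by line)
access_log tcp://logs.example.org:9514 squid
```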

But:
The "[processor.javascript] extract_page" error still exists.
It writes one error message per log entry, so around 15-20k error messages in syslog per 5 minutes.
Did you mean this error for the GitHub issue? Because I couldn't find a single entry regarding this error, no matter which search engine I used.

Hi. Great news that you got it working with TCP.

The extract_page error you are seeing should be a debug message; we accidentally left it as an error. This will be fixed in a newer release.


Turns out that Squid can send multiple log lines in a single UDP packet:

http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-logging-to-UDP-logs-multiple-lines-at-the-same-time-td4685384.html

Beats' udp input, meanwhile, treats each packet as a single message and doesn't split it at newlines.
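A small Python sketch of the mismatch, with made-up log lines: one datagram carrying several newline-separated log lines becomes one event under the current behavior, while splitting at newlines (what a `line_delimiter` option would do) recovers the individual lines:

```python
# A single UDP datagram as Squid's UDP logging helper may emit it:
# several access-log lines packed together, newline-separated.
datagram = (
    b"1607961779.062 7 0.0.0.0 TCP_MISS/206 5004 GET http://a - HIER_DIRECT/0.0.0.0 text/html\n"
    b"1607961779.063 9 0.0.0.0 TCP_HIT/200 1234 GET http://b - NONE/- text/html\n"
)

# Current behavior: one event per datagram, newlines and all.
one_event = datagram.decode()

# Desired behavior: one event per log line.
events = [line for line in datagram.decode().split("\n") if line]
print(len(events))  # 2
```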

I've created an enhancement request: Filebeat udp input: Support line_delimiter option · Issue #23195 · elastic/beats · GitHub


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.