Filebeat with CLOGs on FreeBSD / PFsense

Continuing the discussion from Filebeat on FreeBSD / PFsense:

Has there been any solution for dealing with the CLOG format? I'm running pfSense 2.4.4, which sits on FreeBSD 11.2, and I'm running into the same issue: logs get shipped once Filebeat starts, then it hangs until I kill it and restart it.

# filebeat version
filebeat version 6.4.2 (amd64), libbeat 6.4.2 [unknown built unknown]
filebeat -c /usr/local/etc/filebeat/filebeat.yml -e -d "*"
2019-02-11T09:09:37.901-0500  INFO  instance/beat.go:544  Home path: [/root] Config path: [/root] Data path: [/root/data] Logs path: [/root/logs]
2019-02-11T09:09:37.901-0500  DEBUG [beat]  instance/beat.go:571  Beat metadata path: /root/data/meta.json
2019-02-11T09:09:37.902-0500  INFO  instance/beat.go:551  Beat UUID: d7be8204-4c79-4edd-805c-f3724c5626af
2019-02-11T09:09:37.902-0500  DEBUG [seccomp] seccomp/seccomp.go:88 Syscall filtering is only supported on Linux
2019-02-11T09:09:37.903-0500  INFO  [beat]  instance/beat.go:768  Beat info {"system_info": {"beat": {"path": {"config": "/root", "data": "/root/data", "home": "/root", "logs": "/root/logs"}, "type": "filebeat", "uuid": "d7be8204-4c79-4edd-805c-f3724c5626af"}}}
2019-02-11T09:09:37.903-0500  INFO  [beat]  instance/beat.go:777  Build info  {"system_info": {"build": {"commit": "unknown", "libbeat": "6.4.2", "time": "1754-08-30T22:43:41.128Z", "version": "6.4.2"}}}
2019-02-11T09:09:37.903-0500  INFO  [beat]  instance/beat.go:780  Go runtime info {"system_info": {"go": {"os":"freebsd","arch":"amd64","max_procs":1,"version":"go1.11.5"}}}
2019-02-11T09:09:37.904-0500  INFO  instance/beat.go:273  Setup Beat: filebeat; Version: 6.4.2
...
2019-02-11T09:09:37.907-0500  INFO  pipeline/module.go:98 Beat name: pfsense
2019-02-11T09:09:37.910-0500  INFO  instance/beat.go:367  filebeat start running.
2019-02-11T09:09:37.911-0500  DEBUG [registrar] registrar/registrar.go:114  Registry file set to: /root/data/registry
2019-02-11T09:09:37.911-0500  INFO  registrar/registrar.go:134  Loading registrar data from /root/data/registry
2019-02-11T09:09:37.911-0500  INFO  registrar/registrar.go:141  States Loaded from registrar: 1
2019-02-11T09:09:37.912-0500  WARN  beater/filebeat.go:371  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-02-11T09:09:37.912-0500  INFO  crawler/crawler.go:72 Loading Inputs: 1
...
2019-02-11T09:09:37.915-0500  INFO  [monitoring]  log/log.go:114  Starting metrics logging every 30s
...
2019-02-11T09:09:37.921-0500  INFO  log/input.go:138  Configured paths: [/var/log/system.log]
2019-02-11T09:09:37.921-0500  INFO  input/input.go:114  Starting input of type: log; ID: 5147500196639063160 
2019-02-11T09:09:37.921-0500  DEBUG [cfgfile] cfgfile/reload.go:109 Checking module configs from: /usr/local/etc/filebeat/modules.d/*.yml
2019-02-11T09:09:37.922-0500  DEBUG [cfgfile] cfgfile/reload.go:123 Number of module configs found: 0
2019-02-11T09:09:37.922-0500  INFO  crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
...
2019-02-11T09:09:37.926-0500  INFO  log/harvester.go:251  Harvester started for file: /var/log/system.log
2019-02-11T09:09:37.927-0500  INFO  cfgfile/reload.go:141 Config reloader started
2019-02-11T09:09:37.928-0500  DEBUG [acker] beater/acker.go:64  stateful ack  {"count": 1}
2019-02-11T09:09:37.931-0500  DEBUG [publish] pipeline/processor.go:308 Publish event: {
  "@timestamp": "2019-02-11T14:09:37.931Z",
<log data ...>
}
2019-02-11T09:09:37.933-0500  DEBUG [cfgfile] cfgfile/reload.go:167 Scan for new config files
...
2019-02-11T09:09:37.945-0500  INFO  cfgfile/reload.go:196 Loading of config files completed.
2019-02-11T09:09:37.948-0500  DEBUG [registrar] registrar/registrar.go:393  Registry file updated. 1 states written.
2019-02-11T09:09:38.947-0500  INFO  pipeline/output.go:95 Connecting to backoff(async(tcp://10.4.4.15:5044))
2019-02-11T09:09:38.948-0500  DEBUG [logstash]  logstash/async.go:111 connect
2019-02-11T09:09:38.953-0500  INFO  pipeline/output.go:105  Connection to backoff(async(tcp://10.4.4.15:5044)) established
2019-02-11T09:09:38.961-0500  DEBUG [logstash]  logstash/async.go:159 4 events out of 4 events sent to logstash host 10.4.4.15:5044. Continue sending
...
^C2019-02-11T09:09:56.603-0500  DEBUG [service] service/service.go:51 Received sigterm/sigint, stopping
2019-02-11T09:09:56.605-0500  INFO  beater/filebeat.go:437  Stopping filebeat
2019-02-11T09:09:56.605-0500  INFO  crawler/crawler.go:139  Stopping Crawler
2019-02-11T09:09:56.605-0500  INFO  crawler/crawler.go:149  Stopping 1 inputs
2019-02-11T09:09:56.605-0500  INFO  cfgfile/reload.go:199 Dynamic config reloader stopped
2019-02-11T09:09:56.605-0500  INFO  input/input.go:149  input ticker stopped
2019-02-11T09:09:56.605-0500  INFO  input/input.go:167  Stopping Input: 5147500196639063160
2019-02-11T09:09:56.606-0500  INFO  log/harvester.go:272  Reader was closed: /var/log/system.log. Closing.
...
2019-02-11T09:09:56.607-0500  INFO  crawler/crawler.go:165  Crawler stopped
2019-02-11T09:09:56.607-0500  INFO  registrar/registrar.go:356  Stopping Registrar
2019-02-11T09:09:56.607-0500  INFO  registrar/registrar.go:282  Ending Registrar
...
2019-02-11T09:09:56.622-0500  ERROR instance/metrics.go:77  Error while getting memory usage: error retrieving process stats: cannot find matching process for pid=28986
2019-02-11T09:09:56.622-0500  DEBUG [processes] process/process.go:460  Skip process pid=28986: error getting process state for pid=28986: no such process
2019-02-11T09:09:56.622-0500  ERROR instance/metrics.go:112 Error retrieving CPU percentages: error retrieving process stats: cannot find matching process for pid=28986
2019-02-11T09:09:56.622-0500  INFO  [monitoring]  log/log.go:149  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"info":{"ephemeral_id":"dcbbf468-7d93-4deb-a0c2-07e63fba1a73","uptime":{"ms":18739}},"memstats":{"gc_next":6484192,"memory_alloc":3265984,"memory_total":7510568}},"filebeat":{"events":{"added":6,"done":6},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":4,"batches":1,"total":4},"read":{"bytes":6},"type":"logstash","write":{"bytes":512}},"pipeline":{"clients":0,"events":{"active":0,"filtered":2,"published":4,"retry":4,"total":6},"queue":{"acked":4}}},"registrar":{"states":{"current":1,"update":6},"writes":{"success":4,"total":4}},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2019-02-11T09:09:56.623-0500  INFO  [monitoring]  log/log.go:150  Uptime: 18.74156981s

Cut out a bunch of the debug messages so I could post.

No direct solution, but over at Netgate/pfSense they've finally realized clog is a pain in the ***: https://redmine.pfsense.org/issues/8350

Todo #8350: Remove clog in favor of standard syslogd or syslogd alternative with rotation via newsyslog or logrotate (redmine.pfsense.org)

I now use the Filebeat syslog input and have pfSense syslog to localhost. Works perfectly.
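
For anyone wanting to copy that setup, here's a minimal filebeat.yml sketch, not a drop-in config: the listening port (5140) and the pfSense remote-logging target (127.0.0.1) are assumptions you'd set yourself under Status > System Logs > Settings > Remote Logging; the Logstash host is the one from the log output above.

filebeat.inputs:
# Syslog input (available since Filebeat 6.3): receive the messages
# pfSense forwards to localhost instead of tailing clog files.
- type: syslog
  protocol.udp:
    # Assumed port; match whatever you configure in pfSense's
    # remote logging settings.
    host: "localhost:5140"

output.logstash:
  hosts: ["10.4.4.15:5044"]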

Ha. At least they acknowledged it. Hopefully it comes out in 2.4.5 and doesn't get pushed again. Your idea sounds better than mine. I was just going to do something like

clog -f /var/log/system.log >> /var/log/system_plaintext.log &

Is syslog only managing system logs for you, then? What about your filter logs (or any others you might care about)?

This is janky at best, but it's what I chose to go with. I do this for any CLOG-formatted log that I want to send to Logstash; the matching Filebeat input is sketched below.
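
A minimal filebeat.yml log input to go with it, assuming the _plaintext.log naming from the command above (the glob pattern is my own convention, not something pfSense provides):

filebeat.inputs:
# Tail the plaintext copies written by the backgrounded
# `clog -f ... >> ..._plaintext.log` processes, not the circular
# clog files themselves (which is what made Filebeat hang).
- type: log
  paths:
    # Assumed naming convention matching the command above.
    - /var/log/*_plaintext.log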
