Filebeat eats all disk space when Elastic is down

Hello All,

Our Elastic instance is sometimes down due to high load and human error (we are still in the middle of getting everything set up properly). When that happens, the disks fill up with Filebeat's own logs: Filebeat logs a warning that it can't reach Elastic, then picks that warning up as a new event and reports it again, and it does this so frequently that our 300 GB log partition can fill up in a few hours (see the sketch after the example below).

Example (fake hostnames & IPs):

May 11 10:23:56 xyz-serverA filebeat[28635]: 2022-05-11T10:23:56.042+0300#011WARN#011[elasticsearch]#011elasticsearch/client.go:407#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc096f6fc755a2936, ext:7430630000880622, loc:(*time.Location)(0x5b230e0)}, Meta:null, Fields:{"agent":{"ephemeral_id":"d20244e8-efd3-4690-979c-cb978c1930dc","hostname":"xyz-serverA.mydomain","id":"af3164e6-4dd2-4d29-be80-e4e8e3c895f3","name":"xyz-serverA","type":"filebeat","version":"7.8.0"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"xyz-serverA.mydomain","id":"95c148e330fb4485bfca70dcb0265b9e","ip":["1.2.3.4","fe80::32fd:65ff:fe3a:5979","172.17.0.1","3.4.60.22","3.4.0.1","3.4.0.3","3.4.26.85","3.4.7.112","3.4.92.166","fe80::ecee:eeff:feee:eeee","fe80::ecee:eeff:feee:eeee","fe80::ecee:eeff:feee:eeee","fe80::ecee:eeff:feee:eeee","fe80::ecee:eeff:feee:eeee","169.254.25.10"],"mac":["54:13:10:8d:ef:e3","54:13:10:8d:ef:e4","fa:16:3e:0c:8f:48","fa:16:3e:0c:8f:48","30:fd:65:3a:59:7b","30:fd:65:3a:59:7c","fa:16:3e:0c:8f:48","02:42:20:81:9a:cf","26:82:27:0a:60:fe","ee:ee:ee:ee:ee:ee","ee:ee:ee:ee:ee:ee","ee:ee:ee:ee:ee:ee","ee:ee:ee:ee:ee:ee","ee:ee:ee:ee:ee:ee","0a:4c:78:0e:3a:56"],"name":"xyz-serverA","os":{"codename":"bionic","family":"debian","kernel":"4.15.0-45-generic","name":"Ubuntu","platform":"ubuntu","version":"18.04.2 LTS (Bionic Beaver)"}},"input":{"type":"log"},"log":{"file":{"path":"/var/log/syslog"},"offset":14435005387},"message":"May 11 10:21:05 xyz-serverA filebeat[28635]: 2022-05-11T10:21:05.504+0300#011WARN#011[elasticsearch]#011elasticsearch/client.go:407#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc096f6d1707bd6ae, ext:7430457919201617, loc:(*time.Location)(0x5b230e0)}, Meta:null, Fields:{\"agent\":{\"ephemeral_id\":\"d20244e8-efd3-4690-979c-cb978c1930dc\",\"hostname\":\"xyz-serverA.mydomain\",\"id\":\"af3164e6-4dd2-4d29-be80-e4e8e3c895f3\",\"name\":\"xyz-serverA\",\"type\":\"filebeat\",\"version\":\"7.8.0\"},\"ecs\":{\"version\":\"1.5.0\"},\"host\":{\"architecture\":\"x86_64\",\"containerized\":false,\"hostname\":\"xyz-serverA.mydomain\",\"id\":\"95c148e330fb4485bfca70dcb0265b9e\",\"ip\":[\"1.2.3.4\",\"fe80::32fd:65ff:fe3a:5979\",\"172.17.0.1\",\"3.4.60.22\",\"3.4.0.1\",\"3.4.0.3\",\"3.4.26.85\",\"3.4.7.112\",\"3.4.92.166\",\"fe80::ecee:eeff:feee:eeee\",\"fe80::ecee:eeff:feee:eeee\",\"fe80::ecee:eeff:feee:eeee\",\"fe80::ecee:eeff:feee:eeee\",\"fe80::ecee:eeff:feee:eeee\",\"169.254.25.10\"],\"mac\":[\"54:13:10:8d:ef:e3\",\"54:13:10:8d:ef:e4\",\"fa:16:3e:0c:8f:48\",\"fa:16:3e:0c:8f:48\",\"30:fd:65:3a:59:7b\",\"30:fd:65:3a:59:7c\",\"fa:16:3e:0c:8f:48\",\"02:42:20:81:9a:cf\",\"26:82:27:0a:60:fe\",\"ee:ee:ee:ee:ee:ee\",\"ee:ee:ee:ee:ee:ee\",\"ee:ee:ee:ee:ee:ee\",\"ee:ee:ee:ee:ee:ee\",\"ee:ee:ee:ee:ee:ee\",\"0a:4c:78:0e:3a:56\"],\"name\":\"xyz-serverA\",\"os\":{\"codename\":\"bionic\",\"family\":\"debian\",\"kernel\":\"4.15.0-45-generic\",\"name\":\"Ubuntu\",\"platform\":\"ubuntu\",\"version\":\"18.04.2 LTS (Bionic Beaver)\"}},\"input\":{\"type\":\"log\"},\"log\":{\"file\":{\"path\":\"/var/log/syslog\"},\"offset\":14239931266},\"message\":\"May 11 10:18:21 xyz-serverA filebeat[28635]: 2022-05-11T10:18:21.375+0300#011WARN#011[elasticsearch]#011elasticsearch/client.go:407#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc096f6a7efc5f83f, ext:7430291907282667, loc:(*time.Location)(0x5b230e0)}, Meta:null, 
Fields:{\\\"agent\\\":{\\\"ephemeral_id\\\":\\\"d20244e8-efd3-4690-979c-cb978c1930dc\\\",\\\"hostname\\\":\\\"xyz-serverA.mydomain\\\",\\\"id\\\":\\\"af3164e6-4dd2-4d29-be80-e4e8e3c895f3\\\",\\\"name\\\":\\\"xyz-serverA\\\",\\\"type\\\":\\\"filebeat\\\",\\\"version\\\":\\\"7.8.0\\\"},\\\"ecs\\\":{\\\"version\\\":\\\"1.5.0\\\"},\\\"host\\\":{\\\"architecture\\\":\\\"x86_64\\\",\\\"containerized\\\":false,\\\"hostname\\\":\\\"xyz-serverA.mydomain\\\",\\\"id\\\":\\\"95c148e330fb4485bfca70dcb0265b9e\\\",\\\"ip\\\":[\\\"1.2.3.4\\\",\\\"fe80::32fd:65ff:fe3a:5979\\\",\\\"172.17.0.1\\\",\\\"3.4.60.22\\\",\\\"3.4.0.1\\\",\\\"3.4.0.3\\\",\\\"3.4.26.85\\\",\\\"3.4.7.112\\\",\\\"3.4.92.166\\\",\\\"fe80::ecee:eeff:feee:eeee\\\",\\\"fe80::ecee:eeff:feee:eeee\\\",\\\"fe80::ecee:eeff:feee:eeee\\\",\\\"fe80::ecee:eeff:feee:eeee\\\",\\\"fe80::ecee:eeff:feee:eeee\\\",\\\"169.254.25.10\\\"],\\\"mac\\\":[\\\"54:13:10:8d:ef:e3\\\",\\\"54:13:10:8d:ef:e4\\\",\\\"fa:16:3e:0c:8f:48\\\",\\\"fa:16:3e:0c:8f:48\\\",\\\"30:fd:65:3a:59:7b\\\",\\\"30:fd:65:3a:59:7c\\\",\\\"fa:16:3e:0c:8f:48\\\",\\\"02:42:20:81:9a:cf\\\",\\\"26:82:27:0a:60:fe\\\",\\\"ee:ee:ee:ee:ee:ee\\\",\\\"ee:ee:ee:ee:ee:ee\\\",\\\"ee:ee:ee:ee:ee:ee\\\",\\\"ee:ee:ee:ee:ee:ee\\\",\\\"ee:ee:ee:ee:ee:ee\\\",\\\"0a:4c:78:0e:3a:56\\\"],\\\"name\\\":\\\"xyz-serverA\\\",\\\"os\\\":{\\\"codename\\\":\\\"bionic\\\",\\\"family\\\":\\\"debian\\\",\\\"kernel\\\":\\\"4.15.0-45-generic\\\",\\\"name\\\":\\\"Ubuntu\\\",\\\"platform\\\":\\\"ubuntu\\\",\\\"version\\\":\\\"18.04.2 LTS (Bionic Beaver)\\\"}},\\\"input\\\":{\\\"type\\\":\\\"log\\\"},\\\"log\\\":{\\\"file\\\":{\\\"path\\\":\\\"/var/log/syslog\\\"},\\\"offset\\\":14046097167},\\\"message\\\":\\\"May 11 10:15:32 xyz-serverA filebeat[28635]: 2022-05-11T10:15:32.377+0300#011WARN#011[elasticsearch]#011elasticsearch/client.go:407#011Cannot index event ... (a lot more)

I fully agree that we should stabilize Elastic, but in the meantime I'm thinking about how to stop Filebeat from being so "proactive".

To me, we have two options: a) stop reporting Filebeat WARN messages - not great, since we lose visibility; b) log Filebeat's own events into files that are not harvested by Filebeat - we would still get plenty of messages, but many, many times fewer than we have now. I sketch what I mean for option b) below.
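
For option b), I have something like this in mind (a rough sketch only; the paths and the glob are assumptions about our setup and untested):

logging.to_files: true
logging.files:
  path: /var/log/filebeat      # write Filebeat's own log here instead of letting it end up in syslog
  name: filebeat

filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog
      - /var/log/*.log
    exclude_files: ['filebeat']   # never harvest Filebeat's own log files, even if a glob matches them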

I've searched through many topics here but can't find anything that fits, so I'd like to raise this separately.

My question is: how do I stop Filebeat from being so "talkative" when Elastic is down, so that it doesn't consume hundreds of GBs of disk space when this happens?

Please advise.

Thanks.

Hi,

I suggest configuring the logging output as below and reducing keepfiles to 2 or 3 (the default is 7).

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 2        # reduced from the default of 7
  permissions: 0640
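
If you want a harder cap on the space those files can take, you can also set the rotation size explicitly; keepfiles times rotateeverybytes then roughly bounds how much disk Filebeat's own log can use (10485760 bytes, i.e. 10 MB, is the documented default, so 2 files is about 20 MB):

logging.files:
  rotateeverybytes: 10485760   # rotate the log file once it reaches ~10 MB (the default)

One caveat: if Filebeat is started by systemd, the service may pass the -e flag, which as far as I remember logs to stderr and disables file/syslog output, so the settings above would be ignored and everything would keep flowing into journald/syslog. Worth checking the unit file as well.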


This helps. Thanks.
