Filebeat stops after some time while reading YARN logs

I am using Filebeat to ship Hadoop YARN log data. The version is filebeat-7.5.1.

```yaml
filebeat.inputs:
  - type: log
    enabled: true

    paths:
      - /root/hadoop/userlogs/**/stderr
      - /root/ingest-logs/ingest.log
      - /root/root-logs/root.log
```

This is the configuration mentioned in filebeat.yml.

```
2020-04-01T16:12:54.871+0530 INFO kafka/log.go:53 producer/broker/1 maximum request accumulated, waiting for space
2020-04-01T16:12:54.896+0530 INFO kafka/log.go:53 producer/broker/1 maximum request accumulated, waiting for space
2020-04-01T16:12:55.876+0530 INFO kafka/log.go:53 producer/broker/1 maximum request accumulated, waiting for space
2020-04-01T16:13:05.950+0530 INFO beater/filebeat.go:443 Stopping filebeat
2020-04-01T16:13:05.951+0530 INFO crawler/crawler.go:139 Stopping Crawler
2020-04-01T16:13:05.956+0530 INFO crawler/crawler.go:149 Stopping 1 inputs
2020-04-01T16:13:05.962+0530 INFO input/input.go:149 input ticker stopped
2020-04-01T16:13:05.962+0530 INFO input/input.go:167 Stopping Input: 4626669614810496825
2020-04-01T16:13:05.962+0530 INFO log/harvester.go:272 Reader was closed: /scratch/oba/logs/hadoop/userlogs/application_1582785823666_2626/container_1582785823666_2626_01_000002/stderr. Closing.
2020-04-01T16:13:05.962+0530 INFO log/harvester.go:272 Reader was closed: /root/ingest-logs/ingest.log. Closing.
2020-04-01T16:13:05.962+0530 INFO log/harvester.go:272 Reader was closed: /root/hadoop/userlogs/application_1582785823666_2626/container_1582785823666_2626_01_000001/stderr. Closing.
2020-04-01T16:13:05.961+0530 INFO cfgfile/reload.go:229 Dynamic config reloader stopped
2020-04-01T16:13:05.971+0530 INFO crawler/crawler.go:165 Crawler stopped
2020-04-01T16:13:05.971+0530 INFO registrar/registrar.go:367 Stopping Registrar
2020-04-01T16:13:05.971+0530 INFO registrar/registrar.go:293 Ending Registrar
```

I have a concern about the rolling logs in YARN and inactive log files. Does this affect the working of Filebeat? Can you please suggest the changes required in Filebeat to keep it alive? (See the sketch below for the inactivity-related options I am looking at.)
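For reference, the Filebeat log input has options that control how harvesters treat files that stop growing; an inactive file only closes its harvester, it does not stop the Filebeat process itself. A minimal sketch with illustrative values (not a confirmed fix):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /root/hadoop/userlogs/**/stderr
    # Close the harvester once the file has not changed for this long
    # (default 5m); it is reopened automatically if the file grows again.
    close_inactive: 10m
    # Ignore files whose last modification is older than this
    # (must be larger than close_inactive).
    ignore_older: 24h
```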

Hey @ucguy4u, welcome to discuss :slight_smile:

These log lines seem to indicate some kind of overload on your Kafka server. I guess you are using the Kafka output; is your Kafka server having any kind of problem?
How are you reading the messages from Kafka?
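For what it's worth, the `maximum request accumulated, waiting for space` lines come from the Kafka client library inside Filebeat and indicate backpressure: requests are queuing toward the broker faster than they are acknowledged. The Kafka output exposes tuning options for this; the values below are an illustrative sketch, not recommended settings:

```yaml
output.kafka:
  hosts: ["<IP>:9092"]
  topic: "bdp"
  # Maximum number of events bundled into one Kafka request (default 2048).
  bulk_max_size: 2048
  # ACK level: 1 waits only for the local broker commit (default 1).
  required_acks: 1
  # Compress batches to reduce bytes in flight to the broker (default gzip).
  compression: gzip
```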

These log entries seem to indicate that Filebeat is being explicitly stopped.

How are you starting filebeat?

Yes, logs are being shipped to Kafka, and Logstash reads from Kafka.
The Kafka output is configured as:

```yaml
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["<IP>:9092"]
  topic: "bdp"
  timeout: 60
  codec.json:
    pretty: false
```

I am starting Filebeat as:

```
./filebeat -e
```

And filebeat stops by itself without logging any error?

It starts with no errors and stops after some time.
I have a concern about inactive log files: our application sometimes stops appending logs to a log file (making it inactive). Does that cause Filebeat to stop?
Also, YARN logs rotate with different container IDs. Do we need to handle that explicitly in Filebeat? (A sketch of the rotation-related settings follows.)
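On the rotation question, a note: the `**` glob in the paths already picks up new container directories as they are created, and by default Filebeat closes and forgets files that get deleted. A sketch making those defaults explicit (the values shown are the documented defaults, not required changes):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /root/hadoop/userlogs/**/stderr
    # Close the harvester when the file is removed (default true).
    close_removed: true
    # Drop registry state for removed files so old container
    # directories do not accumulate entries (default true).
    clean_removed: true
    # Cap on concurrently open harvesters; 0 means unlimited (default).
    harvester_limit: 0
```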

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.