Filebeat creating too many log files and refusing to send JSON strings

  • I'm using Filebeat to ship my logs directly to an ES cluster.
  • The log file has one JSON object per line.
  • I'm also asking Filebeat to write its own logs in debug mode, with log rotation. Here are the logging settings:
logging.level: debug
logging.to_files: true
logging.files:
  path: /home/ubuntu
  name: filebeat.log
  keepfiles: 10
  rotateeverybytes: 10485760
  permissions: 0644
  • Here are the problems I'm facing:

  1. Too many log files are generated almost instantaneously, with suffixes .1, .2, .3, .4, etc. They all appear at essentially the same time, within microseconds of each other, and all have the same size (8.1K in my case).

  2. Filebeat isn't working. Here's the log output:

2018-02-14T08:47:34.871Z	INFO	[monitoring]	log/log.go:97	Starting metrics logging every 30s
2018-02-14T08:47:34.871Z	INFO	instance/beat.go:301	filebeat start running.
2018-02-14T08:47:34.871Z	DEBUG	[registrar]	registrar/registrar.go:88	Registry file set to: /var/lib/filebeat/registry
2018-02-14T08:47:34.871Z	INFO	registrar/registrar.go:108	Loading registrar data from /var/lib/filebeat/registry
2018-02-14T08:47:34.871Z	INFO	instance/beat.go:308	filebeat stopped.
2018-02-14T08:47:34.872Z	INFO	[monitoring]	log/log.go:132	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":0},"total":{"ticks":0,"time":8,"value":0},"user":{"ticks":0,"time":8}},"info":{"ephemeral_id":"a61c5f21-48ee-42b1-befb-6c5769468c3d","uptime":{"ms":6}},"memstats":{"gc_next":4473924,"memory_alloc":2859656,"memory_total":2859656,"rss":20254720}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":0.17,"15":0.1,"5":0.18,"norm":{"1":0.085,"15":0.05,"5":0.09}}}}}}
2018-02-14T08:47:34.872Z	INFO	[monitoring]	log/log.go:133	Uptime: 7.429404ms
2018-02-14T08:47:34.872Z	INFO	[monitoring]	log/log.go:110	Stopping metrics logging.
2018-02-14T08:47:34.874Z	ERROR	instance/beat.go:667	Exiting: Could not start registrar: Error loading state: Error decoding states: json: cannot unmarshal object into Go value of type []file.State
  • Here's my YAML file:
... <omitted> 

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/ubuntu/xxxxxx/info.log

  #json:
  #  keys_under_root: true
  #  overwrite_keys: true
  #  add_error_key: true

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  reload.period: 5s

#==================== Elasticsearch template setting ==========================

setup.template.name: "content_portal_info"
setup.template.pattern: "content_portal_info"

setup.template.settings:
  index.number_of_shards: 5

#  index.codec: best_compression
#  _source.enabled: false


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: "content_portal"

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging



#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  host: "xxx.yyyy.com:5601"
  username: "elastic"
  password: "xxxxxx"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xxx.yyy:9200"]
  username: "elastic"
  password: "xxxxx"
  index: "content-portal-service_%{+yyyy.MM.dd}"


#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /home/ubuntu
  name: filebeat.log
  keepfiles: 10
  rotateeverybytes: 10485760
  permissions: 0644

  • I have been trying to get this to work for almost two days now without any luck, and I've tried multiple combinations. ES is reachable from the machine I'm running Filebeat on.
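For the JSON part of the title: the json decoding options under the prospector are still commented out above, so each line would be shipped as a plain message string. Before enabling them, a quick way to confirm that every line of the source log really is standalone JSON is a minimal Python 3 sketch like the one below (the path is the one from the prospector config; adjust as needed):

#!/usr/bin/env python3
"""Check that every line of the Filebeat source log is valid standalone JSON."""
import json
import sys

LOG_PATH = "/home/ubuntu/xxxxxx/info.log"  # same path as in the prospector config

bad_lines = 0
with open(LOG_PATH, encoding="utf-8") as fh:
    for lineno, line in enumerate(fh, start=1):
        line = line.strip()
        if not line:
            continue  # ignore empty lines
        try:
            json.loads(line)
        except json.JSONDecodeError as exc:
            bad_lines += 1
            print(f"line {lineno}: not valid JSON ({exc})", file=sys.stderr)

print(f"done: {bad_lines} invalid line(s)")
sys.exit(1 if bad_lines else 0)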

It seems like the problem is that the registry file is corrupt or invalid. Can you share the contents of /var/lib/filebeat/registry? Then delete it and try restarting Filebeat.
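The error in your log ("cannot unmarshal object into Go value of type []file.State") says Filebeat expected a JSON array of file states but found a JSON object at the top level. If it helps, here is a minimal Python 3 sketch (assuming the registry is small enough to read into memory) that reports the registry's top-level type before you delete it:

#!/usr/bin/env python3
"""Report the top-level JSON type of the Filebeat registry file.

The error "cannot unmarshal object into Go value of type []file.State"
means Filebeat expected a JSON array of states but found an object.
"""
import json

REGISTRY_PATH = "/var/lib/filebeat/registry"  # path from the log output above

with open(REGISTRY_PATH, encoding="utf-8") as fh:
    data = json.load(fh)

if isinstance(data, list):
    print(f"OK: registry is a JSON array with {len(data)} state entries")
else:
    print(f"Problem: registry top-level type is {type(data).__name__}, "
          "but Filebeat expects a JSON array of states")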
