log/log.go:145 Non-zero metrics in the last 30s

Hello,

I am new to the ELK stack and have configured Filebeat to send logs directly to Elasticsearch. The problem is that when I run Filebeat in debug mode with the command below, I only get the log message 'Non-zero metrics in the last 30s'. It looks like Filebeat is not picking up any data.

filebeat -e -c filebeat.yml -d publish

Here is the full log message:

2020-05-09T17:03:12.425-0400 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":31}},"total":{"ticks":110,"time":{"ms":117},"value":110},"user":{"ticks":80,"time":{"ms":86}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"17a670cb-5362-4302-8f8a-4aeeb0968463","uptime":{"ms":30086}},"memstats":{"gc_next":9179696,"memory_alloc":4627840,"memory_total":11617440,"rss":32698368},"runtime":{"goroutines":22}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}

Any help is much appreciated. Thanks in advance.

Regards,
Sai Charan

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/log/snort/*.log

    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.0.7:9200"]
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

Hey @sai_charan, and welcome :slight_smile:

Yes, from these metrics it can be seen that Filebeat is not harvesting any files ("filebeat":{"harvester":{"open_files":0,"running":0}}).
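
Pretty-printed, that fragment reads:

  "filebeat": {
    "harvester": {
      "open_files": 0,
      "running": 0
    }
  }

Both counters stay at zero as long as no file is being read.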

Could you share your configuration and the list of log files you want to collect?

Oh sorry, I wrote too fast; I see you already shared your configuration in a second comment. I have edited it so it is correctly formatted.

I see you are trying to collect the logs in /var/log/snort/*.log. Are you running filebeat as root? Is it possible that filebeat doesn't have permissions to read these files?
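
For reference, a quick way to check both things from a shell (just a sketch, assuming the path from your config):

  # Which user is the filebeat process running as?
  ps -o user= -C filebeat

  # Who owns the log files, and what are their permissions?
  ls -l /var/log/snort/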

Hey @jsoriano. Thanks for formatting the configuration correctly. Yes, you are right: I am running Filebeat as root and trying to collect logs from /var/log/snort/*.log.

Please let me know how I can overcome this problem and harvest the logs I need. Looking forward to hearing from you.

Ok, if filebeat is running as root it should be able to read these files. Could you share the Filebeat startup logs?

Here are the Filebeat startup logs:

[root@localhost ~]# filebeat -e
2020-05-11T17:45:24.513-0400 INFO instance/beat.go:610 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2020-05-11T17:45:24.513-0400 INFO instance/beat.go:618 Beat ID: 25003e46-8365-4bdf-94e1-1b5121254ff0
2020-05-11T17:45:24.521-0400 INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-05-11T17:45:24.521-0400 INFO [beat] instance/beat.go:941 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "25003e46-8365-4bdf-94e1-1b5121254ff0"}}}
2020-05-11T17:45:24.521-0400 INFO [beat] instance/beat.go:950 Build info {"system_info": {"build": {"commit": "a9c141434cd6b25d7a74a9c770be6b70643dc767", "libbeat": "7.5.2", "time": "2020-01-15T11:13:22.000Z", "version": "7.5.2"}}}
2020-05-11T17:45:24.521-0400 INFO [beat] instance/beat.go:953 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":1,"version":"go1.12.12"}}}
2020-05-11T17:45:24.522-0400 INFO [beat] instance/beat.go:957 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-05-11T17:38:19-04:00","containerized":false,"name":"localhost.localdomain","ip":["127.0.0.1/8","::1/128","192.168.22.138/24","fe80::9fe3:d8e6:2641:2576/64"],"kernel_version":"3.10.0-1062.18.1.el7.x86_64","mac":["00:0c:29:f9:f6:8d"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":7,"patch":1908,"codename":"Core"},"timezone":"EDT","timezone_offset_sec":-14400,"id":"835a35f481844f0db81225a737747f02"}}}
2020-05-11T17:45:24.523-0400 INFO [beat] instance/beat.go:986 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/root", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 1380, "ppid": 1350, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-05-11T17:45:24.250-0400"}}}
2020-05-11T17:45:24.523-0400 INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-05-11T17:45:24.523-0400 INFO [index-management] idxmgmt/std.go:182 Set output.elasticsearch.index to 'filebeat-7.5.2' as ILM is enabled.
2020-05-11T17:45:24.523-0400 INFO elasticsearch/client.go:171 Elasticsearch url: http://192.168.0.7:9200
2020-05-11T17:45:24.529-0400 INFO [publisher] pipeline/module.go:97 Beat name: localhost.localdomain
2020-05-11T17:45:24.530-0400 INFO instance/beat.go:429 filebeat start running.
2020-05-11T17:45:24.531-0400 INFO registrar/registrar.go:145 Loading registrar data from /var/lib/filebeat/registry/filebeat/data.json
2020-05-11T17:45:24.531-0400 INFO registrar/registrar.go:152 States Loaded from registrar: 3
2020-05-11T17:45:24.531-0400 INFO crawler/crawler.go:72 Loading Inputs: 1
2020-05-11T17:45:24.532-0400 INFO log/input.go:152 Configured paths: [/var/log/snort/*.log]
2020-05-11T17:45:24.532-0400 INFO input/input.go:114 Starting input of type: log; ID: 1567643895462822589
2020-05-11T17:45:24.532-0400 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-05-11T17:45:24.534-0400 INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-05-11T17:45:24.534-0400 INFO cfgfile/reload.go:171 Config reloader started
2020-05-11T17:45:24.535-0400 INFO cfgfile/reload.go:226 Loading of config files completed.
2020-05-11T17:45:24.537-0400 INFO add_cloud_metadata/add_cloud_metadata.go:89 add_cloud_metadata: hosting provider type not detected.
2020-05-11T17:45:54.543-0400 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":110,"time":{"ms":119}},"total":{"ticks":150,"time":{"ms":162},"value":150},"user":{"ticks":40,"time":{"ms":43}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"7e397a06-8620-4e16-99aa-ac748ceebe8b","uptime":{"ms":30098}},"memstats":{"gc_next":8788480,"memory_alloc":4530760,"memory_total":11696992,"rss":34742272},"runtime":{"goroutines":22}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":1},"load":{"1":0.04,"15":0.07,"5":0.09,"norm":{"1":0.04,"15":0.07,"5":0.09}}}}}}

Yep, it starts the input, but it doesn't seem to find any files, which is weird. Could you run the command ls -l /var/log/snort/*.log in the same shell?

Is /var/log/snort a normal, local directory, or is it some virtual or remote filesystem?


Hi,

Yes. '/var/log/snort' is a normal local directory. Below is the command result:

[root@localhost snort]# ls -l /var/log/snort
total 12
-rw-r--r--. 1 root root 0 Apr 26 02:51 alert
-rw-------. 1 snort snort 744 Apr 30 04:39 snort.log.1588235919
-rw-------. 1 snort snort 564 Apr 30 06:27 snort.log.1588242472
-rw-------. 1 snort snort 744 May 9 08:07 snort.log.1589026020

I also tried your command 'ls -l /var/log/snort/*.log', but the path only exists up to the snort directory; everything past that is just the log files listed above. Below is the result:

[root@localhost snort]# ls -l /var/log/snort/*.log
ls: cannot access /var/log/snort/*.log: No such file or directory

Looking forward to hearing back. Thanks!

From what I see there, no file matches the pattern /var/log/snort/*.log. If you want to collect the logs from snort.log.1588235919, snort.log.1588242472 and so on, you should set a pattern that matches these files, like the following one:

  paths:
    #- /var/log/*.log
    - /var/log/snort/snort.log*

The original ls command only returns something if a file named *.log exists in this directory; to check the new pattern, try ls -l /var/log/snort/snort.log* instead.

Yeah, got it. I changed the path as mentioned, and now it works. Thanks to you :slightly_smiling_face:

2020-05-13T17:54:47.219-0400 INFO [monitoring] log/log.go:153 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":230,"time":{"ms":232}},"total":{"ticks":350,"time":{"ms":358},"value":350},"user":{"ticks":120,"time":{"ms":126}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"751c620f-fb0a-46bd-928b-156c0924e6cc","uptime":{"ms":282629}},"memstats":{"gc_next":8690080,"memory_alloc":5482680,"memory_total":14914968,"rss":34697216},"runtime":{"goroutines":12}},"filebeat":{"events":{"added":3,"done":3},"harvester":{"closed":3,"open_files":0,"running":0,"started":3}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0,"filtered":3,"total":3}}},"registrar":{"states":{"current":6,"update":3},"writes":{"success":4,"total":4}},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0.03,"5":0.01,"norm":{"1":0,"15":0.03,"5":0.01}}}}}}

But I don't see these logs in Kibana. Could you please help me figure out this one too?

Hi, I am facing the same problem. I am running Filebeat in Docker and trying to collect Docker logs.
Here is my Filebeat configuration file:

filebeat.inputs:
- type: container
  enabled: true
  paths: 
    - /var/lib/docker/containers/*/*.log

setup.kibana:
  enabled: true
  host: "kibana:5601"
  username: "elastic"
  password: "changeme"

output.elasticsearch:
  enabled: true
  hosts: ["https://elasticsearch:9200"]
  username: "elastic"
  password: "changeme"

My filebeat startup log looks like below:

2020-05-14T15:55:53.914+0800	INFO	instance/beat.go:622	Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2020-05-14T15:55:55.016+0800	INFO	instance/beat.go:630	Beat ID: 65f74ace-eec7-4c8a-8462-b35ea453ec98
2020-05-14T15:55:55.016+0800	INFO	[seccomp]	seccomp/seccomp.go:124	Syscall filter successfully installed
2020-05-14T15:55:55.016+0800	INFO	[beat]	instance/beat.go:958	Beat info	{"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "65f74ace-eec7-4c8a-8462-b35ea453ec98"}}}
2020-05-14T15:55:55.016+0800	INFO	[beat]	instance/beat.go:967	Build info	{"system_info": {"build": {"commit": "d57bcf8684602e15000d65b75afcd110e2b12b59", "libbeat": "7.6.2", "time": "2020-03-26T05:23:38.000Z", "version": "7.6.2"}}}
2020-05-14T15:55:55.016+0800	INFO	[beat]	instance/beat.go:970	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.13.8"}}}
2020-05-14T15:55:55.017+0800	INFO	[beat]	instance/beat.go:974	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-03T22:04:38+08:00","containerized":true,"name":"779931187e5f","ip":["127.0.0.1/8","172.1.1.235/24","172.18.0.8/16"],"kernel_version":"4.15.0-91-generic","mac":["02:42:ac:01:01:eb","02:42:ac:12:00:08"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":7,"patch":1908,"codename":"Core"},"timezone":"CST","timezone_offset_sec":28800}}}
2020-05-14T15:55:55.018+0800	INFO	[beat]	instance/beat.go:1003	Process info	{"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":null,"effective":null,"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-05-14T15:55:51.780+0800"}}}
2020-05-14T15:55:55.018+0800	INFO	instance/beat.go:298	Setup Beat: filebeat; Version: 7.6.2
2020-05-14T15:55:55.018+0800	INFO	[index-management]	idxmgmt/std.go:182	Set output.elasticsearch.index to 'filebeat-7.6.2' as ILM is enabled.
2020-05-14T15:55:55.018+0800	INFO	elasticsearch/client.go:174	Elasticsearch url: https://elasticsearch:9200
2020-05-14T15:55:55.018+0800	INFO	[publisher]	pipeline/module.go:110	Beat name: 779931187e5f
2020-05-14T15:55:55.019+0800	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2020-05-14T15:55:55.019+0800	INFO	instance/beat.go:439	filebeat start running.
2020-05-14T15:55:55.019+0800	INFO	registrar/migrate.go:104	No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2020-05-14T15:55:55.019+0800	INFO	registrar/migrate.go:112	Initialize registry meta file
2020-05-14T15:55:55.390+0800	INFO	registrar/registrar.go:108	No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2020-05-14T15:55:55.810+0800	INFO	registrar/registrar.go:145	Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2020-05-14T15:55:55.810+0800	INFO	registrar/registrar.go:152	States Loaded from registrar: 0
2020-05-14T15:55:55.810+0800	INFO	crawler/crawler.go:72	Loading Inputs: 1
2020-05-14T15:55:55.811+0800	INFO	log/input.go:152	Configured paths: [/var/lib/docker/containers/*/*.log]
2020-05-14T15:55:55.811+0800	INFO	input/input.go:114	Starting input of type: log; ID: 17886728773476438255
2020-05-14T15:55:55.811+0800	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 1
2020-05-14T15:56:25.020+0800	INFO	[monitoring]	log/log.go:145	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":22}},"total":{"ticks":160,"time":{"ms":169},"value":0},"user":{"ticks":140,"time":{"ms":147}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":6},"info":{"ephemeral_id":"9cf80c71-7335-41f9-85e0-1f6790a04578","uptime":{"ms":31123}},"memstats":{"gc_next":12152688,"memory_alloc":7983088,"memory_total":13144288,"rss":48726016},"runtime":{"goroutines":20}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":2},"load":{"1":2.4,"15":0.53,"5":1.01,"norm":{"1":1.2,"15":0.265,"5":0.505}}}}}}
2020-05-14T15:56:55.019+0800	INFO	[monitoring]	log/log.go:145	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":2}},"total":{"ticks":160,"time":{"ms":3},"value":160},"user":{"ticks":140,"time":{"ms":1}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":6},"info":{"ephemeral_id":"9cf80c71-7335-41f9-85e0-1f6790a04578","uptime":{"ms":61122}},"memstats":{"gc_next":12152688,"memory_alloc":8290184,"memory_total":13451384},"runtime":{"goroutines":20}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":2.36,"15":0.59,"5":1.14,"norm":{"1":1.18,"15":0.295,"5":0.57}}}}}}

I tried the command below and it shows a lot of log files.

ls -l /var/lib/docker/containers/*/*.log

Looking forward to your help.

Hey @yshzhao, welcome to discuss :slight_smile:

Your issue seems to be a different one; could you please open a new topic here?

Thanks!

Hey @jsoriano,

Did you get a chance to look into my reply? Could you please help me with getting the logs into Kibana? Filebeat is picking them up now, but I don't see the logs in Kibana.

Looking forward to hearing from you.

Oops, sorry, I didn't see your last comment.

Where are you missing your logs? Do you see anything in the Discover view in Kibana, or in the Logs UI?

If they don't appear in any of these views:

  • Double-check that the index pattern you are using matches your indexes (in principle it should be filebeat-*).
  • Try searching the filebeat indexes for any result; you can do this by running the query GET filebeat-*/_search in the developer console (see the sketch below). If you get any results, check whether the timestamps are correct.
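
For example, a minimal query sketch for the developer console (the size and sort are only illustrative; @timestamp is the event time field Filebeat writes):

  GET filebeat-*/_search
  {
    "size": 5,
    "sort": [
      { "@timestamp": { "order": "desc" } }
    ]
  }

If this returns hits but Discover shows nothing, also check the time range selected in Kibana.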

I am not sure where I am missing the logs. In the Kibana Discover view, I can see only one index pattern, ilm-history-*. I attached a screenshot for your reference; no other indexes are there.

But when I checked under the Index Management option, I can see two indices: one is 'ilm-history-*' and the other is 'filebeat-7.5.2'. Below is the screenshot for reference:

I really don't get what I am missing. Let me know, @jsoriano, if you need more information.

You should have a filebeat-* index pattern; this is created by filebeat setup when installing the dashboards. You can also create it manually as filebeat-*. When creating it manually, it is important to also set the ID to filebeat-* in the advanced options.
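
If you prefer to let Filebeat create it, re-running setup should also work (a sketch; adjust the config path to yours):

  # Loads the Kibana dashboards and creates the filebeat-* index pattern
  filebeat setup --dashboards -c /etc/filebeat/filebeat.yml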

In any case, the filebeat index you have has a docs count of 0, so it seems that no logs are being collected yet. Is Snort still writing logs to these files?
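
One quick way to confirm whether documents are arriving is to check the docs count from the developer console:

  # List filebeat indices with their document counts
  GET _cat/indices/filebeat-*?v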

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.