Hi
I'm using ELK 6.3.1 and have only recently started using it. I ship some of the cluster logs to Logstash through Filebeat, but I can only deliver the most recent ones. Some logs were generated before I deployed ELK; can Filebeat collect those for analysis as well?
Please advise. Thank you.
Hi ss,
Are the logs written to a file? If so, you can process them whenever you want as long as they are not rotated.
For other forms of input such as TCP/UDP and so on, you will need to build either an HA setup in Logstash or some other redundancy mechanism.
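For reference, here is a minimal sketch of a Filebeat 6.x log input that also picks up files that already exist on disk; the path and the Logstash host are placeholders, not taken from your setup:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    # Files that already match this glob when Filebeat starts are read
    # from the beginning, unless the registry records them as already read.
    - /var/log/myapp/*.log

output.logstash:
  hosts: ["localhost:5044"]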
Hi
First of all, thank you very much for replying.
I know that ELK can collect newly generated logs in real time. The problem is that some logs were produced by programs that were running before ELK was deployed, and the requirement is to collect those logs as well. Do you have any good approaches?
I am not sure I understood your requirements correctly. I assume you want to collect logs from files which were written by a program before Filebeat was started. Please correct me if that's not the case.
Filebeat is capable of reading from files which are no longer being written to. Filebeat starts reading a file and closes the FD when it is finished. It tries to read the file again every time scan_frequency has elapsed, but if the file hasn't changed since Filebeat last encountered it, Filebeat does not read it again. Is it possible Filebeat already read that file before, so it does not pick it up again?
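For reference, a minimal sketch of how scan_frequency is set on a log input; 10s matches the documented default, and the path is a placeholder:

filebeat.inputs:
- type: log
  paths:
    - /var/log/myapp/*.log
  # How often Filebeat checks the configured paths for new or changed files.
  # Unchanged files are skipped on each scan.
  scan_frequency: 10s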
Could you please share your config formatted using </>? Also, please attach the debug logs of Filebeat (./filebeat -e -d "*").
Hello
First of all, thank you very much for replying to my questions. As you said, I need Filebeat to read historical logs, that is, files that are no longer being written to. I also saw in the official documentation that the ignore_older parameter is described as giving access to last week's files as well as the most recent ones, but when I configured it, it did not take effect, and I don't understand how this parameter is used. My understanding is that if I start Filebeat with this parameter set, I can collect logs from a week ago. Is that right? Or do you have any other solutions?
Please advise. Thank you.
The following is the configuration file for filebeat:
filebet:
- type: log
  tail_files: false
  ignore_older: 48h
  enabled: true
  paths:
    - /var/log/messages*
  fields:
    document_type: message_log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

#-------------------------- Logstash output ------------------------------
output.logstash:
  hosts: ["172.16.0.5:5044"]
This is the filebeat debug log:
[root@elk_test1 /etc/filebeat]
@ 10:28:27 @# /usr/share/filebeat/bin/filebeat -e -d "*"
2018-08-16T10:29:50.568+0800 INFO instance/beat.go:492 Home path: [/usr/share/filebeat/bin] Config path: [/usr/share/filebeat/bin] Data path: [/usr/share/filebeat/bin/data] Logs path: [/usr/share/filebeat/bin/logs]
2018-08-16T10:29:50.568+0800 DEBUG [beat] instance/beat.go:519 Beat metadata path: /usr/share/filebeat/bin/data/meta.json
2018-08-16T10:29:50.568+0800 INFO instance/beat.go:499 Beat UUID: 068e423e-9cf8-4ed5-a1c3-567feca264fb
2018-08-16T10:29:50.568+0800 INFO [beat] instance/beat.go:716 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat/bin", "data": "/usr/share/filebeat/bin/data", "home": "/usr/share/filebeat/bin", "logs": "/usr/share/filebeat/bin/logs"}, "type": "filebeat", "uuid": "068e423e-9cf8-4ed5-a1c3-567feca264fb"}}}
2018-08-16T10:29:50.568+0800 INFO [beat] instance/beat.go:725 Build info {"system_info": {"build": {"commit": "ed42bb85e72ae58cc09748dc1825159713e0ffd4", "libbeat": "6.3.1", "time": "2018-06-29T21:09:35.000Z", "version": "6.3.1"}}}
2018-08-16T10:29:50.568+0800 INFO [beat] instance/beat.go:728 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":1,"version":"go1.9.4"}}}
2018-08-16T10:29:50.569+0800 INFO [beat] instance/beat.go:732 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-08-15T09:28:14+08:00","containerized":true,"hostname":"elk_test1","ips":["127.0.0.1/8","::1/128","172.16.0.2/32","fe80::e6c5:4ad9:9c0c:5a0f/64"],"kernel_version":"3.10.0-693.el7.x86_64","mac_addresses":["00:0c:29:05:46:20"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":4,"patch":1708,"codename":"Core"},"timezone":"CST","timezone_offset_sec":28800,"id":"2e58c05f4aa24e779395b4f65903af39"}}}
2018-08-16T10:29:50.569+0800 INFO [beat] instance/beat.go:761 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/etc/filebeat", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 4479, "ppid": 1168, "seccomp": {"mode":"disabled"}, "start_time": "2018-08-16T10:29:49.810+0800"}}}
2018-08-16T10:29:50.569+0800 INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.1
2018-08-16T10:29:50.569+0800 DEBUG [beat] instance/beat.go:242 Initializing output plugins
2018-08-16T10:29:50.569+0800 DEBUG [processors] processors/processor.go:49 Processors:
2018-08-16T10:29:50.570+0800 DEBUG [publish] pipeline/consumer.go:120 start pipeline event consumer
2018-08-16T10:29:50.570+0800 INFO pipeline/module.go:81 Beat name: elk_test1
2018-08-16T10:29:50.570+0800 ERROR fileset/modules.go:101 Not loading modules. Module directory not found: /usr/share/filebeat/bin/module
2018-08-16T10:29:50.570+0800 INFO instance/beat.go:315 filebeat start running.
2018-08-16T10:29:50.570+0800 DEBUG [registrar] registrar/registrar.go:96 Registry file set to: /usr/share/filebeat/bin/data/registry
2018-08-16T10:29:50.570+0800 INFO registrar/registrar.go:116 Loading registrar data from /usr/share/filebeat/bin/data/registry
2018-08-16T10:29:50.570+0800 INFO registrar/registrar.go:127 States Loaded from registrar: 0
2018-08-16T10:29:50.570+0800 WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-16T10:29:50.570+0800 INFO crawler/crawler.go:48 Loading Inputs: 0
2018-08-16T10:29:50.570+0800 DEBUG [cfgfile] cfgfile/reload.go:90 Checking module configs from: /usr/share/filebeat/bin/modules.d/*.yml
2018-08-16T10:29:50.570+0800 DEBUG [cfgfile] cfgfile/reload.go:104 Number of module configs found: 0
2018-08-16T10:29:50.570+0800 INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 0
2018-08-16T10:29:50.570+0800 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-08-16T10:29:50.570+0800 DEBUG [registrar] registrar/registrar.go:158 Starting Registrar
2018-08-16T10:29:50.570+0800 INFO cfgfile/reload.go:122 Config reloader started
2018-08-16T10:29:50.570+0800 DEBUG [cfgfile] cfgfile/reload.go:146 Scan for new config files
2018-08-16T10:29:50.570+0800 DEBUG [cfgfile] cfgfile/reload.go:165 Number of module configs found: 0
2018-08-16T10:29:50.570+0800 INFO cfgfile/reload.go:214 Loading of config files completed.
2018-08-16T10:30:20.574+0800 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":{"ms":2}},"total":{"ticks":0,"time":{"ms":12},"value":0},"user":{"ticks":0,"time":{"ms":10}}},"info":{"ephemeral_id":"72af64bf-4c44-44de-9b8c-20172c425a20","uptime":{"ms":30006}},"memstats":{"gc_next":4473924,"memory_alloc":3087656,"memory_total":3087656,"rss":11964416}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}
^C2018-08-16T10:30:23.679+0800 DEBUG [service] service/service.go:34 Received sigterm/sigint, stopping
2018-08-16T10:30:23.679+0800 INFO beater/filebeat.go:420 Stopping filebeat
2018-08-16T10:30:23.679+0800 INFO crawler/crawler.go:109 Stopping Crawler
2018-08-16T10:30:23.679+0800 INFO crawler/crawler.go:119 Stopping 0 inputs
2018-08-16T10:30:23.679+0800 INFO cfgfile/reload.go:217 Dynamic config reloader stopped
2018-08-16T10:30:23.679+0800 INFO crawler/crawler.go:135 Crawler stopped
2018-08-16T10:30:23.679+0800 INFO registrar/registrar.go:247 Stopping Registrar
2018-08-16T10:30:23.679+0800 INFO registrar/registrar.go:173 Ending Registrar
2018-08-16T10:30:23.679+0800 DEBUG [registrar] registrar/registrar.go:291 Write registry file: /usr/share/filebeat/bin/data/registry
As you said, Filebeat can read historical logs that are no longer being written to. Could you please point me to a solution? Your help would be highly appreciated.
Your configuration seems incorrect. It should look like this:
filebeat.inputs:
- type: log
  tail_files: false
  ignore_older: 48h
  enabled: true
  paths:
    - /var/log/messages*
  fields:
    document_type: message_log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

#-------------------------- Logstash output ------------------------------
output.logstash:
  hosts: ["172.16.0.5:5044"]
Also, the fields part of your configuration is not straightforward. Do you want to add the field you specified to every event produced by Filebeat, or only to events coming from the log input you added?
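For illustration, a sketch of both placements; the field name is taken from your config, everything else is a placeholder:

# Per-input: the field is added only to events from this input.
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages*
  fields:
    document_type: message_log

# Global: a top-level fields section is added to every event
# this Filebeat instance produces, regardless of input.
fields:
  document_type: message_log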
Hi
Thanks for pointing out my mistakes. I'm still in the testing phase, so some features aren't configured in much detail; for now the setup only reads new logs. What I need is to read logs that are no longer being written to and that were generated before Filebeat was deployed. Can you point out how to configure this?
Please advise. Thank you.
I am sorry, but I might not have understood what you asked previously. To clear things up: do you want to skip old files and only read files which are written after Filebeat is started?
Sorry, maybe I didn't explain it in enough detail. Consider a scenario like this: application services running on some servers in a production environment have already generated a lot of logs, and now we deploy Filebeat to make it easier to view them. How can we collect the logs that were generated before Filebeat was deployed?
Thank you for any advice and help.
OK, I think I got it now. The contradiction which confused me is that you have ignore_older: 48h. This option tells Filebeat to ignore every input file that was last modified more than 48 hours (2 days) ago, so any historical logs older than that are skipped. If that's not what you want, delete this option from the config.
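As a sketch, your input with ignore_older removed, so matching files of any age are picked up (ignore_older defaults to 0, i.e. disabled):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages*
  # No ignore_older here: with the default of 0, file age no longer matters,
  # so historical files written before Filebeat was deployed are harvested too.
  fields:
    document_type: message_log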
If Filebeat still does not pick up your files, it means Filebeat has already encountered those files and successfully forwarded them to the configured output. If you are sure the logs haven't arrived, you could delete data/registry. However, this could lead to duplicated logs if the events you cannot find right now are not really missing, so be careful.
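If you do go that route, an alternative to deleting data/registry by hand is to point Filebeat at a fresh registry file for a one-off historical import. This is only a sketch, assuming the Filebeat 6.x filebeat.registry_file setting; the path is a placeholder:

# Filebeat records how far it has read each file in the registry.
# Starting with an empty registry makes it re-read every matching file
# from the beginning, which can duplicate events already shipped.
filebeat.registry_file: /var/lib/filebeat/registry-historical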
OK, thank you for your answers to my questions and for the suggestions. I will follow your advice!