I have finished the implementation of ELK + Filebeat, and it works well on my laptop.
ELK is installed on a Linux server, and Filebeat should monitor files on a Windows Server 2008 machine.
When I run Filebeat 1.0.1 (I have also tried 1.0.0) from the command prompt, it works well (at least it sends messages to Logstash in real time), although the system becomes visibly slow when I have lots of logs (around 50 MB). But as a service (the install works properly), it starts to insert logs into ELK, and then after a few hundred logs it freezes. The file offset line in the registry file is not updated, even though the last-modified time of the registry changes when I stop the service. For this reason, when I start Filebeat again it reads the same logs, and I end up with duplicates in Elasticsearch.
I have also checked Filebeat's log file, and I believe this could be the reason:
2016-01-12T16:24:05+02:00 ERR Error sending/writing event: write /dev/stdout: The handle is invalid.
What do you think about this issue?
Thank you!
Ovidiu
P.S. While I was writing this post, Filebeat running from the command prompt detected and inserted over 7000 lines, and it seems to be fine.
Update: I have counted the inserted logs after the service freezes, and I got these numbers: 1024, 431, 479, 479, 1024. The number 1024 caught my attention, so I checked filebeat.yml and found this setting:
# General filebeat configuration options
# Event count spool threshold - forces network flush if exceeded
The best way to debug would be to set the log level to debug in the config file and post an excerpt here. spool_size should not be the cause of a freeze.
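A minimal sketch of the relevant logging section in filebeat.yml (the log path below is just an example; adjust it to your setup):

```yaml
# Sketch: enable debug-level logging to files in filebeat.yml.
# The path is an example, not a required location.
logging:
  level: debug
  to_files: true
  files:
    path: C:/ProgramData/filebeat/logs
```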
Hi Rufin,
I have followed your suggestion, and I have news. I have also played with the spool_size value, and it seems to have a direct impact, though I'm not sure it is the cause of the freeze.
With debug activated, I ran three simulations:
1st. spool_size = 1024 (default). In this test, after 1024 messages were inserted into Elasticsearch, it froze with these logs:
2016-01-13T11:28:57+02:00 DBG Publish: {...}
2016-01-13T11:28:57+02:00 DBG Publish: {...}
2016-01-13T11:28:57+02:00 DBG Publish: {...}
2016-01-13T11:28:57+02:00 DBG Publish: {...}
2016-01-13T11:28:57+02:00 DBG Forward preprocessed events
2016-01-13T11:28:57+02:00 DBG output worker: publish 1024 events
2016-01-13T11:28:57+02:00 DBG output worker: publish 1024 events
2016-01-13T11:28:57+02:00 DBG Try to publish %!s(int=1024) events to logstash with window size %!s(int=10)
2016-01-13T11:28:57+02:00 ERR Error sending/writing event: write /dev/stdout: The handle is invalid.
2016-01-13T11:28:57+02:00 DBG %!s(int=10) events out of %!s(int=1024) events sent to logstash. Continue sending ...
2016-01-13T11:28:57+02:00 DBG Try to publish %!s(int=1014) events to logstash with window size %!s(int=15)
2016-01-13T11:28:58+02:00 DBG %!s(int=15) events out of %!s(int=1014) events sent to logstash. Continue sending ...
2016-01-13T11:28:58+02:00 DBG Try to publish %!s(int=999) events to logstash with window size %!s(int=22)
2016-01-13T11:28:58+02:00 DBG %!s(int=22) events out of %!s(int=999) events sent to logstash. Continue sending ...
2016-01-13T11:28:58+02:00 DBG Try to publish %!s(int=977) events to logstash with window size %!s(int=33)
2016-01-13T11:28:58+02:00 DBG %!s(int=33) events out of %!s(int=977) events sent to logstash. Continue sending ...
2016-01-13T11:28:58+02:00 DBG Try to publish %!s(int=944) events to logstash with window size %!s(int=49)
2016-01-13T11:28:58+02:00 DBG %!s(int=49) events out of %!s(int=944) events sent to logstash. Continue sending ...
2016-01-13T11:28:58+02:00 DBG Try to publish %!s(int=895) events to logstash with window size %!s(int=73)
2016-01-13T11:28:58+02:00 DBG %!s(int=73) events out of %!s(int=895) events sent to logstash. Continue sending ...
2016-01-13T11:28:58+02:00 DBG Try to publish %!s(int=822) events to logstash with window size %!s(int=109)
2016-01-13T11:28:59+02:00 DBG %!s(int=109) events out of %!s(int=822) events sent to logstash. Continue sending ...
2016-01-13T11:28:59+02:00 DBG Try to publish %!s(int=713) events to logstash with window size %!s(int=163)
2016-01-13T11:28:59+02:00 DBG %!s(int=163) events out of %!s(int=713) events sent to logstash. Continue sending ...
2016-01-13T11:28:59+02:00 DBG Try to publish %!s(int=550) events to logstash with window size %!s(int=244)
2016-01-13T11:28:59+02:00 DBG %!s(int=244) events out of %!s(int=550) events sent to logstash. Continue sending ...
2016-01-13T11:28:59+02:00 DBG Try to publish %!s(int=306) events to logstash with window size %!s(int=366)
2016-01-13T11:29:00+02:00 DBG %!s(int=306) events out of %!s(int=306) events sent to logstash. Continue sending ...
2016-01-13T11:29:07+02:00 DBG Start next scan
2016-01-13T11:29:07+02:00 DBG scan path E:\Logs\Pl*
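As an aside, the window sizes in the log above (10, 15, 22, 33, 49, ...) follow a slow-start pattern: after each successful batch, the window appears to grow by roughly a factor of 1.5. A small sketch reproducing the progression; the growth rule here is inferred from the logged numbers, not taken from the Filebeat source:

```python
# Reproduce the batch sizes seen in the debug log above.
# Inferred rule: the window grows by 3/2 (integer division) after
# each successful batch, and the last batch is capped by what remains.
def window_progression(total_events, start_window=10):
    batches = []
    remaining = total_events
    window = start_window
    while remaining > 0:
        sent = min(window, remaining)   # can't send more than remains
        batches.append(sent)
        remaining -= sent
        window = window * 3 // 2        # slow-start growth
    return batches

print(window_progression(1024))
# Matches the log: [10, 15, 22, 33, 49, 73, 109, 163, 244, 306]
```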
2nd. spool_size = 2048. In this test, after 2048 messages were inserted into Elasticsearch, it froze with similar logs; the only difference is this:
2016-01-13T11:35:21+02:00 DBG Forward preprocessed events
2016-01-13T11:35:21+02:00 DBG output worker: publish 2048 events
2016-01-13T11:35:21+02:00 DBG output worker: publish 2048 events
2016-01-13T11:35:21+02:00 DBG Try to publish %!s(int=2048) events to logstash with window size %!s(int=10)
2016-01-13T11:35:21+02:00 ERR Error sending/writing event: write /dev/stdout: The handle is invalid.
2016-01-13T11:35:21+02:00 DBG %!s(int=10) events out of %!s(int=2048) events sent to logstash. Continue sending ...
2016-01-13T11:35:21+02:00 DBG Try to publish %!s(int=2038) events to logstash with window size %!s(int=15)
2016-01-13T11:35:21+02:00 DBG %!s(int=15) events out of %!s(int=2038) events sent to logstash. Continue sending ...
3rd. spool_size = 100. In this test, after 100 messages were inserted into Elasticsearch, it froze with similar logs.
I also tried to attach the full log files from the simulations, but it seems I cannot.
Can you share your Filebeat configuration? Despite the /dev/stdout error (which is odd), I don't see any signs in the logs of Filebeat being frozen.
Hi,
Usually I don't have access to the Logstash server (I'm a Linux noob), but I can ask for logs if you want. For this test I use the same Logstash instance with two Windows Server 2008 machines sending data to it. On one server, Filebeat runs from the command prompt (and works well); on the other, I'm trying to make it work as a service. Both have the same Filebeat configuration.
This is the Filebeat configuration:
filebeat:
  prospectors:
    - paths:
        - D:\Logs\Platform_*
      input_type:
  spool_size: 1024
  registry_file: "D:/ElasticSearch/filebeat1/registry"
output:
  logstash:
    hosts: ["10.98.25.182:9202"]
    index: "st"
  console:
    pretty: true
shipper:
logging:
  files:
    path: D:/ElasticSearch/filebeat1/Logs
    # I made it bigger for the test
    rotateeverybytes: 20971520 # = 20MB
  level: debug
It seems Windows services don't like having the console output enabled. After I removed the console output, it works well. Within a few minutes, I already had all the logs (122k) in Elasticsearch. I'm going to have a beer; it was a terrible day.
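For anyone hitting the same issue, the fix is simply dropping the console block from the output section, leaving only Logstash:

```yaml
# Output section that works when Filebeat runs as a Windows service.
# The console output is removed: a Windows service has no valid stdout
# handle, so writing to /dev/stdout fails ("The handle is invalid").
output:
  logstash:
    hosts: ["10.98.25.182:9202"]
    index: "st"
```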
Since the Windows service cannot write to the console output, that failure blocks the other output as well. The output code in the next major version of Filebeat is already improved, so this kind of problem will be much easier to detect.