In filebeat 1.1 there is a "bug" in the bulk_max_size setting: the default value is 1024, not 2048. You can change the bulk_max_size setting for the logstash output in your config file. Filebeat employs at-least-once semantics; that is, events not sent in a batch, or events that failed to send, will be re-sent. Logstash also supports partially ACKing batches, so upon failure only events not yet ACKed are sent again.
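For example, to raise it back to 2048, something like this in filebeat.yml should do (a sketch for the 1.x config format; the hosts value is just a placeholder for your setup):

```yaml
# filebeat.yml (Filebeat 1.x) — illustrative values only
output:
  logstash:
    hosts: ["localhost:5044"]   # placeholder, use your Logstash endpoint
    bulk_max_size: 2048         # default in 1.1 is 1024
```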
The window size means the batch of 2048 events is going to be split into sub-batches, which are then sent one after the other. The message is written before sending the events, not after. If sending fails, it's logged at least at Warn or Error level.
The window can shrink and grow up to bulk_max_size. The window exists for historical reasons: to try not to overload Logstash with too many events in case of failures, since the original logstash-forwarder (original lumberjack protocol) had no keep-alive mechanism. That is, beats does a slow-start, trying to figure out whether it can really send a full batch of up to bulk_max_size events, or should rather send smaller batches, giving Logstash a chance to report back in time. Granted, it's still a little wonky, but it actually did help improve at least some edge cases.
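The grow/shrink idea can be sketched roughly like this (an illustration of the slow-start concept, not the actual beats code; the doubling/halving policy and the names here are assumptions):

```go
package main

import "fmt"

// window models a TCP-slow-start-like batch window: it grows on
// successful sends up to a cap (bulk_max_size) and shrinks on failure
// so Logstash gets smaller batches after errors. Illustrative only.
type window struct {
	size int // current sub-batch size
	max  int // cap, i.e. bulk_max_size
}

// onSuccess doubles the window, capped at max (assumed policy).
func (w *window) onSuccess() {
	w.size *= 2
	if w.size > w.max {
		w.size = w.max
	}
}

// onFailure halves the window, never below 1 (assumed policy).
func (w *window) onFailure() {
	w.size /= 2
	if w.size < 1 {
		w.size = 1
	}
}

func main() {
	w := window{size: 10, max: 2048}
	for i := 0; i < 10; i++ {
		w.onSuccess()
	}
	fmt.Println(w.size) // grows 10 -> 20 -> ... and caps at 2048
	w.onFailure()
	fmt.Println(w.size) // shrinks to 1024 after a failure
}
```

The point is just that the sender probes upward instead of immediately pushing full bulk_max_size batches at a Logstash instance that may not be keeping up.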
When updating to a recent 5.x, the logstash-input-beats plugin has finally been rewritten in Java using Netty. This gives us support for an asynchronous 'batch-in-progress' signal from Logstash->beats, making the slow-start somewhat superfluous. But it's still used in case beats is talking to older Logstash instances.