I am using an ELK stack on my local machine. I have tested sending log events from a variety of inputs (file, stdin, Filebeat) to a variety of outputs (Elasticsearch, stdout) in different combinations, but the outputs always have logs missing, mostly the last event of the file.
I have also tested without using any filter and the problem still persists.
Sometimes, only when I close Logstash using Ctrl + C, it sends one of the logs that was not previously sent.
I am on Windows and all ELK versions are 7.15.0
How do I ensure that all the logs from all the input sources are sent to Elasticsearch?
A similar issue was occurring because the multiline codec was used for the input. Could you paste your config here, please?
The multiline codec will collapse multiline messages and merge them into a single event.
If you are using a Logstash input plugin that supports multiple hosts, such as the beats input plugin, you should not use the multiline codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.
(Source: Multiline codec plugin | Logstash Reference [8.11] | Elastic)
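For a file input, though, the multiline codec is a reasonable place to start. Below is a minimal sketch (the path and the whitespace-continuation pattern are assumptions, not taken from your setup); note the auto_flush_interval option, which flushes a pending multiline event after a period of inactivity, so the last event of a file is not held back waiting for a following line:

    input {
      file {
        path => "C:/logs/app/*.log"
        codec => multiline {
          # lines starting with whitespace are continuations of the previous event
          pattern => "^\s"
          what    => "previous"
          # flush a buffered multiline event after 5 seconds of inactivity,
          # otherwise the final event of a file can stay pending indefinitely
          auto_flush_interval => 5
        }
      }
    }

    output {
      elasticsearch { hosts => ["http://localhost:9200"] }
      stdout { codec => rubydebug }
    }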
Path of the sincedb database file (keeps track of the current position of monitored log files) that will be written to disk. The default will write sincedb files to <path.data>/plugins/inputs/file.
NOTE: it must be a file path and not a directory path.
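For example, a minimal sketch of pointing the file input at an explicit sincedb file (the paths are placeholders, not from the original config):

    input {
      file {
        path => "C:/logs/app/*.log"
        # must be a file path, not a directory; Logstash records the read position here
        sincedb_path => "C:/logstash/data/app.sincedb"
      }
    }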
@Pratyush_Rath, if it is the last event of the file that is not being sent, the reason is probably that this event doesn't end in a line break.
Logstash (and Filebeat) uses line break characters to know when an event has ended; if the last line does not have this character, it will not be seen as an event and will not be sent.
How are the files created? Depending on how they are created, you can change the default reading mode from tail to read. The tail mode tracks the file for new changes, and the last line will only be sent if it also has a line break character at the end; the read mode will read the entire file until EOF. The documentation describes how this works.
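As an illustration of read mode (the paths are assumptions; see the file input documentation for the full set of options):

    input {
      file {
        path => "C:/logs/batch/*.log"
        # read each file through to EOF instead of tailing it for new lines
        mode => "read"
        # by default, read mode deletes files after reading; "log" keeps them
        # and records the completed files in the log file below
        file_completed_action   => "log"
        file_completed_log_path => "C:/logstash/data/completed.log"
      }
    }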
@FALEN, in this case NUL is the Windows equivalent of /dev/null, so sincedb_path => "NUL" means that the input won't use a sincedb and the file will be reread every time Logstash is restarted.
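For example (the path is illustrative), this is how that typically looks on Windows:

    input {
      file {
        path => "C:/logs/app/*.log"
        # NUL discards position tracking, so the files are reread from the
        # beginning every time Logstash restarts
        sincedb_path => "NUL"
      }
    }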
When there are a lot of files to tail at the same time (e.g. I have lots of files in a folder which is the input for Logstash), it misses some of the small files.
Can this be solved in Logstash itself, or should I use Filebeat? I ask because I am using the multiline codec, and it is recommended in the documentation.
Are the files constantly being written to by another application, or are they written just once? If they are written just once, you can change the reading mode of the Logstash file input; if they are constantly being written, you will need to use the default mode, which is tail.
As I said, both Logstash and Filebeat need a line break character at the end of each line to know that the event has ended.