Logstash has suddenly stopped reading input files

Hello everyone,

I have an interesting one. I have a working v7.7 logstash config (on RHEL 7.x) which has suddenly stopped ingesting files. I am using the file input plugin in read mode to ingest single-use CSV files (each about 2 lines - a header and a data line).

I suspect this has to do with the recently increased number of files being received (the counts have gone from 6 digits to 8-9 digits). The pipeline recovers for a short period and then jams again.

I have looked into the usual places, without luck. Any suggestions on what I should be looking into?

Does the permission of the files change in RHEL 7 (so Logstash can't read them anymore), or is there any FIM blocking them?
What's your config (inputs and filters)?

No, the usual checks I mentioned include file permission checks. Also, I forgot to add that the Logstash log files are oddly silent (other than the odd unrelated warning or two).

I will post my config on Monday.

My Logstash config is as follows (I have multiple file inputs with nearly identical specs):

    input {
      file {
        path => "/what/ever/*.csv"
        file_completed_action => "log_and_delete"
        file_completed_log_path => "/var/log/logstash/whatever1_files_processed.log"
        mode => "read"
        tags => ["sometag1", "sometag2"]
      }
    }
    filter {
        # stuff that works
    }

Worth reiterating:

  • the files I am reading are single-use CSVs which are about 2 lines long each.
  • once processed, additional content will NOT be added to the file
  • the number of CSV files is very high (6 digits and above)

Also, following the theory that this issue may be related to the number of files, I have made the following changes to various Logstash config files:

  • /etc/systemd/system/logstash.service: LimitNOFILE=50000
  • /etc/default/logstash: LS_OPEN_FILES="50000"

I have made sure that these are in effect by checking /proc/<pid>/limits.
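For anyone checking the same thing, this is roughly what I ran (a sketch; it assumes the systemd unit is named logstash):

# resolve the main PID of the logstash systemd unit
pid=$(systemctl show -p MainPID logstash | cut -d= -f2)
# confirm the raised file-descriptor limit is live for that process
grep 'Max open files' "/proc/${pid}/limits"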

Any other suggestions?

I wonder if you are hitting the issue that this PR fixes.


Does sound likely. Unfortunately, it is not feasible to quickly upgrade the logstash-input-file plugin.

But this has given me a few ideas to try. Will do this and share the results. Thanks again.

Right, so I tried trickle-feeding the files in, starting with small batches and stepping up progressively through 1K, 5K and 10K batches. I have started noticing some odd behaviour:

  • Sometimes Logstash needs to be restarted for the new files to be picked up.
  • Oddly enough, accessing the <path.data>/plugins/inputs/file/.sincedb_* files with commands like wc -l seems to help - I have no idea why?! :face_with_raised_eyebrow: (see the example after this list)
  • The above steps stop working completely once the file count crosses 70K or so. Removing the processed files does not help; the only fix at that point is to remove the sincedb files. Obviously this is not a practical solution - only an effort-intensive workaround :unamused:
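For reference, this is roughly how I am poking at the sincedb files (a sketch; /var/lib/logstash as the path.data default is an assumption based on an RPM install, adjust if yours differs):

# list the file input's sincedb files under path.data
# (/var/lib/logstash is the RPM default - an assumption here)
ls -la /var/lib/logstash/plugins/inputs/file/
# count tracked entries; in read mode each completed file leaves
# an entry behind until it expires
wc -l /var/lib/logstash/plugins/inputs/file/.sincedb_*

If sincedb growth is what jams things, the plugin's sincedb_clean_after option (which controls how long entries for already-read files are kept) might also be worth lowering, though I have not tried that yet.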

Also, back to the PR mentioned previously - specifically, the bin/logstash-plugin update logstash-input-file bit. The Elastic stack does not have access to the internet. Is there an offline way to do this?

Thanks.

On the question of offline plugin updates, I spotted this in the docs. According to the docs, one can prepare and install offline packs for one or more plugins as follows:

# prep an offline pack
bin/logstash-plugin prepare-offline-pack --output /path/to/logstash-input-file.zip --overwrite logstash-input-file
# install the offline pack on the target environment
bin/logstash-plugin install file:///path/to/logstash-input-file.zip
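Note that prepare-offline-pack packages plugins from the Logstash install it runs on, so it needs a staging machine that has internet access (or already has the updated plugin installed), and as I understand it the pack should be built against the same Logstash version as the offline target.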
