Logstash 6.5.1 : undefined method `path' for nil:NilClass


(Gilles Figoni) #1

I have an ELK stack installed on my workstation (windows 10).
I recently migrated my ELK stack from v6.3.2 to the latest v6.5.1.

On this local stack, I push a fair amount of log files through Logstash to Elasticsearch daily.

I had no problem with Logstash 6.3.2, but Logstash 6.5.1 generates this error repeatedly: the file input plugin crashes and restarts over and over while processing:

[2018-11-28T09:48:02,531][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::File mode=>"read", path=>["D:/path/**/file*"], codec=><LogStash::Codecs::Multiline pattern=>"^%{LOGLEVEL}\\|", charset=>"Windows-1252", what=>"previous", id=>"9ecca431-85c4-4be7-bc85-b7a5802e1d55", max_lines=>1000000000, negate=>true, enable_metric=>true, multiline_tag=>"multiline", max_bytes=>10485760>, add_field=>{"type"=>"ate-bootstrap"}, file_sort_by=>"path", id=>"d004ada1a7083053ed9583a14313b887d99e0160fcaa208b6dae8adbe1ac3aad", sincedb_path=>"D:/path/sincedb", file_completed_action=>"log", file_completed_log_path=>"D:/path/conf.log", enable_metric=>true, stat_interval=>1.0, discover_interval=>15, sincedb_write_interval=>15.0, start_position=>"end", delimiter=>"\n", close_older=>3600.0, sincedb_clean_after=>1209600.0, file_chunk_size=>32768, file_chunk_count=>140737488355327, file_sort_direction=>"asc">
  Error: undefined method `path' for nil:NilClass
  Exception: NoMethodError
  Stack: C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/sincedb_value.rb:71:in `reading_completed'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/sincedb_collection.rb:165:in `reading_completed'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/handlers/read_file.rb:22:in `handle_specifically'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/handlers/base.rb:26:in `handle'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/processor.rb:35:in `read_file'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/processor.rb:102:in `block in process_active'
org/jruby/RubyArray.java:1734:in `each'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/processor.rb:86:in `process_active'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/read_mode/processor.rb:45:in `process_all_states'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/watch.rb:67:in `iterate_on_state'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/watch.rb:45:in `subscribe'
C:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/filewatch/observing_read.rb:12:in `subscribe'
c:/tools/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.6/lib/logstash/inputs/file.rb:339:in `run'
c:/tools/logstash/logstash-core/lib/logstash/pipeline.rb:409:in `inputworker'
c:/tools/logstash/logstash-core/lib/logstash/pipeline.rb:403:in `block in start_input'
[2018-11-28T09:48:03,562][INFO ][filewatch.observingread  ] QUIT - closing all files and shutting down.
[2018-11-28T09:48:03,562][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections

I adapted my Logstash conf to use the recent "read mode" feature.

Here is the input plugin part of my 6.3.2 config:

    file {
        path => "D:/path/**/file.*"
        sincedb_path => "D:/path/sincedb"
        start_position => "beginning"
        ignore_older => 0
        close_older => 10

        codec => multiline {
            max_lines => 1000000000
            pattern => "^%{LOGLEVEL}\|"
            negate => true
            what => previous
            charset => "Windows-1252"
        }
    }

And the adapted version for 6.5.1:

    file {
        path => "D:/path/**/file.*"
        sincedb_path => "D:/path/sincedb"
        mode => "read"
        file_completed_action => "log"
        file_completed_log_path => "D:/path/logstash.log"
        file_sort_by => "path"

        codec => multiline {
            max_lines => 1000000000
            pattern => "^%{LOGLEVEL}\|"
            negate => true
            what => previous
            charset => "Windows-1252"
        }
    }

The rest of the configuration is unchanged.

I had to fall back to my previous Logstash version; 6.5.1 is unusable in my context.

Thanks for your help!


(Lewis Barclay) #2

Have you tried removing the wildcard star at the end of the file path (assuming it's not just redacted) and using the actual extension, just as a test?


(Gilles Figoni) #3

The path is not really the one I quoted... I obfuscated some data, but I have to handle files following this pattern: xxx[.yyyy-mm-dd-hh]?
So the '*' at the end of my path is required.
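
For illustration, a quick Ruby glob (the file input uses the same kind of pattern matching) shows why the trailing '*' is needed to pick up both the bare file and its dated rotations (the file names below are made up for the example):

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical names following the xxx[.yyyy-mm-dd-hh]? rotation scheme.
Dir.mktmpdir do |dir|
  %w[server.log server.log.2018-11-27-23 server.log.2018-11-28-09].each do |name|
    FileUtils.touch(File.join(dir, name))
  end

  # Without the trailing wildcard, only the bare file matches:
  Dir.glob(File.join(dir, 'server.log')).size   # => 1
  # With it, the dated rotations are matched as well:
  Dir.glob(File.join(dir, 'server.log*')).size  # => 3
end
```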


(Lewis Barclay) #4

I know :slight_smile: I'm asking if you can try it without the wildcard, so with an absolute path, just as a test!


(Gilles Figoni) #5

The problem is that Logstash does in fact manage to process my files, but every once in a while (several times per run) the file plugin crashes and restarts... and when it crashes too often, it slows the process down significantly.
It doesn't seem the crash is tied to a specific file: I enabled debug logging, and the plugin crashes while handling different files each time I run it over the same set of log files.

As you asked, I tried with an absolute path:
path => "D:/absolute/path/file.log"
but that way it handles only one file... and the plugin didn't crash.


(Lewis Barclay) #6

OK, so we have established that the newer version of Logstash does not appear to like the wildcard option anymore; perhaps you should raise a bug ticket for this?

Another option would be Filebeat; have you considered using it?


(Gilles Figoni) #7

I don't really see the correlation: the file plugin seems to crash randomly when handling lots of files. The fact that it doesn't crash on a single file doesn't imply the problem is linked to the wildcard option, does it?
However, I'll raise a bug.

Filebeat could be an option, but it complicates my architecture: the files are all local to the machine where ELK is installed.


(Lewis Barclay) #8

You don't see the correlation? With the wildcard, you said Logstash was failing in the first example; with a single file it works?

I wouldn't call Filebeat a complication; it's tiny and performs extremely well. However, I can understand.


(Charles QW Smith) #9

Hi, Gilles. I am getting the same stack trace. I suspect a bad interaction between `mode => "read"` and sincedb database updates. Specifically, I think that the read mode feature is closing a file and setting @watched_file to nil just before a sincedb database update.
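
To make the suspected failure concrete, here is a minimal Ruby sketch (NOT the actual plugin source; the class and method names are only borrowed from the stack trace) of how a nil watched-file reference produces exactly this NoMethodError, plus a defensive variant:

```ruby
# Sketch of the suspected failure mode: a sincedb value whose associated
# watched file has already been detached (nil) crashes when the sincedb
# update touches its path.
class SincedbValue
  def initialize(watched_file)
    @watched_file = watched_file
  end

  # Mirrors the failing call in filewatch/sincedb_value.rb: if the file was
  # closed and its reference cleared before this runs, @watched_file is nil
  # and the call raises "undefined method `path' for nil:NilClass".
  def reading_completed
    @watched_file.path
  end

  # Defensive variant using the safe-navigation operator: returns nil
  # instead of crashing when the watched file is already detached.
  def reading_completed_safe
    @watched_file&.path
  end
end

value = SincedbValue.new(nil)  # file already closed/detached
begin
  value.reading_completed      # raises NoMethodError, as in the stack trace
rescue NoMethodError
  # in the real pipeline, the plugin restarts at this point
end
value.reading_completed_safe   # => nil, no crash
```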

Your path etc. are all fine. I believe `path` is indeed supposed to be an array, but it seems to work either way. In any case, this is definitely NOT the cause of the problem.

One more observation from my environment: when the plugin crashes, it does not restart gracefully and leaves plenty of files unprocessed. This may be why you are being led to believe there is a problem with your file paths. Again, your file path is NOT the problem.

I am able to work around it in my environment by effectively turning off sincedb using this in my config:
    file {
        ...
        sincedb_write_interval => 157680000
    }
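
For reference, here is a sketch of how that workaround would slot into a read-mode input like the one from the original post (the paths are placeholders and the multiline codec is omitted for brevity; 157680000 seconds is roughly five years, so the periodic sincedb flush effectively never fires mid-run):

```
file {
    path => "D:/path/**/file.*"
    sincedb_path => "D:/path/sincedb"
    mode => "read"
    file_completed_action => "log"
    file_completed_log_path => "D:/path/logstash.log"
    file_sort_by => "path"
    sincedb_write_interval => 157680000
}
```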

I agree that Filebeat is not necessary for your use case and adds unnecessary complexity. I was facing the same choice, and I am going out of my way to avoid it because it is no longer needed now that the `mode => "read"` feature exists.

Good luck.


(Yassine) #10

I am getting the same error when using read mode with Logstash 6.5.2.
The input is a folder with many files.
The error occurs when a file is closed to process the next file.

[2018-12-10T11:34:11,401][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::File start_position=>"end", mode=>"read", path=>["/data/UnProcessed/*.csv"], id=>"762db17bc413ff065e337013d441705f6ea3d6f22430824f7b419c3e63bb35dc", sincedb_path=>"/data/Logs/since_db.db", file_completed_action=>"log", file_completed_log_path=>"/data/Logs/files.log", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_8a44568b-6c4f-4cbe-8657-4d92007a343e", enable_metric=>true, charset=>"UTF-8">, stat_interval=>1.0, discover_interval=>15, sincedb_write_interval=>15.0, delimiter=>"\n", close_older=>3600.0, sincedb_clean_after=>1209600.0, file_chunk_size=>32768, file_chunk_count=>140737488355327, file_sort_by=>"last_modified", file_sort_direction=>"asc">
  Error: undefined method `path' for nil:NilClass
  Exception: NoMethodError
  Stack: C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/sincedb_value.rb:71:in `reading_completed'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/sincedb_collection.rb:165:in `reading_completed'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/handlers/read_file.rb:22:in `handle_specifically'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/handlers/base.rb:26:in `handle'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/processor.rb:35:in `read_file'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/processor.rb:104:in `block in process_active'
org/jruby/RubyArray.java:1734:in `each'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/processor.rb:88:in `process_active'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/read_mode/processor.rb:45:in `process_all_states'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/watch.rb:67:in `iterate_on_state'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/watch.rb:45:in `subscribe'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/filewatch/observing_read.rb:12:in `subscribe'
C://logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.8/lib/logstash/inputs/file.rb:339:in `run'
C://logstash/logstash-core/lib/logstash/pipeline.rb:409:in `inputworker'
C://logstash/logstash-core/lib/logstash/pipeline.rb:403:in `block in start_input'
[2018-12-10T11:34:12,486][INFO ][filewatch.observingread  ] QUIT - closing all files and shutting down.
[2018-12-10T11:34:12,518][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections

(Piers B) #11

I had a similar problem. I think it may be related to Windows and using drive letters (e.g. "D:") in the filename or pattern. I was battling a similar issue with the "path.config" setting not being accepted when it contained a drive letter. After moving everything to the same drive, I was able to eliminate the drive letter from the configuration file and no longer encounter the exception. It appears to be a defect in some areas of the code that deal with Windows paths.