Logstash stopped working (sincedb is not updated) after upgrade from 6.2.4 -> 6.4.0 (updating to 6.5 doesn't fix it either)

With some help from others, I have identified a fix for the "unknown 0 0" inode problem.

I am debugging another "strangeness" on Windows now, so expect a bug fix release soon.

Oh, that's great news! Looking forward to it, thanks!

@RonGros
Version 4.1.10 is released with a fix for "unknown 0 0". Please verify the fix works for you.
https://rubygems.org/gems/logstash-input-file/versions/4.1.10

@guyboertje thanks!!! Is this only supposed to solve the empty-file issue, or other issues as well?

@guyboertje so I updated logstash-input-file to 4.1.10 and ran Logstash, only to get the same results. How can I make sure that Logstash is actually using the correct version? I could not see it in the logs.

You are using start_position => "beginning", right?

correct

Ok. So now, what is the TRACE logging of filewatch.discoverer when you restart LS with an empty sincedb?
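(If you need to turn that on: starting Logstash with --log.level trace enables trace logging everywhere, or, to keep the noise down, you can raise just the filewatch loggers in config/log4j2.properties with something like the two lines below; the logger id "filewatch" on the left is just an arbitrary label.)

logger.filewatch.name = filewatch
logger.filewatch.level = trace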

Not sure if this helps:

[2019-03-26T09:48:01,381][DEBUG][logstash.api.service     ] [api-service] start
[2019-03-26T09:48:01,482][TRACE][filewatch.discoverer     ] discover_files handling: {"new discovery"=>true, "watched_file details"=>"<FileWatch::WatchedFile: @filename='LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json', @state='watched', @recent_states='[:watched]', @bytes_read='0', @bytes_unread='0', current_size='317155', last_stat_size='317155', file_open?='false', @initial=true, @sincedb_key='unknown 0 0'>"}
[2019-03-26T09:48:01,491][TRACE][filewatch.sincedbcollection] associate: finding {"inode"=>"unknown", "path"=>"/E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json"}
[2019-03-26T09:48:01,502][TRACE][filewatch.sincedbcollection] associate: unmatched
[2019-03-26T09:48:01,515][TRACE][filewatch.discoverer     ] Discoverer discover_files: /E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json: skipping because it was last modified more than 0.0 seconds ago
[2019-03-26T09:48:01,538][TRACE][filewatch.discoverer     ] discover_files handling: {"new discovery"=>true, "watched_file details"=>"<FileWatch::WatchedFile: @filename='LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata - Copy.json', @state='watched', @recent_states='[:watched]', @bytes_read='0', @bytes_unread='0', current_size='317155', last_stat_size='317155', file_open?='false', @initial=true, @sincedb_key='unknown 0 0'>"}
[2019-03-26T09:48:01,541][TRACE][filewatch.sincedbcollection] associate: finding {"inode"=>"unknown", "path"=>"/E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata - Copy.json"}
[2019-03-26T09:48:01,542][TRACE][filewatch.sincedbcollection] associate: unmatched
[2019-03-26T09:48:01,543][TRACE][filewatch.discoverer     ] Discoverer discover_files: /E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata - Copy.json: skipping because it was last modified more than 0.0 seconds ago
[2019-03-26T09:48:01,547][TRACE][filewatch.discoverer     ] discover_files handling: {"new discovery"=>true, "watched_file details"=>"<FileWatch::WatchedFile: @filename='LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata.json', @state='watched', @recent_states='[:watched]', @bytes_read='0', @bytes_unread='0', current_size='317155', last_stat_size='317155', file_open?='false', @initial=true, @sincedb_key='unknown 0 0'>"}
[2019-03-26T09:48:01,549][TRACE][filewatch.sincedbcollection] associate: finding {"inode"=>"unknown", "path"=>"/E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata.json"}
[2019-03-26T09:48:01,550][TRACE][filewatch.sincedbcollection] associate: unmatched
[2019-03-26T09:48:01,551][TRACE][filewatch.discoverer     ] Discoverer discover_files: /E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-792.crashdata.json: skipping because it was last modified more than 0.0 seconds ago
[2019-03-26T09:48:01,625][TRACE][filewatch.readmode.processor] Watched processing
[2019-03-26T09:48:01,638][TRACE][filewatch.readmode.processor] Active processing
[2019-03-26T09:48:01,759][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-26T09:48:02,657][TRACE][filewatch.readmode.processor] Watched processing
[2019-03-26T09:48:02,659][TRACE][filewatch.readmode.processor] Active processing
[2019-03-26T09:48:03,661][TRACE][filewatch.readmode.processor] Watched processing
[2019-03-26T09:48:03,662][TRACE][filewatch.readmode.processor] Active processing
[2019-03-26T09:48:04,665][TRACE][filewatch.readmode.processor] Watched processing
[2019-03-26T09:48:04,666][TRACE][filewatch.readmode.processor] Active processing
[2019-03-26T09:48:05,669][TRACE][filewatch.readmode.processor] Watched processing
[2019-03-26T09:48:05,670][TRACE][filewatch.readmode.processor] Active processing
[2019-03-26T09:48:06,155][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"crashlog_beats", :thread=>"#<Thread:0x2d96c08e sleep>"}
[2019-03-26T09:48:06,339][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-03-26T09:48:06,341][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

These should not be happening:
@sincedb_key='unknown 0 0' - is this v4.1.10?

[2019-03-26T09:48:01,515][TRACE][filewatch.discoverer ] Discoverer discover_files: /E:/CrashlogCollection/LocalTest-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json: skipping because it was last modified more than 0.0 seconds ago - do you have ignore_older => 0?

Yes, I do.
I think it is 4.1.10; this is why I asked how I can make sure that this is what actually ran. I installed 4.1.10, for sure...
What I did now is rename the folder under
\logstash-6.4.0\vendor\bundle\jruby\2.3.0\gems\logstash-input-file-4.1.5
to another name and re-ran Logstash, which, I hope, means it now uses the logstash-input-file-4.1.10 directory.
Is that correct?

Well, you can run bin/logstash-plugin list --verbose
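If it still reports the old version, bin/logstash-plugin update logstash-input-file should pull 4.1.10 in cleanly; that is safer than renaming gem folders by hand.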

logstash-input-file (4.1.10)

That is weird.

hmmmm so what's next?

Please post your full LS config (redact sensitive info).

https://drive.google.com/open?id=1WMeH3tdXMIU2T-Q82bOl7HSoeKrUZoHC

That's the log; do you need the config?

Here is the config; it is very simple:

input {
	file {
		path => ["/E:/CrashlogCollection/*.json"]
		mode => read
		start_position => beginning
		type => "json"
		ignore_older => 0
		sincedb_path => "E:\CrashlogCollection\sincedb"
		close_older => 1
		max_open_files => 50000
	}
}
filter {
	json {
		source => "message"
	}
}
output {
	elasticsearch {
		action => index
		index => crashlog
	}
}

Thanks.

I think the problem with path is the leading slash before the drive letter; it should be:
path => ["E:/CrashlogCollection/*.json"]
or
path => "E:/CrashlogCollection/*.json" (an array is only needed if you have multiple paths or patterns)

Try a different ignore_older value. Effectively, ignore_older => 0 means "ignore files that are older than 0 seconds", i.e. everything.
ignore_older => "5 weeks" (you can use string-based durations now, e.g. "45s", "1 day")