Logstash stopped working (sincedb is not updated) after upgrade from 6.2.4 → 6.4.0 (update: 6.5 doesn't work either)

Okay, I think(!) I got it working with local Windows folders. I am checking some more. Do you want me to send you the logs with the trace?

Yes, but for the non-working SMB case. Scrub any sensitive info first.

Okay, I'll get to it. By the way, the local directory doesn't seem to work either. The logs look more promising - they do show some collection activity (which doesn't happen with the network drives) - but nothing gets written to the sincedb; it stays blank.

Post logs for any cases that are not working as you'd expect. You can use Pastebin or GitHub Gists as well - drop links in the post.

Working on it.

Okay, it took me some time to get curl working on Windows. It says both do not exist - neither grok nor the date filter.

Here is what the GET gives me:

"host" : "win8nlpnew-001",
"version" : "6.4.0",
"http_address" : "127.0.0.1:9600",
"id" : "f31c8224-4aff-4b6c-bb2a-3c3e2e075561",
"name" : "win8nlpnew-001",
"loggers" : {
"filewatch.discoverer" : "WARN",
"filewatch.observingread" : "WARN",
"filewatch.readmode.processor" : "WARN",
"filewatch.sincedbcollection" : "WARN",
"logstash.agent" : "WARN",
"logstash.api.service" : "WARN",
"logstash.codecs.plain" : "WARN",
"logstash.config.source.local.configpathloader" : "WARN",
"logstash.config.source.multilocal" : "WARN",
"logstash.config.sourceloader" : "WARN",
"logstash.configmanagement.extension" : "WARN",
"logstash.filters.json" : "WARN",
"logstash.inputs.file" : "WARN",
"logstash.instrument.periodicpoller.deadletterqueue" : "WARN",
"logstash.instrument.periodicpoller.jvm" : "WARN",
"logstash.instrument.periodicpoller.os" : "WARN",
"logstash.instrument.periodicpoller.persistentqueue" : "WARN",
"logstash.modules.scaffold" : "WARN",
"logstash.modules.xpackscaffold" : "WARN",
"logstash.monitoringextension" : "WARN",
"logstash.monitoringextension.pipelineregisterhook" : "WARN",
"logstash.outputs.elasticsearch" : "WARN",
"logstash.pipeline" : "WARN",
"logstash.plugins.registry" : "WARN",
"logstash.runner" : "WARN",
"org.logstash.FieldReference" : "WARN",
"org.logstash.Logstash" : "WARN",
"org.logstash.execution.AbstractPipelineExt" : "WARN",
"org.logstash.execution.ShutdownWatcherExt" : "WARN",
"org.logstash.instrument.metrics.gauge.LazyDelegatingGauge" : "WARN",
"org.logstash.plugins.pipeline.PipelineBus" : "WARN",
"org.logstash.secret.store.SecretStoreFactory" : "WARN",
"slowlog.logstash.codecs.plain" : "WARN",
"slowlog.logstash.filters.json" : "WARN",
"slowlog.logstash.inputs.file" : "WARN",
"slowlog.logstash.outputs.elasticsearch" : "WARN"

Ha! I failed to mention that those "instructions" I FYI'd are generic ones I post here and in GitHub issues.

You need to do:

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
  "filewatch.discoverer" : "TRACE",
  "filewatch.observingread" : "TRACE",
  "filewatch.readmode.processor" : "TRACE",
  "filewatch.sincedbcollection" : "TRACE",
  "logstash.inputs.file" : "TRACE"
}
'

Hi,
Thanks!
I had to fly home on Thursday, so I'm continuing the work today (in Israel, weekends are Friday-Saturday).
Actually it needs to be logger.filewatch.discoverer etc., but never mind, that was easy to figure out :slightly_smiling_face:
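For reference, what I ended up running was essentially your command with the logger. prefix added to each key:

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
  "logger.filewatch.discoverer" : "TRACE",
  "logger.filewatch.observingread" : "TRACE",
  "logger.filewatch.readmode.processor" : "TRACE",
  "logger.filewatch.sincedbcollection" : "TRACE",
  "logger.logstash.inputs.file" : "TRACE"
}
'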
The logs can be found here:
https://drive.google.com/open?id=1mJEs8Z-gz5c_oWzLCHUEAykAmCYx4VFz
I created two log files - one for the local directory, the other for the network drive. With the trace I saw that on the network drive the files start and stay in "ignored" mode. On the local directory it seems like the plugin understands what needs to be read, but there is a crash inside, maybe because of the empty sincedb file; I am not sure.

Will be happy to hear your thoughts

Nice to see that you got that logging level stuff sorted.
However, I can see the Logstash folder you shared but not the files in it.

Oh... let me have a look

Sorry, my bad. The files are now at the same link.

For the first one I think you should remove the leading / from the path.

Pipeline_id:crashlog_beats
  Plugin: <LogStash::Inputs::File mode=>"read", start_position=>"beginning", path=>["/E:/CrashlogCollection/*.json"]
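Something like this should work (a sketch reconstructed from the log line above; only the path changes):

input {
  file {
    mode => "read"
    start_position => "beginning"
    path => ["E:/CrashlogCollection/*.json"]
  }
}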

For the second, the errors are due to the attempts to enable trace having a curl command appended to the JSON. Once trace is enabled, things look normal.

Source: (byte)"{"filewatch.discoverer" : "TRACE","filewatch.observingread" : "TRACE","filewatch.readmode.processor" : "TRACE","filewatch.sincedbcollection" : "TRACE","logstash.inputs.file" : "TRACE"}curl -XPUT localhost:9600/_node/logging?pretty -H Content-Type:"

Yet it does not work... I see that there are events any time I add a file, but the sincedb remains empty.

In that log file the size of the file never changes - it is always 317155 bytes. Having @sincedb_key 'unknown 0 0' for the second file is a bug.

[2019-02-24T12:46:20,063][TRACE][filewatch.discoverer ] discover_files {"count"=>2}
[2019-02-24T12:46:20,065][TRACE][filewatch.discoverer ] discover_files handling: {"new discovery"=>false, "watched_file details"=>"<FileWatch::WatchedFile: @filename='Test-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata - Copy.json', @state='ignored', @recent_states='[:watched, :watched]', @bytes_read='317155', @bytes_unread='0', current_size='317155', last_stat_size='317155', file_open?='false', @initial=true, @sincedb_key='3669201093-79991-5308416 0 0'>"}
[2019-02-24T12:46:20,068][TRACE][filewatch.discoverer ] discover_files handling: {"new discovery"=>false, "watched_file details"=>"<FileWatch::WatchedFile: @filename='Test-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json', @state='ignored', @recent_states='[:watched, :watched]', @bytes_read='317155', @bytes_unread='0', current_size='317155', last_stat_size='317155', file_open?='false', @initial=true, @sincedb_key='unknown 0 0'>"}
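For context on those keys: on Windows the first field of the sincedb_key is synthesized from the volume serial number plus the kernel file index (there are no real inodes), which is why it looks like three dash-joined numbers; 'unknown' means that lookup failed. A healthy sincedb entry for the first file would look roughly like this (the position and timestamp columns here are illustrative, not taken from your logs):

3669201093-79991-5308416 0 0 317155 1551000000.123456 E:/CrashlogCollection/Test-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata - Copy.json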

Correct, my files never change in size - that's why I was really looking forward to the "read" mode. But I did add a file to see how it behaves; also, since there were files there and the sincedb was empty, I expected it to collect the files that had not been collected before.

By the time trace was enabled it had read both files. You would need to add a file after enabling trace.
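For example, from a cmd prompt (both paths are placeholders of mine; the target just needs to match the watched glob):

copy "C:\staging\crash-sample.json" "E:\CrashlogCollection\new-crash-sample.json"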

About the @sincedb_key = 'unknown 0 0' of the file Test-20_0_0_0-gsbrad3_10_147_4_51-crash_18b8_2019-02-19_13-58-32-791.crashdata.json...
In another user's issue I saw that if they edited the file (with Notepad++) by adding a space, saving, removing the space and saving again, the file input was able to read the "inode" properly the second time. Do you see the same behaviour?
It would be really helpful to find what is different before and after the edit (permissions?) and what differs between this file (unedited) and the other file.

I don't think it is a permissions issue, as it started when we tried it with our existing sincedb, which runs on the same computer under Logstash 6.2.4 and works there; the same setup doesn't work with 6.4... so the file is writable.
The empty sincedb was just for the test, to give you a smaller log.

At the same time, there seems to be a difference between files that give "unknown 0 0" and others that don't. At first I thought it might have something to do with two processes accessing the file at the same time (e.g. locking), but the other case and yours both involve files that are "complete".

To test:

  1. Start LS with a new sincedb file and a file glob pattern that would discover the file, but with the file not yet in place (see the config sketch after this list)
  2. Drop a copy of the file in place
  3. Watch the logs with discoverer at TRACE and see what "inode" it detects
  4. Stop Logstash
  5. Edit the copied file as mentioned before
  6. Repeat steps 1 through 4
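For step 1, a minimal pipeline along these lines would do (the sincedb_path value and the stdout output are placeholders of mine; the glob matches your earlier config):

input {
  file {
    mode => "read"
    path => ["E:/CrashlogCollection/*.json"]
    sincedb_path => "E:/tmp/test.sincedb"
  }
}
output { stdout { codec => rubydebug } }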

Hi @guyboertje,
Just to touch base - I've had some busy days; I hope to get to testing it soon.