Issue with logstash

When Logstash restarts it pushes the latest logs. The rest of the time it does not push any logs.
I restarted Logstash on Feb 7, and even today I only see the Feb 7 log events in Elasticsearch. There are no errors reported in logstash-plain.log.

Logstash runs as a Windows service on a VM. I don't have any custom settings apart from the Java heap memory min and max, both set to 4G.
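For reference, a minimal sketch of those heap settings, assuming they were made in Logstash's standard jvm.options file:

# jvm.options (heap min and max, as described above)
-Xms4g
-Xmx4g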

Below is the Logstash configuration:

input {
  file {
    path => "C:/Program Files (x86)/oracle/DB/Logs/eventparser_**.log"
    type => "parser"
    start_position => "beginning"
  }
  file {
    path => "C:/Program Files (x86)/oracle/DB/Logs/server_**.log"
    type => "server"
    start_position => "beginning"
  }
  file {
    path => "C:/Program Files (x86)/oracle/oracle/Server/logs/oracle.log"
    type => "oracle"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
      charset => "ISO-8859-1"
    }
    start_position => "beginning"
  }
}

filter {
  fingerprint {
    method => "SHA1"
  }
}

filter {
  if [type] == "Replication" {
    grok {
      match => {"message" => "^%{INT:logtimestamp}%{GREEDYDATA:message}"}
      overwrite => [ "message"]
    }
    mutate {
      remove_field => [ "logtimeStamp" ]
    }
  }
  if [type] == "server" {
    grok {
      match => {"message" => "\[%{HTTPDATE:logtimeStamp}\] %{IP:hostip} %{URIPROTO:method} %{URIPATH:post-data} (?:%{NOTSPACE:queryparam}|-) %{NUMBER:useragent} %{NUMBER:responsestatus} \[%{GREEDYDATA:message}\] - %{NUMBER:time-taken:int}"}
      overwrite => [ "message"]
    }
    mutate {
      remove_field => [ "logtimeStamp" ]
    }
  }
  if [type] == "oracle" {
    mutate {
      gsub => [
        "message", "\[\] ", " ",
        "message", "\- ", " ",
        "message", "\s+", " "
      ]
    }
    mutate {
      strip => ["message"]
    }
    grok {
      match => {"message" => ["%{TIMESTAMP_ISO8601:logtimeStamp} %{WORD:loglevel} \[%{USERNAME:httpcall}] %{USERNAME:dbName} %{USERNAME:tenantGuid} %{INT:tenantId} %{INT:userId} %{USERNAME:sessionID} %{GREEDYDATA:message}",
                              "%{TIMESTAMP_ISO8601:logtimeStamp} %{WORD:loglevel} %{GREEDYDATA:message}" ]}
      overwrite => [ "message" ]
    }
    mutate {
      remove_field => [ "logtimeStamp" ]
    }
  }
  if [type] == "Replication" {
    grok {
      match => {"message" => "%{DATA:logtimeStamp} %{WORD:operation} %{GREEDYDATA:message}"}
      overwrite => [ "message"]
    }
    mutate {
      remove_field => [ "logtimeStamp" ]
    }
  }
}


output {
  if [type] == "db" {
    elasticsearch {
      ecs_compatibility => "disabled"
      hosts => ["https://${***************}:443"]
      ssl => true
      index => "oracle-%{+YYYY.MM.dd}"
      legacy_template => false
      default_server_major_version => 2
      document_id => "%{fingerprint}"
    }
  } else {
    elasticsearch {
      ecs_compatibility => "disabled"
      hosts => ["https://${*************}:443"]
      ssl => true
      index => "log-prod-%{+YYYY.MM.dd}"
      legacy_template => false
      default_server_major_version => 2
      document_id => "%{fingerprint}" 
    }
  }
}

Please help.

If you enable log.level trace then the filewatch module will tell you what it is doing: which files it is monitoring, how big they are, how much of them it has read, etc. This logging is voluminous! Read through this thread to get a feel for what it is logging.
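A minimal sketch of how to do that, assuming the stock logstash.yml used by the Windows service install:

# logstash.yml (restart the Logstash service afterwards)
log.level: trace

The same level can also be passed on the command line with --log.level=trace when running Logstash in the foreground.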

Thank you @Badger.

[2025-02-17T17:19:21,262][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_delayed_delete
[2025-02-17T17:19:21,264][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_restat_for_watched_and_active
[2025-02-17T17:19:21,264][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_rotation_in_progress
[2025-02-17T17:19:21,264][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_watched
[2025-02-17T17:19:21,264][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_active
[2025-02-17T17:19:21,264][TRACE][filewatch.tailmode.processor][main][4be0a723739974e5898e7b37e87ddd3a7119032b9f7486db037d50cc9afe7376] process_active no change {:path=>"localhost_access_log.log"}
[2025-02-17T17:19:21,572][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_closed
[2025-02-17T17:19:21,572][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_ignored
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_delayed_delete
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_restat_for_watched_and_active
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_rotation_in_progress
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_watched
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_active
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.processor][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] process_active file grew: new size is 9742205, bytes read 9742062 {:path=>"db.log"}
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.handlers.grow][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] handling: {:path=>"C:/Program Files (x86)/oracle/db/Server/logs/db.log"}
[2025-02-17T17:19:21,573][TRACE][filewatch.tailmode.handlers.grow][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] controlled_read {:iterations=>1, :amount=>143, :filename=>"db.log"}
[2025-02-17T17:19:21,573][DEBUG][filewatch.tailmode.handlers.grow][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] controlled_read get chunk
[2025-02-17T17:19:21,574][DEBUG][logstash.inputs.file     ][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] Received line {:path=>"C:/Program Files (x86)/oracle/db/Server/logs/db.log", :text=>"2025-02-17T17:19:21,523 INFO  [https-jsse-nio-443-exec-61] servlet.ReadyStatusServlet    - doGet call for ReadyStatusServlet. Calling doPost.\r"}
[2025-02-17T17:19:21,574][DEBUG][logstash.codecs.multiline][main][8fa304b7fab3c7857089c259d3ebc8974c785b8bfd996a820679bb524f7a8205] Multiline {:text=>"2025-02-17T17:19:21,523 INFO  [https-jsse-nio-443-exec-61] servlet.ReadyStatusServlet    - doGet call for ReadyStatusServlet. Calling doPost.\r", :pattern=>"^%{TIMESTAMP_ISO8601}", :match=>true, :negate=>true}

I have not set sincedb_path. Is that OK?

Do you think sincedb is the issue? I have not set sincedb_path.

The sincedb is an in-memory database used to record how much of each file logstash has read. If you set the sincedb_path option then that database is persisted across logstash restarts. It is not likely to prevent logstash reading new data in a file.
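If you do want the read positions written to a file you control, a minimal sketch of one of your file inputs with sincedb_path set (the path shown is only an illustration, any writable location works):

file {
  path => "C:/Program Files (x86)/oracle/DB/Logs/server_**.log"
  type => "server"
  start_position => "beginning"
  # illustrative location; persists how far this input has read across restarts
  sincedb_path => "C:/logstash/sincedb/server.sincedb"
}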

The log file lines that you posted show that the file input noticed that "C:/Program Files (x86)/oracle/db/Server/logs/db.log" had grown by 143 bytes and it read that data (text=>"2025-02-17T17:19:21,523 INFO.... Calling doPost.\r") and passed it to the multiline codec.

The codec matched its pattern, so now it will keep appending lines that don't match the pattern until it finds another line that does match, at which point it will flush all the lines it has joined together.

If these are low volume logs you may want to set auto_flush_interval on the codec.
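As a sketch, assuming a flush delay of a few seconds is acceptable for your events, the option goes on the codec itself:

codec => multiline {
  pattern => "^%{TIMESTAMP_ISO8601}"
  negate => true
  what => "previous"
  charset => "ISO-8859-1"
  # flush the pending multiline event if no new line arrives within 5 seconds
  auto_flush_interval => 5
}

Without it, the last event of a burst can sit in the codec until the next line matching the pattern arrives.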

In my case the multiline codec handles exceptions that span around 300 to 500 lines.
Are you asking me to set auto_flush_interval to 1? Please let me know how this is going to help me.

I have added auto_flush_interval => 1 to the input plugin. I will keep you posted on whether this helps with Logstash log ingestion. Thank you once again for your time.

@Badger please find the screenshot. This is the issue I was talking about.
24 hours after the Logstash restart, it is still only sending data from Feb 17th.