Logs aren't read by Logstash deployed with Docker after rotation

I'm using log4j to write logs to files, and whenever a log rotates (it gets compressed to a .gz file), the new log file is never read by Logstash (version 7.17.4, on macOS 12.5).

I think it's important to track the inode numbers and the sincedb file, so here are my records.
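(To read the sincedb, I dumped it from inside the container — it's named logstash, see my compose file below. By default the file input keeps the sincedb under Logstash's data path, and the exact .sincedb_* file name is a hash, hence the glob:)

docker exec logstash sh -c 'cat /usr/share/logstash/data/plugins/inputs/file/.sincedb_*'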

The ls -li output for the log folder was:

7435564 -rw-r--r--  1 kent  staff   137279  8  9 12:00 karaf.log

and after the log rotation:

7436195 -rw-r--r--  1 kent  staff    39322  8  9 12:01 karaf_2022-08-09.7.log.gz
7436188 -rw-r--r--  1 kent  staff   339420  8  9 12:01 karaf.log

However, in the Logstash Docker container, in the folder that mounts the log folder:

before:

7430919 -rw-r--r-- 1 logstash logstash  512008 Aug  9 03:47 karaf.log

after:

7436195 -rw-r--r-- 1 logstash logstash   39322 Aug  9 04:01 karaf_2022-08-09.7.log.gz
7430919 -rw-r--r-- 1 logstash logstash  512008 Aug  9 03:47 karaf.log

Before and after rotation, the sincedb entry is the same:

7430919 0 123 512008 1660016822.58378 /usr/share/logstash/proj_log/karaf.log

I think the log isn't read because the inode number (7430919) never changes inside the container, so Logstash keeps waiting for data beyond the current byte offset (512008).
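For reference, as I understand the sincedb format, the columns are:

7430919            inode
0 123              major/minor device numbers
512008             byte offset already read
1660016822.58378   last-activity timestamp
/usr/share/logstash/proj_log/karaf.log   watched path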

Does anyone know how to solve this problem?

Here's my logstash.conf:

input {
  # tail the live log file
  file {
    path => ["/usr/share/logstash/proj_log/karaf*.log"]
    start_position => "beginning"
    # stitch continuation lines (not starting with a digit) onto the previous event
    codec => multiline {
      pattern => "^\D"
      what => "previous"
    }
  }
  # read rotated, gzipped logs once, start to finish
  file {
    path => ["/usr/share/logstash/proj_log/karaf*.log.gz"]
    start_position => "beginning"
    mode => "read"
    codec => multiline {
      pattern => "^\D"
      what => "previous"
    }
  }
}

filter {
  # split the pipe-delimited Karaf log line into fields
  grok {
    match => {
      "message" => "(?<timestamp>^\S[^\|]*\S)\s*\|\s*(?<level>\S[^\|]*\S)\s*\|\s*(?<thread>\S[^\|]*\S)\s*\|\s*(?<logger>\S[^\|]*\S)\s*\|\s*(?<bundle>\S[^\|]*\S)\s*\|\s*(?<msg>\S*.*)"
    }
    remove_field => ["message"]
  }

  # derive a stable document ID from timestamp + message, for deduplication
  fingerprint {
    concatenate_sources => true
    source => ["timestamp", "msg"]
    method => "MD5"
  }

  # use the log's own timestamp as @timestamp
  date {
    match => [ "timestamp", "ISO8601" ]
    remove_field => ["timestamp"]
    timezone => "Asia/Taipei"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # daily index; the fingerprint ID keeps re-read lines from being duplicated
    index => "karaf-%{+YYYY.MM.dd}"
    document_id => "%{fingerprint}"
  }
}
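One more thing I noticed in the read-mode input: if I read the file input docs correctly, mode => "read" defaults to file_completed_action => "delete", and since my volume is mounted read-only, that delete will fail. A sketch of a safer variant (the completed-log path is just a writable location I picked):

file {
  path => ["/usr/share/logstash/proj_log/karaf*.log.gz"]
  start_position => "beginning"
  mode => "read"
  # don't try to delete the .gz on a read-only mount; just log that it was read
  file_completed_action => "log"
  file_completed_log_path => "/usr/share/logstash/data/completed.log"
  codec => multiline {
    pattern => "^\D"
    what => "previous"
  }
}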


I just found another clue:

After the log file rotates, the contents of the log file (in my case, karaf.log) inside the Logstash Docker container aren't the same as the contents of the same file in the host folder that the container mounts.

In short, after the log rotates, the file seen through the mounted folder keeps the old contents.

Therefore, unless I restart the Logstash container, the "new" karaf.log won't be read.

I think it might be a Docker issue. Has anyone run into this problem before and knows how to solve it?
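To double-check that it's the mount and not Logstash, I compared the file on both sides after a rotation (logstash is the container name from my compose file below):

# inode and size on the host
ls -li $PROJ_DATA/log/karaf.log
# inode and size as the container sees it
docker exec logstash ls -li /usr/share/logstash/proj_log/karaf.log

The container keeps reporting the pre-rotation inode and size, which matches the sincedb never moving.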

Thanks.

Here's my logstash service in docker-compose.yml:

logstash:
  image: docker.elastic.co/logstash/logstash:7.17.4
  container_name: logstash
  links:
    - elasticsearch
  volumes:
    # pipeline configuration
    - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    # host log folder the Karaf app writes to, mounted read-only
    - $PROJ_DATA/log:/usr/share/logstash/proj_log:ro
  ports:
    - "5044:5044"

I found that running on Ubuntu 20.04 doesn't have this problem: the new log file keeps being read. Maybe running on macOS (arm64) hits some bug?

It looks like this is a Docker-on-Mac issue, not a Logstash one.

You can check this issue for more details.
