Filebeat, File was truncated. Begin reading file from offset 0

Filebeat version: 5.1.2
OS: Debian Jessie

I am trying to set up an IIS log dashboard for the company I work for.
So I set up a Filebeat > Logstash > Elasticsearch > Grafana pipeline.
For a few days I thought everything was working as intended: I got pretty graphs. But after actually analyzing the data and logs I found out the data is duplicating. At the start of each day or hour (depending on what I set the IIS log schedule to) I get correct data, but after about 20 minutes the data duplicates. At that point I get the following message in my logs.

2017-02-09T15:20:12+01:00 INFO File was truncated. Begin reading file from offset 0: /

This repeats every 20-30 minutes.
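The message above comes from Filebeat's truncation check: it keeps a byte offset per harvested file, and a file whose reported size is smaller than that offset is treated as truncated and re-read from the start. A minimal shell sketch of that check (the file name and offset are made up for illustration; a stale size report from a network mount looks the same to the reader as a real truncation):

```shell
# Create a 12-byte log and pretend we already shipped all 12 bytes.
printf 'line1\nline2\n' > app.log
OFFSET=12

# Rewrite the file shorter. To a reader comparing size against its
# saved offset, this is indistinguishable from a stale/cached size
# reported by a flaky network mount.
printf 'line1\n' > app.log

SIZE=$(wc -c < app.log)
if [ "$SIZE" -lt "$OFFSET" ]; then
    echo "File was truncated. Begin reading file from offset 0"
    OFFSET=0
fi
```

Once the offset is reset to 0, everything already shipped is shipped again, which is exactly the duplication seen in the graphs.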

Everything I have tried so far has come up short.

What is the log rotation algorithm you are using?
Can you share your config file?

We are using the following logging settings in IIS.


We share that folder over the network and mount it to /home/Domain/
The schedule was set to daily before.
And this is the config file we are using for filebeat.

http://pastebin.com/Q2FPynBR

So the log volume you are reading from is a shared volume? We strongly recommend not to use network volumes; install Filebeat directly on the edge nodes instead.

I have the same problem. I'm using Filebeat on a Raspberry Pi to collect logs from network volumes. My access to the edge nodes is very limited, but the logs are shared on these volumes. Any hints on my config?

```yaml
filebeat.prospectors:
- input_type: log
  paths:
  - "/home/dataloggr/pc/172_22_1_81/logs/Tasks/*.log"
  exclude_files:
  - "StatePortLog*.*"
  - "ApplicationServer*.*"
  - "jvm_gctrace.*"
  - ".lck"
  - "osgi.*"
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  tail_files: true

name: 0000000001de8394-beats

output.kafka:
  enabled: true
  hosts:
  - "hostname1:9092"
  - "hostname2:9092"
  topic: TEST_TOPIC
  version: 0.10.0
  worker: 4
  max_retries: -1
  compression: none
  required_acks: 1
  flush_interval: 1s
  client_id: 0000000001de8394

path.data: /var/filebeat/data

logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
```

@chris060986 What do you mean exactly by the same problem? Do you also get file truncated messages?

Hi,
yes, I also get file truncated messages, but more often than every 20 to 30 minutes. I've done some investigation on this problem and I figured out the problem isn't Filebeat; I think it's the way the volume is mounted. With the Unix command tail -f or tail -F I also receive the file truncated message, and when I pipe the output of tail into a file there are duplicates in it as well, without the log files being rotated or anything else.

So maybe it's the implementation of the network protocol, or some inconsistency between the Unix filesystem (where Filebeat is running) and Windows (where the logs are written).
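The tail experiment described above can be reproduced locally, without any network share, because GNU tail -F does the same size-versus-offset comparison: truncating a followed file makes tail report the truncation and re-read from offset 0, so the earlier lines land in the capture twice (file names here are placeholders):

```shell
# Follow a file and capture stdout/stderr separately.
printf 'one\ntwo\n' > demo.log
tail -n +1 -F demo.log > out.log 2> err.log &
TAIL_PID=$!
sleep 1

# Truncate, then rewrite the same content a moment later -- the same
# sequence a log writer (or a stale mount) can present to the reader.
: > demo.log
sleep 1
printf 'one\ntwo\n' >> demo.log
sleep 2
kill "$TAIL_PID"

grep 'truncated' err.log    # GNU tail reports the reset on stderr
sort out.log | uniq -d      # lines re-emitted after the reset
```

If the same duplicates show up when tailing the Samba-mounted path while nothing actually rewrites the file, the mount (not Filebeat) is the component misreporting the file size.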

@chris060986 What is the shared file system you are using? Do you have a flaky network connection?

@ruflin
The network connection is very stable. Both the log producer and the log collector are on the same local 1 Gb Ethernet. The producer is a Windows XP system and the logs are mounted into a Raspbian system via Samba. The mounted volume is read-only.

Shared file systems are tricky, as content is sometimes cached, among other interesting quirks. That is why we strongly recommend installing Filebeat on the edge nodes. Can you try to install it directly on the host machine?
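As a hedged sketch of what that would look like: the same prospector from the config above, run directly on the producer and reading the local path instead of the Samba mount (the `C:\` path is a placeholder, and whether Filebeat 5.x runs on the specific Windows version in question would need to be checked):

```yaml
# Hypothetical edge-node config: same prospector, local path.
filebeat.prospectors:
- input_type: log
  paths:
  - 'C:\path\to\Tasks\*.log'   # placeholder: local log directory on the producer
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

output.kafka:
  enabled: true
  hosts: ["hostname1:9092", "hostname2:9092"]
  topic: TEST_TOPIC
  version: 0.10.0
```

Reading the file through the local filesystem removes the stale-size reports from the network mount, which is what triggers the spurious truncation resets.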

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.