Once in a while, Filebeat loses the first character while collecting logs

Once in a while, Filebeat loses the first character of a message while collecting logs. For example, every message starts with a timestamp string, but some characters are missing when I retrieve the message from Elasticsearch.


In fact, the message starts with the string "2019-01-20 05:28:21,343Z", the process prints the log without any error, and the message is complete in the log file.
Filebeat collects the log and sends it to Logstash, and Logstash sends it to Elasticsearch.
Please tell me what problem might have happened and how to solve it.
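One way to narrow this down is to double-check the raw file itself: scan it for event lines that do not begin with a timestamp. A minimal sketch, using sample text in place of the real log content (note that legitimate continuation lines of multiline events, such as stack traces, will also show up, so inspect the hits rather than just counting them):

```python
import re

# Sample lines stand in for the content of the real log file
# (/opt/iot/cig/karaf/data/log/*.log in the config below).
lines = [
    "2019-01-20 05:28:21,343Z INFO intact line",
    "019-01-20 05:28:22,001Z INFO line missing its first character",
]

# An event line should begin with "yyyy-mm-dd hh:mm:ss".
timestamp_start = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")
damaged = [line for line in lines if not timestamp_start.match(line)]
print(len(damaged))  # → 1
```

If no damaged event lines exist in the file, the truncation is happening somewhere in the shipping pipeline rather than at the source.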

Can you share your filebeat config?

Do you do any processing in logstash? Is there a chance the character can get lost in logstash?

OK, as follows:

- input_type: log
  enabled: true
  paths:
    - /opt/iot/cig/karaf/data/log/*.log
  fields:
    iotteam: vehiclesuit
    iotservice: cig
    logtype: cig-log
  fields_under_root: true
  max_bytes: 512000
  close_timeout: 300m
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  spool_size: 1024
  idle_timeout: 10s
  registry_file: registry
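One side effect worth noting: with `multiline.negate: true` and `multiline.match: after`, any line that does *not* start with a date is appended to the previous event. A minimal sketch of that behavior (my own simulation, not Filebeat code) shows that a line whose first character was lost silently merges into the prior event instead of surfacing as a broken one:

```python
import re

# The multiline pattern from the Filebeat config above.
pattern = re.compile(r"^[0-9]{4}-[0-9]{2}-[0-9]{2}")

events = []
for line in [
    "2019-01-20 05:28:21,343Z INFO first event",
    "019-01-20 05:28:22,001Z INFO event with lost first character",
]:
    if pattern.match(line):
        events.append(line)        # starts a new event
    else:
        events[-1] += "\n" + line  # treated as a continuation line

print(len(events))  # → 1: the damaged line was merged, not emitted separately
```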

Yes, I extract the timestamp from the log and use it to overwrite Filebeat's timestamp. The Logstash filter config:

    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logTimestamp}" }
    }
    date { match => ["logTimestamp", "ISO8601"] }
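When the first character is lost, that grok match should fail, because `TIMESTAMP_ISO8601` needs four consecutive year digits; Logstash then tags the event `_grokparsefailure`, so searching for that tag in Elasticsearch is one way to find the affected events. A rough approximation of the pattern with a plain regex (not grok's exact definition) illustrates this:

```python
import re

# Simplified stand-in for grok's TIMESTAMP_ISO8601 pattern.
iso8601 = re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?Z?")

intact = "2019-01-20 05:28:21,343Z some log message"
damaged = intact[1:]  # first character lost in transit

print(bool(iso8601.search(intact)))   # → True
print(bool(iso8601.search(damaged)))  # → False: grok would fail on this line
```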

Please format logs and configs using the </> button in the editor.

Which beat version are you using?

Can you share your complete config file? The one you posted doesn't seem to be either complete or correct.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.