Filebeat can't send large log data to Elasticsearch

In our production environment, when I restart Elasticsearch (while Filebeat is still running), Filebeat stops sending log data to Elasticsearch. Could a single log line that is too large (about 7 KB) cause this issue?

The relevant parts of filebeat.yml:

  - type: log
    ignore_older: 48h
    close_inactive: 2h
    close_timeout: 10m
    clean_inactive: 72h

  output.elasticsearch:
    max_retries: 2
    timeout: 300

Version: filebeat 6.2.3
Operating System: CentOS release 6.8
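
For reference, a minimal sketch of the input section with the per-line size limit made explicit (the paths value is a placeholder, and max_bytes is assumed to be the relevant option; its default for the log input is 10 MB, well above 7 KB):

  - type: log
    paths:
      - /var/log/app/access.log   # placeholder path for illustration
    max_bytes: 10485760           # default per-line limit (10 MB = 10485760 bytes)
    ignore_older: 48h
    close_inactive: 2h
    close_timeout: 10m
    clean_inactive: 72h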

7 KB?

What do you see in the logs or stdout that indicates Filebeat is unable to send to ES?
Has the Elasticsearch output, such as host and port, been uncommented and set correctly?

The logs don't show any errors, and the output settings are correct. Filebeat sends new, smaller access logs successfully. I think a low max_retries value may be the cause of this issue; I will set max_retries to a larger number and hope that works.
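
A minimal sketch of that change (the host is a placeholder, and whether a higher max_retries actually changes the behavior here is an assumption to verify against the Filebeat docs):

  output.elasticsearch:
    hosts: ["http://localhost:9200"]  # placeholder; use the real Elasticsearch address
    max_retries: 10                   # raised from 2 as an experiment
    timeout: 300
    bulk_max_size: 50                 # default batch size, shown for context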

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.