Filebeat sends logs with delay

I send logs from Filebeat to Elasticsearch directly. The service I use generates logs every second. Most of the time it works properly, but sometimes the logs reach Elasticsearch 15 minutes late or more, and I don't know why.
This is my config in Filebeat:

filebeat.inputs:
-type: log
id: my-filestream-id
enabled: true
backoff: 0.5s
scan frequency: 15
close inactive: 10m
ignore older: 30m
flush.min events: 0
paths:
- /var/log/*.log

Hi @behzad_alipoor,

There are some issues with your configuration, aside from the fact that it lost its indentation, which makes it a bit hard to reason about. Whenever you post configuration or code blocks, please use the "Preformatted Text" option.

Logs being delayed before they become searchable in Kibana/Elasticsearch might indicate that the Elasticsearch nodes are overloaded and unable to index new documents quickly enough. Do you have any idea of your throughput when you start noticing a delay in ingestion?

You mentioned the service generates logs almost every second; are those logs appended to the same file? Is there any log rotation strategy?

Assuming your YAML config is:

filebeat.inputs:
  - type: log
    id: my-filestream-id
    enabled: true
    backoff: 0.5s
    scan frequency: 15
    close inactive: 10m
    ignore older: 30m
    flush.min events: 0
    paths:
      - /var/log/*.log

close inactive and ignore older are misspelled; the words should be separated by a _, so the correct format is:

    close_inactive: 10m
    ignore_older: 30m

flush.min events is not part of the log input configuration, so it's being ignored. There is a flush.min_events setting in the memory queue; is that what you meant to set?
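If so, it belongs under the top-level queue.mem section, not under the input. A minimal sketch (the event counts here are illustrative, not recommendations):

    queue.mem:
      events: 4096
      flush.min_events: 512
      flush.timeout: 1s

Lower values of flush.min_events and flush.timeout make Filebeat publish events sooner, at the cost of smaller, more frequent bulk requests to Elasticsearch.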

The default configuration generally provides near-real-time ingestion of files, unless you have very high throughput.

One last note: the log input was deprecated a while ago, and it's recommended to use the filestream input now; it has more features and better performance.
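An equivalent of your configuration using filestream would look roughly like this (note that some option names change under filestream, e.g. close_inactive becomes close.on_state_change.inactive):

    filebeat.inputs:
      - type: filestream
        id: my-filestream-id
        enabled: true
        ignore_older: 30m
        close.on_state_change.inactive: 10m
        paths:
          - /var/log/*.log

This is only a sketch; check the filestream input reference for the full mapping of the old log input options before migrating.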