Filebeat Needs a Restart to Send Logs

Hello all,
I'm using Filebeat for the first time, and I have a problem with it:
it does not push logs to Logstash continuously. I have to restart
Filebeat manually every time to get the logs shipped from Filebeat
to Logstash.
Please help me solve this.

This is my Logstash config:


input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
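For reference, the two date patterns in that filter cover syslog timestamps with single-digit days ("MMM  d", two spaces) and double-digit days ("MMM dd"). A minimal Python sketch of the same parsing, for illustration only (the default year is an assumption, since syslog timestamps omit the year):

```python
from datetime import datetime

def parse_syslog_timestamp(ts, year=2016):
    """Parse a syslog-style timestamp like "Mar  7 06:25:14".

    Python's strptime treats a space in the format as "one or more
    whitespace characters", so a single "%b %d %H:%M:%S" pattern covers
    both of the Logstash date patterns above. The year is assumed,
    since syslog timestamps do not include one.
    """
    parsed = datetime.strptime(ts, "%b %d %H:%M:%S")
    return parsed.replace(year=year)

print(parse_syslog_timestamp("Mar  7 06:25:14"))   # single-digit day
print(parse_syslog_timestamp("Mar 17 06:25:14"))   # double-digit day
```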

My Filebeat config:

# List of prospectors to fetch data.
# Each - is a prospector. Below are the prospector specific configurations
  # Paths that should be crawled and fetched. Glob based paths.
  # To fetch all ".log" files from a specific level of subdirectories
  # /var/log/*/*.log can be used.
  # For each file found under this path, a harvester is started.
  # Make sure no file is defined twice as this can lead to unexpected behaviour.
  paths:
    - /var/log/*.log
    - /var/log/httpd/*_log

    #- c:\programdata\elasticsearch\logs\*

  # List of root certificates for HTTPS server verifications
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

How I know Filebeat isn't shipping logs continuously: there is a new entry in my access_log, but when I look in Kibana there is nothing. After I restart Filebeat, the log is sent and I can see it in Kibana.

This is my Kibana:

Is the formatting of your filebeat.yml file correct? It seems to be missing any output configuration, and even the rest looks pretty off.

That's not all, @steffens.

This is my output:

### Logstash as output
# The Logstash hosts
hosts: [""]

# Number of workers per Logstash host.
#worker: 1

# The maximum number of events to bulk into a single batch window. The
# default is 2048.
bulk_max_size: 2048

That's all I changed; the rest is still the same:

input_type: log
document_type: syslog

Can you share your full config file as a gist so we can confirm that the indentation is correct? What do you see in the log files?
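While waiting on the gist: a common cause of "off-looking" filebeat.yml files is tabs or uneven indentation, which YAML rejects or misreads. A small, hypothetical checker sketch (the function name and heuristics are made up for illustration; a real YAML parser is the authoritative check):

```python
def check_yaml_indentation(text):
    """Flag common filebeat.yml pitfalls: tabs in indentation (YAML
    forbids tabs) and indents that are not a multiple of two spaces.

    Returns a list of (line_number, problem) tuples.
    """
    problems = []
    for lineno, line in enumerate(text.splitlines(), 1):
        stripped = line.lstrip()
        # Skip blank lines and comment-only lines.
        if not stripped or stripped.startswith("#"):
            continue
        indent = line[: len(line) - len(stripped)]
        if "\t" in indent:
            problems.append((lineno, "tab used for indentation"))
        elif len(indent) % 2 != 0:
            problems.append((lineno, "indent is not a multiple of two spaces"))
    return problems

sample = "filebeat:\n  prospectors:\n\t- paths:\n"
print(check_yaml_indentation(sample))  # flags the tab on line 3
```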

Thank you guys, I worked around my Filebeat problem:
I use crontab to schedule a periodic restart of Filebeat.
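For anyone copying this workaround, a crontab entry along these lines restarts Filebeat on a schedule. The five-minute interval and the service command are assumptions (adjust for your init system), and note this papers over the underlying problem rather than fixing it:

```shell
# Hypothetical crontab entry: restart Filebeat every 5 minutes.
# Interval and service path are assumptions; adjust for your system.
*/5 * * * * /sbin/service filebeat restart
```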


I still don't understand why you need to restart filebeat.

Because logs don't get from the client server to Logstash on the ELK server continuously, I need to restart Filebeat.

I use Kibana as the web interface. When I open Apache on the client server, a new entry appears in /var/log/httpd/access_log, but Kibana doesn't show anything. After I restart Filebeat, Kibana shows the log.

Filebeat is designed to always send the most recent lines to elasticsearch / logstash.
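Conceptually, Filebeat does this by remembering a per-file offset (its registry) and shipping only what was appended after that offset. A toy Python sketch of that idea, for illustration only (the file name and the dict-shaped registry are made up, not Filebeat's actual format):

```python
import os

def read_new_lines(path, registry):
    """Return only the lines appended since the last read.

    `registry` maps file path -> byte offset of the last read position,
    mimicking how a log shipper resumes where it left off.
    """
    offset = registry.get(path, 0)
    with open(path) as f:
        f.seek(offset)
        lines = f.readlines()
        registry[path] = f.tell()  # persist the new offset
    return lines

registry = {}
with open("access_log.tmp", "w") as f:
    f.write("first line\n")
print(read_new_lines("access_log.tmp", registry))  # ships "first line"
with open("access_log.tmp", "a") as f:
    f.write("second line\n")
print(read_new_lines("access_log.tmp", registry))  # ships only "second line"
os.remove("access_log.tmp")
```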

If you restart Filebeat, does it send all the new lines once and then stop working until you restart it the next time?

Hmm, yes. So what should I do, bro?
I'm a newbie at this :frowning:

Best is to check the Filebeat and Logstash logs for any additional info on why it is hanging. Can you also share details on the question above: "If you restart Filebeat, does it send all the new lines once and then stop working until you restart it the next time?"

This topic was automatically closed after 21 days. New replies are no longer allowed.