Filebeat not sending new logs to ELK server until restart


(V Sai Ram) #1

Hi,

My Filebeat version: 1.2.3

My filebeat.yml is pretty straightforward:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/log/mesos/*
      input_type: log
      document_type: log
      scan_frequency: 10s
  registry_file: /var/lib/filebeat/registry
output:
  elasticsearch:
    hosts: ["localhost:9200"]
  logstash:
    hosts: ["10.157.6.241:5044"]
    bulk_max_size: 1024
    tls:
       certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
       insecure: true
shipper:
logging:
  files:
      rotateeverybytes: 10485760

I am not able to see the "new" logs on the Elasticsearch/Logstash/Kibana server (which I hosted on another instance) until I restart the Filebeat service on the running instance. Please let me know what I have to do. I am running CentOS 7.2.


(ruflin) #2

If you modify the config file while Filebeat is already running, your config changes will not have any effect; Filebeat needs to be restarted to pick them up. So this is the expected behaviour.

Or were you referring to no new logs being sent?


(V Sai Ram) #3

I am referring to the new logs that have to be sent from /var/log/messages. Let me know if I have to give more details about the setup. [Edited the question]


(Steffen Siering) #4

Why have you configured both the elasticsearch and logstash outputs?

Can you share filebeat debug logs?


(V Sai Ram) #5

2016/09/08 11:41:10.600627 util.go:20: DBG full line read
2016/09/08 11:41:10.600695 util.go:20: DBG full line read
2016/09/08 11:41:10.600770 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:11.601025 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:12.589422 prospector.go:185: DBG Start next scan
2016/09/08 11:41:12.589470 prospector.go:261: DBG scan path /var/log/messages
2016/09/08 11:41:12.589528 prospector.go:275: DBG Check file for harvesting: /var/log/messages
2016/09/08 11:41:12.589545 registrar.go:175: DBG Same file as before found. Fetch the state.
2016/09/08 11:41:12.589564 prospector.go:418: DBG Update existing file for harvesting: /var/log/messages
2016/09/08 11:41:12.589576 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/messages
2016/09/08 11:41:13.601260 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:17.601478 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:22.589770 prospector.go:185: DBG Start next scan
2016/09/08 11:41:22.589818 prospector.go:261: DBG scan path /var/log/messages
2016/09/08 11:41:22.589849 prospector.go:275: DBG Check file for harvesting: /var/log/messages
2016/09/08 11:41:22.589871 registrar.go:175: DBG Same file as before found. Fetch the state.
2016/09/08 11:41:22.589884 prospector.go:418: DBG Update existing file for harvesting: /var/log/messages
2016/09/08 11:41:22.589894 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/messages
2016/09/08 11:41:25.601956 util.go:20: DBG full line read
2016/09/08 11:41:25.602013 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:26.602258 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:28.144336 client.go:297: DBG ES Ping(url=http://localhost:9200, timeout=1m30s)
2016/09/08 11:41:28.145105 client.go:302: DBG Ping request failed with: Head http://localhost:9200: dial tcp [::1]:9200: getsockopt: connection refused
2016/09/08 11:41:28.145123 single.go:126: INFO Connecting error publishing events (retrying): Head http://localhost:9200: dial tcp [::1]:9200: getsockopt: connection refused
2016/09/08 11:41:28.145129 single.go:152: INFO send fail
2016/09/08 11:41:28.145136 single.go:159: INFO backoff retry: 1m0s
2016/09/08 11:41:28.602547 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:32.590180 prospector.go:185: DBG Start next scan
2016/09/08 11:41:32.590250 prospector.go:261: DBG scan path /var/log/messages
2016/09/08 11:41:32.590276 prospector.go:275: DBG Check file for harvesting: /var/log/messages
2016/09/08 11:41:32.590291 registrar.go:175: DBG Same file as before found. Fetch the state.
2016/09/08 11:41:32.590300 prospector.go:418: DBG Update existing file for harvesting: /var/log/messages
2016/09/08 11:41:32.590311 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/messages
2016/09/08 11:41:32.602788 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:40.603044 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:42.590475 prospector.go:185: DBG Start next scan
2016/09/08 11:41:42.590557 prospector.go:261: DBG scan path /var/log/messages
2016/09/08 11:41:42.590593 prospector.go:275: DBG Check file for harvesting: /var/log/messages
2016/09/08 11:41:42.590614 registrar.go:175: DBG Same file as before found. Fetch the state.
2016/09/08 11:41:42.590629 prospector.go:418: DBG Update existing file for harvesting: /var/log/messages
2016/09/08 11:41:42.590649 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/messages
2016/09/08 11:41:50.603354 reader.go:138: DBG End of file reached: /var/log/messages; Backoff now.
2016/09/08 11:41:52.590813 prospector.go:185: DBG Start next scan
2016/09/08 11:41:52.590874 prospector.go:261: DBG scan path /var/log/messages
2016/09/08 11:41:52.590904 prospector.go:275: DBG Check file for harvesting: /var/log/messages
2016/09/08 11:41:52.590920 registrar.go:175: DBG Same file as before found. Fetch the state.
2016/09/08 11:41:52.590931 prospector.go:418: DBG Update existing file for harvesting: /var/log/messages
2016/09/08 11:41:52.590942 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/messages
2016/09/08 11:42:00.603687 util.go:20: DBG full line read


(V Sai Ram) #6

Cool, found the problem. I removed the section for the other output that I don't require (the elasticsearch one). I am able to see the new logs now. Thanks.
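For anyone hitting the same issue, the output section then looks roughly like this (the elasticsearch block from the config above removed, everything else unchanged):

```yaml
output:
  logstash:
    hosts: ["10.157.6.241:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      insecure: true
```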


(V Sai Ram) #7

Hi Team,

I have added a new node and installed Filebeat on it. It needs to send logs from the node to the Logstash server installed in another cloud. I am not able to see the logs in the Kibana GUI.

Here is the straightforward filebeat.yml on the new node:

filebeat:
  prospectors:
    -
      paths:
        - /ephemeral/logs/job-server/*.log
      input_type: log
      document_type: log
      scan_frequency: 10s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["10.157.6.241:5044"]
    bulk_max_size: 1024
    tls:
       certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
       insecure: true
shipper:
logging:
  files:
      rotateeverybytes: 10485760

With this filebeat.yml in place, here are the debug logs for reference:

2016/09/19 11:29:57.083112 client.go:100: DBG connect
2016/09/19 11:29:57.083925 client.go:146: DBG Try to publish 1024 events to logstash with window size 5
2016/09/19 11:29:57.086134 client.go:105: DBG close connection
2016/09/19 11:29:57.086208 client.go:124: DBG 0 events out of 1024 events sent to logstash. Continue sending ...
2016/09/19 11:29:57.086225 single.go:77: INFO Error publishing events (retrying): EOF
2016/09/19 11:29:57.086240 single.go:154: INFO send fail
2016/09/19 11:29:57.086249 single.go:161: INFO backoff retry: 2s
2016/09/19 11:29:59.086804 client.go:100: DBG connect
2016/09/19 11:29:59.087764 client.go:146: DBG Try to publish 1024 events to logstash with window size 2
2016/09/19 11:29:59.093366 client.go:105: DBG close connection
2016/09/19 11:29:59.093555 client.go:124: DBG 0 events out of 1024 events sent to logstash. Continue sending ...
2016/09/19 11:29:59.093905 single.go:77: INFO Error publishing events (retrying): EOF
2016/09/19 11:29:59.093916 single.go:154: INFO send fail
2016/09/19 11:29:59.093927 single.go:161: INFO backoff retry: 4s
2016/09/19 11:30:03.094124 client.go:100: DBG connect
2016/09/19 11:30:03.094933 client.go:146: DBG Try to publish 1024 events to logstash with window size 1
2016/09/19 11:30:03.097109 client.go:105: DBG close connection
2016/09/19 11:30:03.097158 client.go:124: DBG 0 events out of 1024 events sent to logstash. Continue sending ...
2016/09/19 11:30:03.097172 single.go:77: INFO Error publishing events (retrying): EOF
2016/09/19 11:30:03.097179 single.go:154: INFO send fail
2016/09/19 11:30:03.097187 single.go:161: INFO backoff retry: 8s
2016/09/19 11:30:05.257910 prospector.go:185: DBG Start next scan
2016/09/19 11:30:05.257963 prospector.go:261: DBG scan path /ephemeral/log/job-server/*.log
2016/09/19 11:30:05.258303 prospector.go:275: DBG Check file for harvesting: /ephemeral/log/job-server/spark-job-server.log

Let me know if anything is required from my side.


(Steffen Siering) #8

Are your output and Logstash configured correctly? Anything in the logs? Any devices (proxy, firewall, NAT) in between? The underlying TCP connection is being closed by someone.


(V Sai Ram) #9

Can you suggest how to check the questions mentioned above? When I output the logs to a local file, I am able to see the logs getting parsed, like below:
{"@timestamp":"2016-09-19T15:35:22.271Z","beat":{"hostname":"spark-analytics-1","name":"spark-analytics-1"},"count":1,"fields":null,"input_type":"log","message":"[2016-09-01 22:11:00,879] INFO Cluster(akka://JobServer) [] [Cluster(akka://JobServer)] - Cluster Node [akka.tcp://JobServer@127.0.0.1:40556] - Leader is moving node [akka.tcp://JobServer@127.0.0.1:46120] to [Up]","offset":82020,"source":"/ephemeral/log/job-server/spark-job-server.log","type":"log"}
{"@timestamp":"2016-09-19T15:35:22.271Z","beat":{"hostname":"spark-analytics-1","name":"spark-analytics-1"},"count":1,"fields":null,"input_type":"log","message":"[2016-09-01 22:11:00,993] INFO AkkaClusterSupervisorActor [] [akka://JobServer/user/context-supervisor] - Received identify response, attempting to initialize context at akka.tcp://JobServer@127.0.0.1:46120/user/*","offset":82234,"source":"/ephemeral/log/job-server/spark-job-server.log","type":"log"}


(Steffen Siering) #10

These logs are completely unrelated to the EOF.

Maybe let's start with telnet 10.157.6.241 5044 first. If you can connect, let's continue with checking that TLS is functioning correctly via curl or openssl s_client. Sometimes connections are just dropped silently (stupid SSL libs) if some validation fails during the SSL handshake.
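A quick sketch of those checks (the IP, port, and CA path are taken from the filebeat.yml above):

```sh
# Plain TCP reachability check: can we open a socket to the beats port at all?
telnet 10.157.6.241 5044

# TLS handshake check against the same port, using the same CA file
# that filebeat's certificate_authorities points at
openssl s_client -connect 10.157.6.241:5044 \
  -CAfile /etc/pki/tls/certs/logstash-forwarder.crt
```

If telnet connects but the s_client handshake fails, the EOF is most likely a TLS mismatch between the two ends.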


(V Sai Ram) #11

telnet 10.157.6.241 5044
Trying 10.157.6.241...
Connected to 10.157.6.241.
Escape character is '^]'.

Telnet is working.

I disabled TLS (by commenting out the tls section in filebeat.yml). Still, the debug logs look the same.


(Steffen Siering) #12

TLS must be enabled (or disabled) on both endpoints.
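For reference, on the Logstash side TLS is enabled on the beats input roughly like this (a sketch; the certificate must be the one filebeat's certificate_authorities trusts, and the key path here is illustrative):

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_certificate_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```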


(V Sai Ram) #13

Does the other end mean the Logstash server? If so, I have not given any tls options in the beats input conf of Logstash.

[root@coord-1 conf.d]# cat beats-input.conf
input {
  beats {
    port => 5044
  }
}
[root@coord-1 conf.d]#


(system) #14

This topic was automatically closed after 21 days. New replies are no longer allowed.