Filebeat is not monitoring logs on CentOS 7?


(Atharva Pandey) #1

I just installed the ELK stack and everything seems to be working fine, but Filebeat is not showing me logs beyond the date of installation, and it is only picking up yum.log even though I specified more paths inside filebeat.yml:

# Paths that should be crawled and fetched. Glob based paths.

- /var/log/messages
- /var/log/yum.log
- /var/log/secure

#- /var/log/*
#- c:\programdata\elasticsearch\logs\*

I only see logs up to July 7th and nothing after.

I am also unable to see any logs under the normal logstash-* index pattern.

Is there something I messed up during installation?


(Atharva Pandey) #2

Bump


(Rachel Kelly) #3

What do the Filebeat and Logstash/Elasticsearch logs say since Jul 7? Can you paste them in with </>? Is connectivity up between Filebeat and Logstash/Elasticsearch? Does an authenticated curl to the server you're sending to go through?
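One quick way to answer the first question is to filter the Beat's log for error and warning lines. A sketch (the sample lines below are made up for illustration; the real log location depends on how Filebeat was installed, e.g. /var/log/filebeat/filebeat on many Linux package installs):

```shell
# Isolate ERR/WARN lines from a Filebeat log. The two sample lines are
# inlined here for illustration; on a real host you would read the log file.
printf '%s\n' \
  '2017-07-08T10:00:00 INFO Harvester started for file: /var/log/messages' \
  '2017-07-08T10:00:05 ERR Connecting error publishing events (retrying): connection refused' \
  | grep -E ' (ERR|WARN) '
```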


(Atharva Pandey) #4

@rachelkelly Thanks for the response. I think I found the real issue: I have to restart Filebeat again and again to get the logs onto the Kibana dashboard. I don't know why this is happening, though...


(Steffen Siering) #5

Please check the Filebeat logs for errors and warnings. Filebeat requires an ACK from Logstash/Elasticsearch. Due to network failures, the ACK might not be received by Filebeat; Filebeat then thinks something went wrong and will normally retry (given it can reconnect). In your case it might be that Filebeat publishes events and doesn't get ACKs while Logstash/Elasticsearch is still processing the most recent events. Having to restart Filebeat over and over is not the correct solution. If you have not enabled tail_files, you will actually index the very same lines over and over again.


(Atharva Pandey) #6

@steffens

logstash: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console

Is this the reason the ACK is failing?

filebeat.yml

filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/messages
    - /var/log/ansible

  registry_file: /var/lib/filebeat/registry
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]


output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

I did enable tail_files: true once and then commented it out later, as suggested by someone in another post, so that there is no data loss; that post said that after enabling it once, Filebeat still continues to track the file.


(Atharva Pandey) #7

I also tried Filebeat on the client side with the configuration below, but I guess it's not even connecting to Elasticsearch. From what I have read, I assume that on the client machine I only need to install Filebeat and put the IP of the host machine running my ELK setup into the Logstash hosts setting, and it should automatically pick that up and fetch the logs. However, it is not even connecting now, so I assume this and the above might be interrelated... In the above example, Filebeat was installed on the ELK machine itself:

 filebeat:
prospectors:
-
paths:
- /var/log/secure
- /var/log/messages
#  - /var/log/*.log
input_type: log
document_type: syslog
registry_file: /var/lib/filebeat/registry
output:
    logstash:
       hosts: ["elk_server_private_ip:5044"]
bulk_max_size: 1024
tls:
       certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
    files:
       rotateeverybytes: 10485760 # = 10MB

Please ignore the indentation; it is indented properly in the original file.


(Steffen Siering) #8
  1. That log message is from Logstash startup... We cannot tell from Logstash whether the ACK was lost; in the end it is Filebeat that is missing an answer. Check the Filebeat logs.

  2. Yup, tail_files will result in data loss. With tail_files, upon each restart, Filebeat will start from the end of the file.

  3. Your config file looks faulty (not properly formatted).

  4. Which Filebeat/Logstash version are you using? In 5.x the tls setting has been renamed to ssl.
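For reference, a minimal sketch of where the options from points 2 and 4 sit in a 5.x filebeat.yml (the paths, host placeholder, and certificate path are just the values already posted in this thread):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
  # point 2: tail_files starts reading at the end of the file -- risks data loss
  # tail_files: true

output.logstash:
  hosts: ["elk_server_private_ip:5044"]
  # point 4: the 1.x 'tls' section is named 'ssl' in 5.x
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```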


(Atharva Pandey) #9
2017-07-21T07:14:59-07:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-07-21T07:14:59-07:00 INFO Setup Beat: filebeat; Version: 5.5.0
2017-07-21T07:14:59-07:00 INFO Loading template enabled. Reading template file: /etc/filebeat/filebeat.template.json
2017-07-21T07:14:59-07:00 INFO Metrics logging every 30s
2017-07-21T07:14:59-07:00 INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/filebeat/filebeat.template-es2x.json
2017-07-21T07:14:59-07:00 INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /etc/filebeat/filebeat.template-es6x.json
2017-07-21T07:14:59-07:00 INFO Elasticsearch url: http://localhost:9200
2017-07-21T07:14:59-07:00 INFO Activated elasticsearch as output plugin.
2017-07-21T07:14:59-07:00 INFO Max Retries set to: 3
2017-07-21T07:14:59-07:00 INFO Activated logstash as output plugin.
2017-07-21T07:14:59-07:00 INFO Publisher name: lin-elk-devops.ams.com
2017-07-21T07:14:59-07:00 INFO Flush Interval set to: 1s
2017-07-21T07:14:59-07:00 INFO Max Bulk Size set to: 50
2017-07-21T07:14:59-07:00 INFO Flush Interval set to: 1s
2017-07-21T07:14:59-07:00 INFO Max Bulk Size set to: 2048
2017-07-21T07:14:59-07:00 INFO filebeat start running.
2017-07-21T07:14:59-07:00 INFO Registry file set to: /var/lib/filebeat/registry
2017-07-21T07:14:59-07:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-07-21T07:14:59-07:00 INFO States Loaded from registrar: 1
2017-07-21T07:14:59-07:00 INFO Loading Prospectors: 1
2017-07-21T07:14:59-07:00 INFO Prospector with previous states loaded: 1
2017-07-21T07:14:59-07:00 INFO Starting prospector of type: log; id: 14216641240280595657
2017-07-21T07:14:59-07:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-07-21T07:14:59-07:00 INFO Starting Registrar
2017-07-21T07:14:59-07:00 INFO Start sending events to output
2017-07-21T07:14:59-07:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-07-21T07:14:59-07:00 INFO Harvester started for file: /var/log/messages
2017-07-21T07:14:59-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:14:59-07:00 INFO Connected to Elasticsearch version 5.5.0
2017-07-21T07:14:59-07:00 INFO Trying to load template for client: http://localhost:9200
2017-07-21T07:14:59-07:00 INFO Template already exists and will not be overwritten.
2017-07-21T07:15:00-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:15:02-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:15:06-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:15:14-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:15:29-07:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.es.call_count.PublishEvents=41 libbeat.es.publish.read_bytes=21010 libbeat.es.publish.write_bytes=1058610 libbeat.es.published_and_acked_events=2046 libbeat.publisher.published_events=2046
2017-07-21T07:15:30-07:00 ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused
2017-07-21T07:15:59-07:00 INFO No non-zero metrics in the last 30s
2017-07-21T07:16:02-07:00 ERR Connecting error publishing events (retrying): dial tcp [::1]:5044: getsockopt: connection refused

The above are filebeat logs for my local system

I was just too lazy to mark the proper indentation; my config file's indentation is correct on my system, so there is no issue with that.

Yes, I am using version 5.5. However, I completely removed the SSL/TLS stuff and commented out all the security settings, but the problem is that the logs are pushed only when I restart.

For the client, I guess there is some security or firewall issue preventing it from sending logs, as I tested on an Azure VM and it was going fine; but then again I had to restart Filebeat again and again to send the logs.

I believe the problem is somewhere in Logstash; I missed some configuration or something, I don't know.


(Steffen Siering) #10

Are you sure you received any new logs? This error message tells me Filebeat cannot even connect to Logstash: Connecting error publishing events (retrying): dial tcp 127.0.0.1:5044: getsockopt: connection refused. Normally one gets this message if the service is not running or is 'protected' by a firewall blocking connection attempts.
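A sketch for telling those two causes apart on the Logstash host (5044 is the Beats port used in this thread; the commented commands assume a systemd/firewalld CentOS host):

```shell
# Is anything listening on the Beats port at all?
listen=$(ss -tlnp 2>/dev/null | grep 5044 || echo "nothing listening on 5044")
echo "$listen"
# If nothing is listening, the service side is the problem:
#   systemctl status logstash
# If something is listening but remote hosts still get 'connection refused',
# check the firewall, e.g.:
#   firewall-cmd --list-ports
```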


(Atharva Pandey) #11

I am sure I did receive new logs. I also checked the Logstash status and it is running fine... I understand if there is a firewall issue on my client machine, but when I am trying to run Filebeat on my own machine, then it also runs the same way, which is kind of weird, because there I have already disabled all the security stuff for testing purposes.


(Steffen Siering) #12

but when I am trying to run filebeat on my own machine then also it is running the same way

Which machine exactly is Filebeat running on? Is it the host Logstash itself is running on?
By "same way", do you mean that Filebeat still cannot connect?

Why did you configure Filebeat to connect to Elasticsearch AND Logstash? Interestingly, from the logs it seems Filebeat does connect to Elasticsearch, but not to Logstash. Can you share your Logstash input configuration as well?


(Atharva Pandey) #13

Input :

input {
  beats {
    port => 5044
  }
}

OK, I tried with a separate RedHat machine where only Filebeat is installed, and I also installed Filebeat on the ELK server host itself for testing, to see if things are working.

I am not sure what you mean by "why did you configure Filebeat to connect to Elasticsearch and Logstash"; isn't that how it should be working?

output :-

output {
  elasticsearch {
    hosts => ["172.18.6.12:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

filter :-

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

(Andrew Kroh) #14

Filebeat should deliver its data directly to either Logstash OR Elasticsearch, not both. In the filebeat.yml you have both outputs enabled. If you are going to send the data to Logstash to be further processed then remove/disable/comment-out the elasticsearch output in the filebeat.yml.
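In other words, a sketch (reusing the hosts already shown in this thread): keep exactly one output section active and comment out the other, e.g.

```yaml
# Keep Logstash as the single output; the Elasticsearch output stays disabled.
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]
```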


(Atharva Pandey) #15

I tried different configurations. Initially I had only one output, i.e. for Logstash; later I made it like this, but my issue was not fixed. Also note, @steffens, that I installed the ELK server referencing this blog:

However, I have removed the SSL auth for now since it was creating a lot of issues; I will deal with it later, as you can see in my Logstash config.


(Steffen Siering) #16

Having 2 outputs creates some indirect coupling. If one output is blocked, log lines cannot be ACKed and the internal queues will fill up without publishing any new events to any output. In 6.0.0-beta1 we removed the ability to configure multiple outputs.

Unfortunately, I find it quite hard to follow your setup and the things you're changing here and there. We have a getting-started guide for Filebeat, but just from you reading some other blog post I cannot tell whether all components are set up correctly.

Let's start over with the simplest configuration and verify the filebeat->logstash path only for now. We will add the other components (TLS, Logstash filters, Elasticsearch output, Kibana) in the next steps.

Let's start with filebeat and logstash running on the same machine. For filebeat use:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
filebeat.registry_file: /var/lib/filebeat/registry

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

and for logstash use:

input {
  beats { 
    port => 5044
  }
}

output {
    stdout { codec => rubydebug }
}

This configures filebeat to forward the logs to logstash, and logstash will output the events (without modifying them) to stdout.

Delete the registry file in filebeat between each run.

The test should forward only one log file for now. Check that the events being written (check the registry) match the actual file size.

Once this works, use the same filebeat configuration on the remote host.

For each host/test, please post the host/OS name of the filebeat machine (local vs. remote will do) plus the output of wc /var/log/messages and the registry file. If something went wrong (ERR/WARN messages in filebeat or logstash), also attach the logs.
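The size comparison behind this check can be sketched as follows (demonstrated on a throwaway file here; the idea is that the offset recorded in /var/lib/filebeat/registry should reach the file's byte size once everything is shipped):

```shell
# Demonstrate the check on a temporary file instead of /var/log/messages.
f=$(mktemp)
printf 'line one\nline two\n' > "$f"
wc -c "$f"    # byte size of the file; compare against the "offset" field
              # in the Filebeat registry entry for the same file
rm -f "$f"
```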


(Atharva Pandey) #17

This doesn't forward anything, in Kibana at least...

TEST CASE 1: ELK HOST - CentOS; FILEBEAT MACHINE - LOCAL CentOS

No change in registry file size after restarting Filebeat again and again.

/var/log/messages

Jul 23 06:31:47 lin-elk-devops systemd: Started logstash.
Jul 23 06:31:47 lin-elk-devops systemd: Starting logstash...
Jul 23 06:31:57 lin-elk-devops logstash: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Jul 23 06:31:57 lin-elk-devops logstash: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Jul 23 06:31:58 lin-elk-devops systemd: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 23 06:31:58 lin-elk-devops systemd: Unit logstash.service entered failed state.
Jul 23 06:31:58 lin-elk-devops systemd: logstash.service failed.

When checking via the journal:

Jul 21 23:27:53 lin-elk-devops.ams.com systemd[1]: Started logstash.
Jul 21 23:27:53 lin-elk-devops.ams.com systemd[1]: Starting logstash...
Jul 21 23:28:04 lin-elk-devops.ams.com logstash[14286]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the c
Jul 21 23:28:05 lin-elk-devops.ams.com logstash[14286]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Jul 21 23:28:05 lin-elk-devops.ams.com systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 21 23:28:05 lin-elk-devops.ams.com systemd[1]: Unit logstash.service entered failed state.
Jul 21 23:28:05 lin-elk-devops.ams.com systemd[1]: logstash.service failed.
Jul 21 23:28:05 lin-elk-devops.ams.com systemd[1]: logstash.service holdoff time over, scheduling restart 

curl -XGET 'http://localhost:9200/logstash-*/_search?pretty'

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}

Tried to run it manually to debug:

sudo bin/logstash --path.settings=/etc/logstash/logstash.yml -f /etc/logstash/conf.d/test.conf

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //etc/logstash/logstash.yml/log4j2.properties. Using default config which logs to console
10:53:41.131 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
10:53:41.875 [[main]-pipeline-manager] INFO  logstash.inputs.beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
10:53:41.974 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
10:53:42.121 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

(Steffen Siering) #19

Uhm... my test config is not about sending data to Elasticsearch -> no data in Kibana. It is only for testing filebeat->logstash connectivity by writing test output to the console. That is, don't start logstash as a service, but as a normal process via logstash -f <config file>.

The error message indicates Logstash is not starting. Which config are you using? Starting Logstash via systemd with stdout might not work.


(Atharva Pandey) #20

@steffens Ahh sorry, haha, I thought you were telling me that this way everything will start :stuck_out_tongue: haha... Anyway, I am starting Logstash via sudo systemctl start logstash and Filebeat via sudo systemctl start filebeat, and I am using exactly the config you wrote.


(Steffen Siering) #21

When testing with my configs, don't use systemd to start logstash/filebeat as a service. Run them in the foreground.
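A sketch of what that looks like in practice (binary locations vary by install method; the paths below are typical for RPM-based installs and are assumptions, and filebeat's -e flag logs to stderr instead of the log files):

```shell
# terminal 1 -- Logstash in the foreground with the test pipeline, stdout output:
#   /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
# terminal 2 -- Filebeat in the foreground:
#   filebeat -e -c /etc/filebeat/filebeat.yml
```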