Not forwarding filebeat to logstash and no filebeat-* index in kibana

Hi everybody,

I installed the ELK stack today and configured everything. To cut a long story short: as far as I can tell, filebeat is not forwarding logs.

I have 3 servers:
xx.xx.xx.233 - elasticsearch v 5.1.2:

[elk@elasticsearch ~]$ sudo curl -X GET 'http://xx.xx.xx.233:9200'
{
"name" : "elastic1",
"cluster_name" : "elasticluster",
"cluster_uuid" : "hffEWmKbTzKRerPawQUYoQ",
"version" : {
"number" : "5.1.2",
"build_hash" : "c8c4c16",
"build_date" : "2017-01-11T20:18:39.146Z",
"build_snapshot" : false,
"lucene_version" : "6.3.0"
},
"tagline" : "You Know, for Search"
}

xx.xx.xx.232 - kibana v 5.1.2 - site is working

xx.xx.xx.231 - logstash v 5.1.2 + filebeat v 5.1.2:

[elk@logstash ~]$ sudo systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2017-01-30 15:42:08 IST; 1min 23s ago

filebeat status

[elk@logstash ~]$ sudo systemctl status filebeat
● filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2017-01-30 15:44:43 IST; 5s ago

filebeat yml

[elk@logstash ~]$ cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
     - /var/log/messages
     - /var/log/secure

output.logstash:
  hosts: ["xx.xx.xx.231:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

logstash conf

[elk@logstash conf.d]$ cat 01-beats-filter.conf
filter {
        if [type] == "syslog" {
                grok {
                        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
                        add_field => [ "received_at", "%{@timestamp}" ]
                        add_field => [ "received_from", "%{host}" ]
                }
                syslog_pri { }
                date {
                        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
                }
        }
}

input

[elk@logstash conf.d]$ cat 01-beats-input.conf
input {
        beats {
                port => 5044
                ssl => true
                ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
                ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
        }
}

output

[elk@logstash conf.d]$ cat 01-beats-output.conf
output {
        elasticsearch {
                hosts => ["xx.xx.xx.233:9200"]
                sniffing => true
                manage_template => false
                index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
                document_type => "%{[@metadata][type]}"
        }
}

And here is the problem :frowning:

Here's a test run:

> [elk@logstash bin]$ sudo ./filebeat -e -c /etc/filebeat/filebeat.yml
> 2017/01/30 14:26:26.456427 beat.go:267: INFO Home path: [/usr/share/filebeat/bin] Config path: [/usr/share/filebeat/bin] Data path: [/usr/share/filebeat/bin/data] Logs path: [/usr/share/filebeat/bin/logs]
> 2017/01/30 14:26:26.456457 beat.go:177: INFO Setup Beat: filebeat; Version: 5.1.2
> 2017/01/30 14:26:26.456601 logp.go:219: INFO Metrics logging every 30s
> 2017/01/30 14:26:26.457359 logstash.go:90: INFO Max Retries set to: 3
> 2017/01/30 14:26:26.457422 outputs.go:106: INFO Activated logstash as output plugin.
> 2017/01/30 14:26:26.457536 publish.go:291: INFO Publisher name: logstash
> 2017/01/30 14:26:26.457747 async.go:63: INFO Flush Interval set to: 1s
> 2017/01/30 14:26:26.457759 async.go:64: INFO Max Bulk Size set to: 2048
> 2017/01/30 14:26:26.457894 beat.go:207: INFO filebeat start running.
> 2017/01/30 14:26:26.458078 registrar.go:85: INFO Registry file set to: /usr/share/filebeat/bin/data/registry
> 2017/01/30 14:26:26.458106 registrar.go:106: INFO Loading registrar data from /usr/share/filebeat/bin/data/registry
> 2017/01/30 14:26:26.479380 registrar.go:123: INFO States Loaded from registrar: 0
> 2017/01/30 14:26:26.479433 crawler.go:34: INFO Loading Prospectors: 1
> 2017/01/30 14:26:26.479484 registrar.go:236: INFO Starting Registrar
> 2017/01/30 14:26:26.479485 prospector_log.go:57: INFO Prospector with previous states loaded: 0
> 2017/01/30 14:26:26.479554 sync.go:41: INFO Start sending events to output
> 2017/01/30 14:26:26.479595 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
> 2017/01/30 14:26:26.479609 crawler.go:61: INFO All prospectors are initialised and running with 0 states to persist
> 2017/01/30 14:26:26.479631 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
> 2017/01/30 14:26:26.479660 prospector.go:111: INFO Starting prospector of type: log
> 2017/01/30 14:26:26.480251 log.go:84: INFO Harvester started for file: /var/log/secure
> 2017/01/30 14:26:26.480253 log.go:84: INFO Harvester started for file: /var/log/messages
> 2017/01/30 14:26:56.456824 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.published_events=2046 libbeat.logstash.publish.write_bytes=132 filebeat.harvester.running=2 filebeat.harvester.started=2 filebeat.harvester.open_files=2
> 2017/01/30 14:26:56.502738 single.go:140: ERR Connecting error publishing events (retrying): read tcp 172.16.50.231:35986->172.16.50.231:5044: i/o timeout

I spent hours looking through guides and tried different configs, but with no luck. I hope you can help me.

Is there anything in ES? Use the _cat APIs to check.
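For example, a quick way to look for a filebeat-* index (a minimal stdlib-only sketch; the host is the Elasticsearch address from the post above):

```python
from urllib.request import urlopen

def list_indices(base_url: str) -> str:
    """Fetch the _cat/indices listing as plain text, one index per line."""
    with urlopen(f"{base_url}/_cat/indices?v") as resp:
        return resp.read().decode()

# Look for a "filebeat-YYYY.MM.dd" line in:
# print(list_indices("http://xx.xx.xx.233:9200"))
```

The same check with curl is just `curl 'http://xx.xx.xx.233:9200/_cat/indices?v'`.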

Hi, it's working now. For others who have the same problem:

I installed the logstash-input-beats plugin.
I enabled worker: 1 in the yml file.

But it seems I can only get 1010 hits in the index. The events stopped flowing, even though all services are active and running. Could it be connected to some yml configuration?

There seems to be a connection problem between filebeat and Logstash:

> 2017/01/30 14:26:56.502738 single.go:140: ERR Connecting error publishing events (retrying): read tcp 172.16.50.231:35986->172.16.50.231:5044: i/o timeout

You should investigate that further.

Which logstash-input-beats plugin version are you running? By the way, Logstash 5.2 was released recently; it might ship a newer version of the input plugin.

Filebeat has a default read timeout of 30 seconds, and the input plugin is supposed to send a keep-alive every 5 seconds while a batch is being processed. That is, for whatever reason, either Logstash is not sending the keep-alive (or the final ACK), or the keep-alive signal never makes it back to filebeat, or the connection has been silently dropped. The reason you see only the first few events is that the data was sent and processed by Logstash, but Logstash never properly ACKed the events back to filebeat. Upon restart, filebeat has to send the same events again.

Also check the Logstash logs (in debug mode if possible). Besides potential issues with your network setup, I wonder if LS is overly busy (e.g. forcing a long GC cycle) or facing some problems with its output.
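For Logstash 5.x the log level can be raised in logstash.yml (revert it once you have what you need; debug output is very noisy):

```yaml
# /etc/logstash/logstash.yml
log.level: debug
```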

Hey,

After reviewing all of the above, I found that my config test was OK and all my services were running properly.

Each of my servers has:

16 GB RAM
8 CPUs
150 GB storage

But I still see this:

All of my plugins are updated.

I think it has something to do with the logstash/filebeat configuration.

It only sent logs on the first day after installation.

Any ideas?

Do you still see the output issues in the filebeat log?

Yes. I'm afraid all of my config is correct, though still no logs are flowing, as per my last message above.

Can you share the log file?

Yes, what logs would you like me to send?

Here are some logs in the meantime, from /var/log/filebeat/filebeat.log:

2017-02-07T11:16:33+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:17:03+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:17:06+02:00 ERR Connecting error publishing events (retrying): dial tcp 172.16.50.231:5044: getsockopt: connection refused
2017-02-07T11:17:33+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:18:03+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:18:06+02:00 ERR Connecting error publishing events (retrying): dial tcp 172.16.50.231:5044: getsockopt: connection refused
2017-02-07T11:18:33+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:19:03+02:00 INFO No non-zero metrics in the last 30s
2017-02-07T11:19:06+02:00 ERR Connecting error publishing events (retrying): dial tcp 172.16.50.231:5044: getsockopt: connection refused
2017-02-07T11:19:33+02:00 INFO No non-zero metrics in the last 30s

curl -v --cacert logstash-forwarder.crt https://elastic_host_ip:5044

* About to connect() to 172.16.50.233 port 5044 (#0)
*   Trying 172.16.50.233...
* Connection refused
* Failed connect to 172.16.50.233:5044; Connection refused
* Closing connection 0
curl: (7) Failed connect to 172.16.50.233:5044; Connection refused

Those are the ones I was looking for. It seems you still have the connection error, so that is the reason no data ends up in LS / ES.

There are quite a few forum posts with the same problem. Check out these posts to see if they help you solve the problem.
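A quick way to tell "nothing is listening" (connection refused) from "listening but unreachable" (timeout) is a plain TCP probe run from the filebeat host. This is a minimal sketch; the address in the comment is the beats input from the thread:

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP endpoint: open, refused (no listener), or unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"      # nothing listening on that port
    except OSError:
        return "unreachable"  # timeout, no route, or packets dropped

# e.g. probe("172.16.50.231", 5044) -- "refused" matches the log above,
# meaning the beats input is not listening; check that Logstash actually
# loaded the input config and is binding that port.
```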

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.