Connect Filebeat to Logstash

Hi!

I created OpenSSL keys and copied them to the Filebeat host, into the /etc/ssl/ directory:

[root@fbeat ssl]# ls -l
total 8
lrwxrwxrwx. 1 root root   16 Oct 23 20:02 certs -> ../pki/tls/certs
-rw-r--r--. 1 root root 1704 Nov 28 23:46 logstash-forwarder.key
-rw-r--r--. 1 root root 1241 Nov 28 23:47 logstash_frwrd.crt

The same SSL keys are on the Logstash host, in the /etc/ssl/ directory:

-rw-r--r--  1 root root 1704 Nov 28 22:25 logstash-forwarder.key
-rw-r--r--  1 root root 1241 Nov 28 22:25 logstash_frwrd.crt
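For reference, a self-signed pair with these file names can be produced with a single OpenSSL command. This is only a sketch: the IP in the subject is taken from the Logstash address used later in the thread, and `-addext` requires OpenSSL 1.1.1 or newer.

```shell
# Sketch: generate a self-signed cert/key pair with the file names used above.
# The CN/SAN IP is an assumption (the Logstash host from the config below);
# adjust it for your own setup.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout logstash-forwarder.key \
  -out logstash_frwrd.crt \
  -subj "/CN=192.168.0.61" \
  -addext "subjectAltName=IP:192.168.0.61"
```

Both files are then copied to both hosts, as shown above.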

Filebeat config:

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/nginx/access.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["192.168.0.61:5044"]
  tls:
     certificate_authorities: ["/etc/ssl/logstash_frwrd.crt"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Logstash config:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/ssl/logstash_frwrd.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "filebeat"
    cacert => "/etc/logstash/certs/http_ca.crt"
  }
  stdout {
  }
}
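A side note on the filter (an observation, not the cause of the connection errors): with the filestream input, Filebeat no longer sets a top-level `type` field, so the `if [type] == "syslog"` condition may never match. One common alternative is to tag the input in Filebeat and test the tags in Logstash:

```
# In filebeat.yml, tag the input:
- type: filestream
  tags: ["syslog"]

# In the Logstash filter, test the tag instead of [type]:
if "syslog" in [tags] {
  grok { match => { "message" => "%{SYSLOGLINE}" } }
}
```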

When I start Filebeat, I get this output in systemctl status:

filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-12-02 16:20:11 MST; 6s ago
     Docs: https://www.elastic.co/beats/filebeat
 Main PID: 12540 (filebeat)
    Tasks: 20
   CGroup: /system.slice/filebeat.service
           └─12540 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --pa...

Dec 02 16:20:15 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:15.208-0700","log.logger":"crawler","log.o..."1.6.0"}
Dec 02 16:20:15 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:15.208-0700","log.logger":"crawler","log.o..."1.6.0"}
Dec 02 16:20:15 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:15.211-0700","log.logger":"input.filestrea..."1.6.0"}
Dec 02 16:20:15 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:15.211-0700","log.origin":{"file.name":"cf..."1.6.0"}
Dec 02 16:20:15 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:15.212-0700","log.origin":{"file.name":"cf..."1.6.0"}
Dec 02 16:20:18 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:18.191-0700","log.logger":"add_cloud_metadata","lo...
Dec 02 16:20:18 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:18.265-0700","log.logger":"publisher_pipel..."1.6.0"}
Dec 02 16:20:18 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:20:18.265-0700","log.logger":"publisher_pipeline_outp...
Dec 02 16:20:18 fbeat filebeat[12540]: {"log.level":"error","@timestamp":"2022-12-02T16:20:18.291-0700","log.logger":"logstash","log.origin"...
Dec 02 16:20:18 fbeat filebeat[12540]: {"log.level":"error","@timestamp":"2022-12-02T16:20:18.315-0700","log.logger":"logstash","log..."1.6.0"}

Pay attention to the last two lines with the errors.

With [root@fbeat filebeat]# journalctl -f -u filebeat I can see these messages:

Dec 02 16:38:13 fbeat filebeat[12540]: {"log.level":"info","@timestamp":"2022-12-02T16:38:13.388-0700","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":147},"message":"Connection to backoff(async(tcp://192.168.0.61:5044)) established","service.name":"filebeat","ecs.version":"1.6.0"}
Dec 02 16:38:13 fbeat filebeat[12540]: {"log.level":"error","@timestamp":"2022-12-02T16:38:13.487-0700","log.logger":"logstash","log.origin":{"file.name":"logstash/async.go","file.line":280},"message":"Failed to publish events caused by: write tcp 192.168.0.60:53138->192.168.0.61:5044: write: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}
Dec 02 16:38:14 fbeat filebeat[12540]: {"log.level":"error","@timestamp":"2022-12-02T16:38:14.991-0700","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: write tcp 192.168.0.60:53138->192.168.0.61:5044: write: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}

It looks like Logstash does not accept the connection.

My configuration is very basic. Can you tell me what's wrong with it?

A general comment - please put your topics in the best category for the product you are using :slight_smile:

Which category? I haven't seen a separate category for Filebeat. Did I miss something?

If so, can we move it to the proper one?

I should also add the output of the command:

[root@fbeat filebeat]# filebeat test output
logstash: 192.168.0.61:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.0.61
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK

I don't understand why it gives a WARN; as you can see from my Filebeat config, TLS is enabled.
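That WARN may be the key clue (my assumption, based on the config shown above): current Filebeat versions expect the Logstash output's TLS settings under `ssl`, not `tls`, so the `tls` block appears to be ignored and Filebeat connects in plain text while Logstash expects a TLS handshake, which would also explain the "connection reset by peer" errors. A minimal corrected sketch:

```
output.logstash:
  hosts: ["192.168.0.61:5044"]
  ssl.certificate_authorities: ["/etc/ssl/logstash_frwrd.crt"]
```

With that in place, `filebeat test output` should report the TLS step instead of "secure connection disabled".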

I've commented out the tls line in filebeat.yml and entered username and password lines; now it looks good, no errors in the logs.

Can you tell me how I can verify that Logstash receives data?

I checked with tcpdump port 5044 on the Logstash host and I can see some data coming in.

But I'm not sure if it is real data or, let's say, ARP requests.
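Since the pipeline above already has a stdout output, one low-effort check (a sketch) is to give it the rubydebug codec and watch the Logstash log (for example with journalctl -u logstash); incoming events are then printed in full:

```
output {
  stdout { codec => rubydebug }
}
```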

We can close the thread. I will send logs to Elasticsearch directly for the moment and come back to this later on.
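For the direct route, a minimal output.elasticsearch block looks roughly like this (a sketch: the host assumes Elasticsearch runs on the same machine as Logstash, and the credentials and CA path are placeholders; http_ca.crt must be copied from the Elasticsearch node):

```
output.elasticsearch:
  hosts: ["https://192.168.0.61:9200"]
  username: "elastic"
  password: "<your-password>"
  ssl.certificate_authorities: ["/etc/filebeat/http_ca.crt"]
```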

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.