How to connect two servers to Filebeat?

I have two HTTPS servers with Let's Encrypt certificates: www.example.com and monitoring.example.com.

To install and configure the ELK stack, I followed this tutorial:

On the monitoring.example.com server I have this configuration:

$ sudo nano /etc/nginx/sites-available/monitoring-example-com

server {
    listen 80 default_server;
    listen [::]:80;
    server_name monitoring.example.com;

    location / {
        return 301 https://monitoring.example.com$request_uri;
    }
}

server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name monitoring.example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.monitoring;

    ssl_certificate /etc/letsencrypt/live/monitoring.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitoring.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
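
After editing the vhost, the config can be checked and applied with the usual pair of commands:

$ sudo nginx -t
$ sudo systemctl reload nginx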

$ sudo nano /etc/logstash/conf.d/03-beats-input.conf

input {
  beats {
    port => 5044
  }
}

$ sudo nano /etc/logstash/conf.d/10-syslog-filter.conf

filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE"=> "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}

$ sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
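
Before restarting Logstash, the combined pipeline can be syntax-checked (assuming the standard Debian/Ubuntu install path):

$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t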

$ sudo nano /etc/filebeat/filebeat.yml

In this file, I replaced:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

with:

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

and:

#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

with:

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Filebeat works and I see the logs in Kibana :wink:

I allowed port 5044 with UFW:

$ sudo ufw allow 5044
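
The rule can be confirmed with:

$ sudo ufw status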

I created a self-signed SSL certificate:

$ sudo mkdir -p /etc/pki/tls/certs
$ sudo mkdir /etc/pki/tls/private
$ sudo nano /etc/ssl/openssl.cnf

In the [ v3_ca ] section, I added the server's IP as a subject alternative name:

[ v3_ca ]
subjectAltName = IP: 11.11.111.111

$ cd /etc/pki/tls
$ sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:4096 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
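
One thing worth checking: the subjectAltName above only lists the IP, so any client that verifies this certificate must connect by 11.11.111.111 rather than by hostname. The SAN actually embedded in the certificate can be inspected with:

$ sudo openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"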

I logged in to www.example.com and downloaded the certificate from the ELK server:

$ sudo scp -r -p root@11.11.111.111:/etc/pki/tls/certs/logstash-forwarder.crt /etc/pki/tls/certs
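
To make sure the copy is intact, the fingerprints on the two servers can be compared (run on both machines; the digests should match):

$ sha256sum /etc/pki/tls/certs/logstash-forwarder.crt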

How do I configure Filebeat on the www.example.com server?

As long as Elasticsearch is listening on an IP that www.example.com can connect to (not just localhost), you can use the same Filebeat config but change the host to monitoring.example.com.
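
A minimal sketch of that change in /etc/filebeat/filebeat.yml on www.example.com (my example, assuming Elasticsearch is reachable on its default port 9200):

output.elasticsearch:
  # Remote Elasticsearch instead of localhost
  hosts: ["monitoring.example.com:9200"]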

Or use the Logstash output in Filebeat. Same as above: Logstash would have to listen on an IP that www.example.com can connect to. You don't really need both the Elasticsearch and Logstash outputs in Filebeat...

I do not understand: in all the tutorials I have read, you have to send the logs to the beats input (02-beats-input.conf) on port 5044.

Why do you put port 9200? That's Kibana's port.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["monitoring.example.com:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

$ sudo nano /etc/logstash/conf.d/03-beats-input.conf (on the ELK server)

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

$ curl -v --cacert /etc/pki/tls/certs/logstash-forwarder.crt https://monitoring.example.com:5044

* Rebuilt URL to: https://monitoring.example.com:5044/
*   Trying 2001:44d0:705:1000::4b37...
* TCP_NODELAY set
*   Trying 11.11.111.111...
* TCP_NODELAY set
* Connected to monitoring.example.com (11.11.111.111) port 5044 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/logstash-forwarder.crt
  CApath: /etc/ssl/certs
* (304) (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to monitoring.example.com:5044
* stopped the pause stream!
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to monitoring.example.com:5044

I was quoting your config from above:

$ sudo nano /etc/filebeat/filebeat.yml

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

Filebeat can ship logs to many different outputs. Logstash is one option and Elasticsearch is another. Port 9200 is the default Elasticsearch port; Kibana's default port is 5601. That is beside the point though...
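
You can confirm which service answers on each port from the ELK server itself:

$ curl http://localhost:9200    # Elasticsearch replies with a small cluster/version JSON
$ curl -I http://localhost:5601 # Kibana replies on 5601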

If you configure outputs for both Logstash and Elasticsearch in Filebeat, you will have all logs duplicated.

Also, your Logstash config does not use SSL, just FYI. That curl will fail...
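
For the beats port, a plain TLS handshake test is more meaningful than curl, since the beats protocol is not HTTP; something like:

$ openssl s_client -connect monitoring.example.com:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt </dev/null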

I updated my answer. I want to send to Logstash because I have filters. What's wrong with my configuration?

If I change /etc/logstash/conf.d/03-beats-input.conf

input {
  beats {
    port => 5044
  }
}

to

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

The localhost logs are no longer sent. Should I create a second input, e.g.
/etc/logstash/conf.d/04-beats-input.conf?

And the www.example.com logs still do not arrive.
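
A guess at the cause (not confirmed here): once ssl => true is set on the beats input, every Filebeat connecting to port 5044, including the one on the ELK server itself, must also be configured for TLS. And because the certificate's SAN only lists IP: 11.11.111.111, the output has to point at that IP for verification to pass. A sketch of what the local /etc/filebeat/filebeat.yml would then need:

output.logstash:
  # Connect by the IP listed in the certificate's SAN, not by localhost
  hosts: ["11.11.111.111:5044"]
  # Trust the self-signed certificate
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]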

You do not have to use SSL. I was just pointing out that you were trying to make an HTTPS connection to a port that will not support it...

The first thing to test from www.example.com is that you can actually connect to Logstash. For that you can do e.g.

$ nc -v -z monitoring.example.com 5044

You will need to have netcat installed.

If that fails, there might be a firewall blocking connections, or Logstash is not listening on the IP it should.
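
On the ELK server you can see which address Logstash is bound to on 5044:

$ sudo ss -tlnp | grep 5044

The beats input listens on 0.0.0.0 by default, but it can be pinned explicitly with the host option:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}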
