Beats Management - SSL Configuration in Logstash Output

Hi,

I have 2 questions about Beats Central Management in the Kibana UI.

Brief explanation:
I have 5 Logstash servers, all using SSL, and I am exploring the capabilities of Beats Central Management.

Questions:

  1. Would you give an example of how to enter multiple hosts in the Logstash output block? (A rough sketch of what I am after follows the log below.)
    Currently I have:
    elksapp01.uat.thisdomain.com:5145
    I want something like:
    [elksapp01.uat.thisdomain.com:5145;elksapp02.uat.thisdomain.com:5145;elksapp03.uat.thisdomain.com:5145;logstash.uat.thisdomain.com:5145]

  2. My Logstash servers need SSL certs to connect. How do I configure that in the Logstash output block? (The sketch below also includes the SSL settings.)
    FYI, I tried putting the certs in /etc/ssl/certs and /etc/ssl/private, but it seems Filebeat did not pick them up there, presumably because output.logstash.ssl.enabled is not configured?
    Here is the filebeat log:

...OMIT...
2019-03-29T10:15:47.036-0400 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
2019-03-29T10:15:51.348-0400 INFO [centralmgmt] management/manager.go:176 New configurations retrieved
2019-03-29T10:15:51.348-0400 INFO [centralmgmt] management/manager.go:213 Applying settings for filebeat.inputs
2019-03-29T10:15:51.348-0400 INFO log/input.go:138 Configured paths: [/etc/filebeat/test/folder_a/test*.log]
2019-03-29T10:15:51.348-0400 INFO input/input.go:114 Starting input of type: log; ID: 2337557937799090423
2019-03-29T10:15:51.348-0400 INFO log/input.go:138 Configured paths: [/etc/filebeat/test/folder_b/test*.log]
2019-03-29T10:15:51.348-0400 INFO input/input.go:114 Starting input of type: log; ID: 9513467993454165089
2019-03-29T10:15:51.348-0400 INFO [centralmgmt] management/manager.go:213 Applying settings for output
2019-03-29T10:15:51.350-0400 INFO [centralmgmt] management/manager.go:213 Applying settings for filebeat.modules
2019-03-29T10:15:51.350-0400 INFO [centralmgmt] management/manager.go:149 Storing new state
2019-03-29T10:16:17.044-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:16:47.034-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:17:17.031-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:17:47.036-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:18:17.037-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:18:47.034-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:19:11.361-0400 INFO log/harvester.go:254 Harvester started for file: /etc/filebeat/test/folder_a/test_a.log
2019-03-29T10:19:17.033-0400 INFO [monitoring] ...OMIT...
2019-03-29T10:19:17.363-0400 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://elksapp01.uat.thisdomain.com:5145))
2019-03-29T10:19:17.373-0400 INFO pipeline/output.go:105 Connection to backoff(async(tcp://elksapp01.uat.thisdomain.com:5145)) established
2019-03-29T10:19:17.410-0400 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 10.0.2.15:42568->10.193.68.120:5145: read: connection reset by peer
2019-03-29T10:19:17.413-0400 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2019-03-29T10:19:18.413-0400 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
...OMIT...
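
Here is roughly what I am hoping the output block can express - just a sketch, assuming the Central Management Logstash output accepts the same YAML keys as output.logstash in filebeat.yml (the certificate paths are only placeholders):

# Multiple Logstash hosts, balanced across all of them
hosts:
  - "elksapp01.uat.thisdomain.com:5145"
  - "elksapp02.uat.thisdomain.com:5145"
  - "elksapp03.uat.thisdomain.com:5145"
  - "logstash.uat.thisdomain.com:5145"
loadbalance: true
# Mutual TLS towards Logstash (placeholder paths)
ssl.certificate_authorities: ["/etc/filebeat/certs/chain.crt"]
ssl.certificate: "/etc/filebeat/certs/agent.crt"
ssl.key: "/etc/filebeat/certs/agent.key"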

Hi @perryparktung

For this, you need to configure a second pipeline in the Logstash output if you want to send your data to multiple Elasticsearch instances. Please see the link below and go to the section "Writing to Multiple Elasticsearch Nodes".

For this, you need to configure SSL settings in your filebeat.yml and in the beats input of your Logstash pipeline configuration. Please see this link: Secure communication with Logstash | Filebeat Reference [8.11] | Elastic
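
As a rough sketch of the Filebeat side (assuming a standalone filebeat.yml; the paths are placeholders for wherever your certs live):

output.logstash:
  hosts: ["elksapp01.uat.thisdomain.com:5145"]
  # CA chain used to verify the Logstash server certificate
  ssl.certificate_authorities: ["/path/to/chain.crt"]
  # Client certificate and key presented to Logstash
  ssl.certificate: "/path/to/agent.crt"
  ssl.key: "/path/to/agent.key"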

Please let me know if you need any further help in this regard.

Regards,
Harsh Bajaj

Hi @harshbajaj16,

Thanks for explaining. We actually have a decoupled architecture with 5 Logstash (LS) and 9 Elasticsearch (ES) nodes in the cluster. Currently we have Filebeat installed on our clients' hosts, and the Filebeat instances point to our 5 LS nodes with load balancing enabled.

The Filebeat configuration is stored in filebeat.yml.
A typical filebeat.yml looks like this:

filebeat:
  prospectors:
    -  
      paths:
      - /this/path/*.log
      input_type: log
      ignore_older: 168h
      multiline:
        pattern: '^([[:digit:]]{2,4})|^\[|^([[:digit:]]{1,4}(\/|-)[[:digit:]]{1,2})'
        negate: true
        match: after
      timeout: 5s
      backoff: 5s
  registry_file: "/var/lib/filebeat/registry"
fields:
  appCode: '9052'
#================================ Outputs ======================================
output:
  logstash:
    hosts: 
    - elksapp01.uat.thisdomain.com:5145
    - elksapp02.uat.thisdomain.com:5145
    - elksapp03.uat.thisdomain.com:5145
    loadbalance: true
    ssl:
      certificate_authorities: ["/etc/filebeat/certs/chain.crt"]
      certificate: "/etc/filebeat/certs/agent.crt"
      key: "/etc/filebeat/certs/agent.key"
#================================ Logging ======================================
logging:
  level: info
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat/
    name: filebeat.log
    keepfiles: 7

We are exploring the feasibility of migrating the configuration to Centralized Beats Management so we can keep control of the configurations and limit the clients' visibility into the Filebeat configuration.

Please share your thoughts on that. As far as I understand, not all settings can be managed through Centralized Beats Management. Thanks.
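
From the log above, the sections that get applied from Kibana are filebeat.inputs, filebeat.modules and output, so my working assumption is that only those are centrally managed and everything else stays in the local filebeat.yml after enrolling the beat. Roughly (copied from our current config; this is only a sketch of what I expect to keep locally):

# Local filebeat.yml once inputs, modules and the output are managed from Kibana
logging:
  level: info
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat/
    name: filebeat.log
    keepfiles: 7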

Hi @perryparktung,

Sorry, but I'm not familiar with "Centralized Beats Management" as I've never used it.

However, please find the documentation link below for your reference; I hope it is helpful.
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-central-management.html

Regards,
Harsh Bajaj
