Got an error in an ELK cluster of 3 nodes

Hello,

I have three Ubuntu servers; let's call them A, B, and C. For now I am trying to configure them for syslog.

I got the following error in the Filebeat log on server A:
ERR Connecting error publishing events (retrying): read tcp 10.10.10.20:55752>10.10.10.21:5044: read: connection reset by peer
and on machine B I found this error in logstash-plain.log: "[ERROR][logstash.outputs.elasticsearch] Action"

Kindly suggest the appropriate way to configure this.

On server A I have installed Elasticsearch and Filebeat; their non-commented config settings look like this:

elasticsearch.yml:
cluster.name: syscluster
node.name: es-data-01
node.data: true
network.host: 10.10.10.20
http.port: 9200

filebeat.yml:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
output.elasticsearch:
  hosts: ["10.10.10.21:9200"]
output.logstash:
  hosts: ["10.10.10.21:5044"]
  bulk_max_size: 2048
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

On server B I have installed Elasticsearch, Logstash, and Filebeat; their non-commented config settings look like this:

elasticsearch.yml:
cluster.name: syscluster
node.name: es-client-01
node.data: false
network.host: 10.10.10.21
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.10.21", "10.10.10.20","10.10.10.22"]

logstash.yml:
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
http.host: "10.10.10.21"
http.port: 9600-9700
path.logs: /var/log/logstash

filebeat.yml:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
output.elasticsearch:
  hosts: ["10.10.10.21:9200"]
output.logstash:
  hosts: ["10.10.10.21:5044"]
  bulk_max_size: 1024

On server C I have installed Elasticsearch only, and its non-commented config settings look like this:

elasticsearch.yml:
cluster.name: syscluster
node.name: es-master-01
node.master: true
node.data: false
network.host: 10.10.10.22


Uhm, which Filebeat is forwarding to which Logstash, and which Logstash to which Elasticsearch? Can we start with one setup/use case?

Which versions are you running?

The error message indicates the TCP connection was closed by, or on behalf of, Logstash. Filebeat will reconnect and send again. Does the error keep occurring?

Personally I don't recommend configuring two output types in one Beat, because it creates indirect coupling between the two downstream systems inside the Beat. I'd rather have Filebeat send to Logstash only and let Logstash forward to the other outputs, especially with Logstash 5.4 finally GAing persistent queue support.
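
As a minimal sketch of that single-output setup, reusing the addresses and paths from your post (the pipeline file name and index pattern are illustrative assumptions, not taken from your configs):

filebeat.yml on server A, with only the Logstash output enabled:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
output.logstash:
  hosts: ["10.10.10.21:5044"]
  bulk_max_size: 2048
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]

A pipeline file on server B, for example /etc/logstash/conf.d/10-beats.conf (file name is just an example):
input {
  beats {
    port => 5044
    # If this input is meant to use TLS (your Filebeat output lists a CA for it),
    # it also needs ssl => true plus ssl_certificate and ssl_key; those paths are not in your post.
  }
}
output {
  elasticsearch {
    hosts => ["10.10.10.21:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}

With something like this, Filebeat only ever talks to the beats input on 5044, and the Logstash elasticsearch output is the single place that writes into the cluster.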
