I have Filebeat 5.5.1 shipping over SSL to Logstash 5.5.1, which talks to Elasticsearch. I have two prospectors defined, but only the authlog prospector seems to be sending anything. I have tried increasing and decreasing bulk_max_size, enabling compression, and setting %{GREEDYDATA} as my grok match; none of this has worked.
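For reference, the compression and batch-size experiments were all variations on the logstash output section along these lines (the values here are only examples, not my final config):
output:
  logstash:
    hosts: ["10.64.0.99:5044"]
    bulk_max_size: 512        # also tried larger values such as 2048
    compression_level: 3      # stock Filebeat 5.x logstash output option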
I continually get the following errors:
2017-09-28T22:59:35Z ERR Connecting error publishing events (retrying): read tcp 172.21.16.195:56310->10.64.0.99:5044: i/o timeout
Here is my filebeat config:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
      input_type: log
      document_type: authlog
      ignore_older: 24h
      scan_frequency: 10s
      backoff: 1s
      max_backoff: 10s
      backoff_factor: 2
      force_close_files: false
      fields_under_root: false
      close_older: 1h
    -
      paths:
        - /mnt/es/log/*indexing*.log
      input_type: log
      document_type: es-indexing-log
      ignore_older: 24h
      scan_frequency: 10s
      backoff: 1s
      max_backoff: 10s
      backoff_factor: 2
      force_close_files: false
      fields_under_root: false
      close_older: 1h
      tags: ["es-indexing-log"]
  registry_file: /var/lib/filebeat/registry
output:
  tags: ["elasticsearch", "prod", "green"]
  logstash:
    bulk_max_size: 1024
    hosts: ["10.64.0.99:5044"]
    worker: 1
    loadbalance: true
    timeout: 40
    ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10M
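If more detail is needed, I believe the logging section above can also be turned up to debug (level is the stock Filebeat logging option; this is only a sketch, not something in my running config):
logging:
  level: debug
  files:
    rotateeverybytes: 10485760 # = 10M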
Here is my logstash input section for beats:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
I read that it may be my grok filter, so I changed it to just %{GREEDYDATA}, but still no luck. In any case, here is my slow-log filter:
filter {
  if [type] == "es-search-log" {
    mutate {
      add_tag => [ "elasticsearch" ]
    }
    grok {
      match => { "message" => "%{GREEDYDATA}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}