I double-checked with my domain admin: there is actually no firewall between the machines, and I am pretty sure data is being exchanged continuously (I can see it in Kibana too).
Just as an example, this morning I connected all the logger machines, and after the 6th one I started getting some errors. Once LS had warmed up, only one machine out of 10 kept returning that error, roughly every 15 s.
I shut everything down and changed the config, but nothing changed.
Yesterday, with a smaller LS cluster, I got up to 6 working machines.
I have since enlarged my LS cluster; it now uses 2 machines.
My logstash.yml is the following:
path.data: /var/lib/logstash
pipeline.workers: 24
pipeline.output.workers: 1
pipeline.batch.size: 15000
path.config: /etc/logstash/conf.d
config.reload.automatic: true
config.reload.interval: 30
log.level: verbose
path.log: /var/log/logstash/logstash.log
jvm.options is the standard one; I only changed:
-Xms12g
-Xmx12g
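The pipeline files under /etc/logstash/conf.d are not pasted here; as a rough sketch of their shape (assuming a plain beats input on 5044 and an elasticsearch output, with a placeholder ES host, and with my filters omitted), they look like this:

input {
  beats {
    # same port Filebeat points at below
    port => 5044
  }
}

filter {
  # my actual filters go here (omitted)
}

output {
  elasticsearch {
    # placeholder host, not my real ES node
    hosts => ["<es-node>:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}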
The Filebeat config is the following:
filebeat.prospectors:
- input_type: log
  paths:
    - d:Logger\1
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log1
- input_type: log
  paths:
    - d:Logger\2
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log2
- input_type: log
  paths:
    - d:Logger\3
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log3
- input_type: log
  paths:
    - d:Logger\4
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log4
- input_type: log
  paths:
    - d:Logger\5
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log5
- input_type: log
  paths:
    - d:Logger\6
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log6
- input_type: log
  paths:
    - d:Logger\7
  exclude_lines: ["^\\s*$"]
  include_lines: ["^([a-z]|[A-Z])"]
  fields:
    catena: 11
  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m
  document_type: log7
#========================= Filebeat global options ============================
filebeat.spool_size: 10000
filebeat.idle_timeout: 10s
#----------------------------- Logstash output --------------------------------
output.logstash:
hosts: ["10.246.85.242:5044", "10.246.85.243:5044"]
template.name: "filebeat"
loadbalance: true
bulk_max_size: 1000
template.path: "filebeat.template.json"
template.overwrite: false
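If it helps, I can also capture the exact error on the Filebeat side by turning up its own logging. A sketch of what I would add to filebeat.yml (assuming the 5.x logging options; the path is just a placeholder):

# verbose logging, limited to the publisher / logstash-output selectors
logging.level: debug
logging.selectors: ["logstash", "publish"]
logging.to_files: true
logging.files:
  # placeholder directory
  path: d:\filebeat\logs
  name: filebeat.log
  keepfiles: 7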