The loadbalance: false
option doesn't pick a random endpoint to connect to.
I have a list in hosts that consists of 30 endpoints.
I run 18 instances of filebeat with this configuration.
They all try to connect to exactly the same endpoint (it's the 3rd one in the hosts list).
Should I create an issue for it? Or is it a known bug?
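For context, here is a minimal Python sketch (not Filebeat code; the function names are hypothetical) of the difference between the expected and the observed behavior:

```python
import random

# Hosts list, abbreviated from the real 30-endpoint configuration.
hosts = ["10.141.51.19:5000", "10.141.51.19:5001", "10.141.51.19:5002"]

def pick_expected(hosts):
    """Expected: each Filebeat instance picks a random endpoint,
    so 18 instances would spread across the list."""
    return random.choice(hosts)

def pick_observed(hosts):
    """Observed: every instance deterministically ends up on the
    same entry (here, the third one)."""
    return hosts[2]
```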
steffens
(Steffen Siering)
March 4, 2016, 12:03am
2
Sounds weird, could be a bug. Can you share your complete config (without comments)? I'd like to test it to confirm whether it's a bug or not.
Thanks.
Sure, here it is:
filebeat:
  prospectors:
    -
      paths:
        - /opt/idm/logs/essential-01.log
        - /opt/idm/logs/essential-01a.log
        - /opt/idm/logs/essential-01b.log
        - /opt/idm/logs/essential-01c.log
        - /opt/idm/logs/ident-01.log
        - /opt/idm/logs/extended-01.log
        - /opt/idm/logs/administrative-01.log
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: dip
      publish_async: true
      multiline:
        pattern: ^[0-9]{4,8}
        negate: true
        match: after
    -
      paths:
        - /opt/idm/logs/essential-txn-audit-01.log
        - /opt/idm/logs/essential-txn-audit-01a.log
        - /opt/idm/logs/essential-txn-audit-01b.log
        - /opt/idm/logs/essential-txn-audit-01c.log
        - /opt/idm/logs/extended-txn-audit-01.log
        - /opt/idm/logs/ident-txn-audit-01.log
        - /opt/idm/logs/administrative-txn-audit-01.log
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: dip-txn
      publish_async: true
      multiline:
        pattern: ^[0-9]{8,8}
        negate: true
        match: after
    -
      paths:
        - /opt/idm/essential*/current/logs/localhost_access_log.*.txt
        - /opt/idm/ident/current/logs/localhost_access_log.*.txt
        - /opt/idm/extended/current/logs/localhost_access_log.*.txt
        - /opt/idm/administrative/current/logs/localhost_access_log.*.txt
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: dip-access-log
      publish_async: true
    -
      paths:
        - /opt/idm/logs/server-status/networkStatus.log
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: network
      publish_async: true
    -
      paths:
        - /opt/idm/logs/server-status/systemUsage.log
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: usage
      publish_async: true
    -
      paths:
        - /opt/idm/logs/server-status/aliveLog.log
      input_type: log
      fields:
        environment: production
      fields_under_root: true
      document_type: alive
      publish_async: true
output:
  logstash:
    hosts: ["10.141.51.18:5002","10.141.51.18:5004","10.141.51.18:5006","10.141.51.18:5008","10.141.51.18:5010","10.141.51.19:5000","10.141.51.19:5001","10.141.51.19:5002","10.141.51.19:5003","10.141.51.19:5004","10.141.51.19:5005","10.141.51.19:5006","10.141.51.19:5007","10.141.51.19:5008","10.141.51.19:5009","10.141.51.19:5010"]
    loadbalance: false
logging:
  to_syslog: false
  to_files: true
  files:
    path: /opt/idm/logs/monitoring
    name: filebeat-dip
    rotateeverybytes: 10485760 # = 10MB
steffens
(Steffen Siering)
March 4, 2016, 2:44pm
4
Thanks, will check.
The publish_async option is not a prospector option but a filebeat option. Actually, publish_async is mostly useful with loadbalance: true.
Hello Steffen,
Were you able to check whether it's a filebeat bug or just an issue on my side?
ruflin
(ruflin)
March 8, 2016, 10:13am
6
@Karol_Stojek As Steffen is currently on vacation, I plan to have a look at this. Which filebeat version are you using?
ruflin
(ruflin)
March 14, 2016, 9:57am
8
@Karol_Stojek I haven't found time yet to try to reproduce this. Did you find anything on your side in the meantime?
@ruflin no, I switched to loadbalance: true (and publish_async: true) for now.
But I have hit this issue twice with filebeat.
As I remember, it was working fine with logstash-forwarder: a random endpoint was selected.
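For reference, the workaround described above amounts to flipping a single flag in the output section (host list abbreviated here; the full list is the same as in the config above):

```yaml
output:
  logstash:
    hosts: ["10.141.51.18:5002", "10.141.51.19:5000"]  # full 16-entry list as posted earlier
    loadbalance: true
```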
ruflin
(ruflin)
March 24, 2016, 9:03am
10
I finally found the time to test this. It does indeed always try to connect to the same host. I tried different host options, and it is not necessarily always the third one. Once it was the first one, but with the same config it stays on the same host. There seems to be some sorting involved.
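The "some sorting involved" observation can be illustrated with a small hypothetical sketch (again, not Filebeat code): if the client sorts the host list before choosing a single endpoint, the pick becomes deterministic and identical across all instances, whatever order the hosts were listed in:

```python
import random

hosts = ["10.141.51.18:5002", "10.141.51.19:5000", "10.141.51.18:5010"]

def pick_after_sort(hosts):
    # Sorting first makes the choice deterministic: every instance,
    # on every run, lands on the lexicographically smallest entry.
    return sorted(hosts)[0]

# Shuffling the input list does not change the outcome.
shuffled = hosts[:]
random.shuffle(shuffled)
print(pick_after_sort(hosts) == pick_after_sort(shuffled))  # True
```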
@steffens Can you have a look into this?
ruflin
(ruflin)
March 24, 2016, 9:03am
11
@Karol_Stojek Can you open a GitHub issue based on this? And sorry for the long wait.