Logstash geoip database manager with outgoing proxy

Hi all, I've upgraded from Logstash 7.11 to 7.14 and am now having to deal with the geoip database manager. My servers do not have direct access to the database download endpoint and need to go out via a proxy.

The currently observed behaviour is that the failing connection (packets being dropped) seems to cause Logstash to hang the pipeline, which is surprising given what the documentation says. I managed to find the following log entry:

logstash[10061]: [ERROR] 2021-08-08 20:26:30.582 [Ruby-0-Thread-336: :1] databasemanager - execution expired {:cause=>#}

(that may be a little truncated, not sure)

There is no documentation yet for configuring logstash-filter-geoip to use a proxy, but I will attempt to configure this at the Java level; a sketch of what I mean is below.
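For reference, by "the Java level" I mean JVM-wide proxy system properties added to Logstash's config/jvm.options. A rough sketch (the proxy host, port, and exclusion list are placeholders for illustration only):

    # Standard JVM proxy system properties, added to config/jvm.options.
    # Note these are JVM-wide, so they can affect other outbound HTTP
    # connections as well, not just the geoip database manager's downloads.
    -Dhttp.proxyHost=proxy.example.org
    -Dhttp.proxyPort=3128
    -Dhttps.proxyHost=proxy.example.org
    -Dhttps.proxyPort=3128
    # Hosts that should bypass the proxy
    -Dhttp.nonProxyHosts=localhost|*.internal.example.org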

In the meantime, any success stories around this?

Hmmm, don't attempt to configure the proxy at the Java level... what a mess, particularly as it relates to things like Elasticsearch outputs and... well, lots of other potential things.

Probably better to use the regular geoipupdate tool and an explicit 'database' path in the geoip filter, for now at least.
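As a minimal sketch of what that looks like (the database path and field names here are examples, not a drop-in config; adjust them to your own pipeline):

    filter {
      geoip {
        # Point the filter at a database that geoipupdate keeps fresh;
        # with an explicit database path, the bundled database manager
        # should no longer need to download anything for this filter.
        database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
        source   => "[source][ip]"
        target   => "[source][geo]"
      }
    }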

For anyone else using Ansible to deploy Logstash, I've put part of my playbook up as a public Gist. It deploys MaxMind's geoipupdate (direct from MaxMind) and, driven by playbook variables, configures it to use a proxy. A trimmed-down sketch of the idea is below the link.

Ansible deployment for MaxMind geoipupdate
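To give a rough idea of the approach (this is a simplified sketch, not the Gist itself; the package source, paths, and the geoipupdate_* variable names are illustrative assumptions):

    - name: Install MaxMind geoipupdate
      ansible.builtin.package:
        name: geoipupdate
        state: present

    - name: Configure geoipupdate to go out via the proxy
      ansible.builtin.copy:
        dest: /etc/GeoIP.conf
        owner: root
        group: root
        mode: "0640"
        content: |
          AccountID {{ geoipupdate_account_id }}
          LicenseKey {{ geoipupdate_license_key }}
          EditionIDs GeoLite2-City GeoLite2-ASN
          DatabaseDirectory /usr/share/GeoIP
          # geoipupdate has native proxy support in its config file
          Proxy {{ geoipupdate_proxy_host }}:{{ geoipupdate_proxy_port }}

    - name: Refresh the databases on a schedule
      ansible.builtin.cron:
        name: geoipupdate
        special_time: weekly
        job: /usr/bin/geoipupdate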
