Dockerized Logstash and netflow module install issue

Hello,

I am currently having issues installing the Netflow module inside a dockerized Logstash.

Dockerfile:

FROM docker.elastic.co/logstash/logstash:7.3.0

ADD modules.yml /usr/share/logstash/config/
RUN bin/logstash --modules netflow --setup 

Inside the modules.yml file:

modules:
- name: netflow
  var.elasticsearch.hosts: "127.0.0.1:9200"
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "changeme"
  var.kibana.host: "127.0.0.1:5601"
  var.kibana.username: "elastic"
  var.kibana.password: "changeme"

The log messages I am getting during the Docker build:

Step 12/12 : RUN bin/logstash --modules netflow --setup
 ---> Running in 313d7506c713
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-08-03T18:53:00,092][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-08-03T18:53:00,110][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-08-03T18:53:00,465][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-08-03T18:53:00,471][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-08-03T18:53:00,498][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"3cfe1c3c-ef79-458c-89dc-f428492b84c0", :path=>"/usr/share/logstash/data/uuid"}
[2019-08-03T18:53:00,990][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-08-03T18:53:01,826][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-08-03T18:53:02,017][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2019-08-03T18:53:02,135][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-08-03T18:53:02,144][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2019-08-03T18:53:02,187][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2019-08-03T18:53:02,334][INFO ][logstash.config.modulescommon] Setting up the netflow module
[2019-08-03T18:53:02,564][ERROR][logstash.modules.kibanaclient] Error when executing Kibana client request {:error=>#<Manticore::SocketException: Connection refused (Connection refused)>}
[2019-08-03T18:53:02,643][ERROR][logstash.modules.kibanaclient] Error when executing Kibana client request {:error=>#<Manticore::SocketException: Connection refused (Connection refused)>}
[2019-08-03T18:53:02,695][ERROR][logstash.config.sourceloader] Could not fetch all the sources {:exception=>LogStash::ConfigLoadingError, :message=>"Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:108:in `block in pipeline_configs'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:54:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source/modules.rb:14:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:61:in `block in fetch'", "org/jruby/RubyArray.java:2572:in `collect'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:60:in `fetch'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:148:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:96:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:367:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[2019-08-03T18:53:02,703][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:155:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:96:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:367:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[2019-08-03T18:53:02,924][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-03T18:53:07,994][INFO ][logstash.runner          ] Logstash shut down.
The command '/bin/sh -c bin/logstash --modules netflow --setup' returned a non-zero code: 1

Any words of wisdom? Am I missing something dumb?

Are you by any chance running an Elasticsearch cluster based on the OSS distribution rather than the default one?

I don't think so. I am getting all my Docker images from the official Elastic registry (docker.elastic.co).

For the Logstash Docker image I used this link:
https://www.elastic.co/guide/en/logstash/current/docker.html

and to learn how to configure Logstash I used:
https://www.elastic.co/guide/en/logstash/current/docker-config.html

I also set up the passwords on the Elasticsearch server by running
bin/elasticsearch-setup-passwords interactive (on the Elasticsearch server)

@Christian_Dahlqvist
To anyone who finds this post: I think I found the fix. I kinda overthought this whole process.

Set your logstash.yml like so:

modules:
- name: netflow
  var.elasticsearch.hosts: "127.0.0.1:9200"
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "changeme"
  var.kibana.host: "127.0.0.1:5601"
  var.kibana.username: "elastic"
  var.kibana.password: "changeme"

In the Dockerfile, I removed this line:

RUN bin/logstash --modules netflow --setup

Once that's done, check the Docker logs of your Logstash container and with any luck you'll see it working.

Only took me like 10 hours to figure this mess out lol
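
One more note in case Elasticsearch and Kibana run as separate containers: 127.0.0.1 inside the Logstash container points at the Logstash container itself, so the module vars need hostnames Logstash can actually resolve (this is also why the RUN step failed during docker build, when nothing else is reachable). A rough docker-compose sketch of what I mean; the service names elasticsearch and kibana are assumptions you would adapt to your own stack:

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    environment:
      - discovery.type=single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:7.3.0
    volumes:
      # logstash.yml carries the modules: block shown above, but with
      # var.elasticsearch.hosts: "elasticsearch:9200" and var.kibana.host: "kibana:5601"
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml

With that in place the module setup runs when the container starts, not at image build time.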

I'm going to try this tomorrow, thanks!

Also, a strange thing I've found: Logstash seems to be overwriting the netflow router host field with (I think) a Docker IP address: "host": "172.19.0.1".

Check your docker-compose file; that might be the issue if you are exposing ports.

It was a problem with Logstash being on the Docker bridge network by default. I had to move it to host networking (so the netflow source IP is not overwritten).

services:
  logstash:
    network_mode: host
    image: "docker.elastic.co/logstash/logstash:{
    hostname: "{{ansible_hostname}}"
    container_name: logstash-netflow
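
For anyone curious why bridge networking breaks this: on the default bridge network you would publish the UDP port instead, roughly like the sketch below (2055 is assumed as the port the netflow input listens on; adjust to your config), but inbound packets then pass through Docker's NAT/userland proxy, so every flow appears to come from the bridge gateway (the 172.19.0.1 seen above) rather than the real exporter. Host networking avoids that translation.

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.3.0
    ports:
      # delivery works, but source IPs get rewritten by Docker's proxy/NAT
      - "2055:2055/udp"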

There is a big thread on this:

Were you able to get the default netflow dashboards working? I still have not, although my netflow/Logstash pipeline is working well.

This command cannot be run from within the container:
/bin/sh -c bin/logstash --modules netflow --setup

as I get:
Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
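
One workaround I may try (just a sketch; logstash-netflow as the container name and the /tmp path are assumptions): run the setup as a one-off process with its own data directory so it does not clash with the running instance, e.g.

docker exec -it logstash-netflow bin/logstash --modules netflow --setup --path.data /tmp/netflow-setup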

Not yet. Trying to figure that one out next.

Keep me updated on your progress as well.

Getting the same error as you right now

After you got it working, did your .conf files (in /etc/logstash/conf.d) stop working?

Right now I have Netflow working, but I am trying to get my other .conf files loaded into Logstash as well.

Any idea?

I just imported the visualizations by hand from https://github.com/elastic/logstash/tree/master/modules/netflow/configuration/kibana/7.x
Worked fine.

Wow, you imported all those JSON files manually?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.