Netflow module launch errors, some SSL related(?)

Was hoping someone might be able to advise.

Running: the latest versions of Elasticsearch, Kibana, and Logstash installed via the repo, plus Oracle Java 1.8.0_181, on an Ubuntu Server 18.04 VM.

I'm trying to set up an ELK stack for NetFlow, but when I run the following command it produces a string of errors and clearly fails.

As root:

/usr/share/logstash/bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=2055

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-09-23 14:44:50.001 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-09-23 14:44:50.024 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-09-23 14:44:50.681 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-09-23 14:44:50.764 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"b76cbfb0-1367-46fd-ab15-152e23340407", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-09-23 14:44:51.878 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.4.1"}
[INFO ] 2018-09-23 14:44:52.161 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] modulescommon - Setting up the netflow module
[ERROR] 2018-09-23 14:44:53.468 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] kibanaclient - Error when executing Kibana client request {:error=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>}
[ERROR] 2018-09-23 14:44:53.747 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] kibanaclient - Error when executing Kibana client request {:error=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>}
[ERROR] 2018-09-23 14:44:54.114 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] sourceloader - Could not fetch all the sources {:exception=>LogStash::ConfigLoadingError, :message=>"Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:108:in `block in pipeline_configs'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:54:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source/modules.rb:14:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:61:in `block in fetch'", "org/jruby/RubyArray.java:2481:in `collect'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:60:in `fetch'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:142:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:93:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[ERROR] 2018-09-23 14:44:54.137 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:149:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:93:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[INFO ] 2018-09-23 14:44:54.589 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}

The relevant yml configs:

kibana.yml:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"

elasticsearch.yml:

network.host: localhost
http.port: 9200

I know Elasticsearch is running:

curl http://localhost:9200
{
  "name" : "97Gu603",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "vcIPJDjZTJScJDFqctebcw",
  "version" : {
    "number" : "6.4.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "e36acdb",
    "build_date" : "2018-09-13T22:18:07.696808Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

and Logstash:

bin/logstash -e 'input { stdin { } } output { stdout {} }'

hello world
{
       "message" => "hello world",
      "@version" => "1",
          "host" => "elk",
    "@timestamp" => 2018-09-22T08:22:38.284Z

And Kibana is accessible via browser on the host machine.

I've tried searching the various errors but haven't found any fixes.

All help appreciated.

After doing some more digging, I've tried the following:

/usr/share/logstash/bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=2055 -M var.elasticsearch.hosts: "localhost:9200 -M var.kibana.host: "localhost:5601"

Didn't help.

I ran into the same problem. From what I have come to understand from the error messages and various testing, the documented defaults claiming to prefer plaintext are incorrect. During module setup, Logstash instead seems to require SSL, and will barf if it sees plaintext.

The following settings in logstash.yml seem to work fine for setup:

modules:
  - name: netflow
    var.input.udp.port: 2055
    var.elasticsearch.hosts: http://127.0.0.1:9200
    var.elasticsearch.ssl.enabled: false
    var.kibana.host: 127.0.0.1:5601
    var.kibana.scheme: http
    var.kibana.ssl.enabled: false
    var.kibana.ssl.verification_mode: disable

I started the setup with only the following:

--modules netflow --setup

The other options were read from logstash.yml.

Note: Your command-line switches are incorrect. The syntax is -M netflow.var.whatever=somevalue, so you have to prepend netflow. to each setting, and use = and not :.
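
For reference, the corrected form of the command used earlier in the thread would look something like this (one -M per setting, each key prefixed with netflow. and assigned with = rather than :):

/usr/share/logstash/bin/logstash --modules netflow --setup \
  -M netflow.var.input.udp.port=2055 \
  -M netflow.var.elasticsearch.hosts=localhost:9200 \
  -M netflow.var.kibana.host=localhost:5601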

Edit: Using 6.4.1 for all components. X-Pack functionality is not explicitly activated anywhere.


Thanks for your help, Bjorn, but unfortunately it hasn't fixed things.

Here's the relevant section of my logstash.yml for reference; it's the same as yours apart from the UDP port.
(After adding the lines I restarted all the ELK services.)

# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
modules:
  - name: netflow
    var.input.udp.port: 2056
    var.elasticsearch.hosts: http://127.0.0.1:9200
    var.elasticsearch.ssl.enabled: false
    var.kibana.host: 127.0.0.1:5601
    var.kibana.scheme: http
    var.kibana.ssl.enabled: false
    var.kibana.ssl.verification_mode: disable
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#

Launched setup using:

root@elk:/usr/share/logstash# ./bin/logstash --modules netflow --setup

Same sort of error, though the Kibana client now reports "Connection refused" rather than the SSL message:

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-09-27 18:33:07.559 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-09-27 18:33:07.572 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-09-27 18:33:07.950 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-09-27 18:33:08.007 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"1ef2d037-3a29-429d-8015-f960e5097576", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-09-27 18:33:08.496 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.4.1"}
[INFO ] 2018-09-27 18:33:08.680 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] modulescommon - Setting up the netflow module
[ERROR] 2018-09-27 18:33:09.218 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] kibanaclient - Error when executing Kibana client request {:error=>#<Manticore::SocketException: Connection refused (Connection refused)>}
[ERROR] 2018-09-27 18:33:09.351 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] kibanaclient - Error when executing Kibana client request {:error=>#<Manticore::SocketException: Connection refused (Connection refused)>}
[ERROR] 2018-09-27 18:33:09.497 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] sourceloader - Could not fetch all the sources {:exception=>LogStash::ConfigLoadingError, :message=>"Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:108:in `block in pipeline_configs'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:54:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source/modules.rb:14:in `pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:61:in `block in fetch'", "org/jruby/RubyArray.java:2481:in `collect'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:60:in `fetch'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:142:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:93:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
[ERROR] 2018-09-27 18:33:09.505 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: Failed to import module configurations to Elasticsearch and/or Kibana. Module: netflow has Elasticsearch hosts: [\"localhost:9200\"] and Kibana hosts: [\"localhost:5601\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:149:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:93:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

First of all, do not do this as the root user. You may have changed or set permissions on files (e.g. log files) so that the logstash user account won't be able to access them later. Pay particular attention to the two directories mentioned in the output ([main] writabledirectory - Creating directory {...}), and make sure to check the other usual locations and update ownership accordingly; a sketch of that is below.
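
If ownership does need fixing, a rough sketch of what that might look like (the paths here are the directories Logstash created while running as root, per the output above, plus the default deb package locations; check what actually exists on your system before running anything):

# data directories created under the install dir while running as root
chown -R logstash:logstash /usr/share/logstash/data
# default package locations for data and logs (adjust if yours differ)
chown -R logstash:logstash /var/lib/logstash /var/log/logstash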

Then, as the error message says, you can use --path.settings to tell Logstash where logstash.yml exists, so try doing that. You'd do something like this:

logstash@elk:/usr/share/logstash$ ./bin/logstash --path.settings /etc/logstash --modules netflow --setup

Damn it; I read in a guide that it should be done as root... :confused:

How would I switch to the logstash account, though? When I try, it asks for a password I've never set, and none of the typical ones (changeme, pass, password) work.

Or do I need to create a logstash user, add it to the logstash group, and use that for config/launches?

Depends on how you installed Logstash - a package manager should do all this for you.

If the user account exists: su - logstash -s /bin/bash

On the other hand, if there's no dedicated account, you are of course free to keep doing this as root (although I would recommend against it on general separation-of-duties principles).

I did install via the package manager. When I make any changes to a yml file, should I be switching to the relevant user account to make those changes, like adjusting addressing so the stack knows it's running purely off localhost?

Both su logstash and su - logstash -s /bin/bash require password authentication; god knows what the password is.

If I log in as root and change the password with passwd logstash, authentication still fails when I try to switch to that account, reporting "This account is currently not available."

I think I might start again using the archives; I'm getting the feeling that's the preferred method.

I don't know if this is relevant or not to be honest:

root@elk:/home/tom# su kibana
root@elk:/home/tom# su logstash
This account is currently not available.

Use -s /bin/bash when becoming the logstash user, and you have to be root (or use sudo) to do it.
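
(The "This account is currently not available" message most likely means the packaged logstash account was created with a nologin shell, which is exactly why forcing -s /bin/bash is needed. You can check the account's shell with getent; the UID/GID, home directory, and shell path shown here are illustrative and vary by distro:)

getent passwd logstash
# example output (values vary):
# logstash:x:998:998:logstash:/usr/share/logstash:/usr/sbin/nologin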

Which Linux distribution are you using?

Under Ubuntu I was using the root account with -s /bin/bash

Under CentOS I get:

[root@elkcentos ~]# su logstash -s /bin/bash
bash-4.2$

I've got builds on both Ubuntu Server 18.04 and CentOS 7 now, but I've literally just built the CentOS VM; any preference?

logstash@elk:~$ ./bin/logstash --path.settings /etc/logstash --modules netflow --setup
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-10-05T16:12:30,889][INFO ][logstash.config.source.modules] Both command-line and logstash.yml modules configurations detected. Using command-line module configuration to override logstash.yml module configuration.
[2018-10-05T16:12:30,906][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-05T16:12:30,920][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
[2018-10-05T16:12:30,930][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

I set Logstash to start during boot along with Elasticsearch and Kibana.

Looks like having Logstash already running as a boot-time service was causing the issue; the running instance holds the configured data directory, so a second manual launch bails out.
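
Note for anyone else hitting this: if the packaged service stays installed, one way to avoid the clash (assuming systemd, as on Ubuntu 18.04 and CentOS 7) is to stop it before running the module setup manually and start it again afterwards:

sudo systemctl stop logstash
# run the setup as the logstash user, e.g.:
#   ./bin/logstash --path.settings /etc/logstash --modules netflow --setup
sudo systemctl start logstash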

For clarity's sake: it's OK to edit as root, but you need to execute as the applicable user account?

This is what I'm getting now:

logstash@elk:~$ ./bin/logstash --path.settings /etc/logstash --modules netflow --setup
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-10-05T16:26:23,384][INFO ][logstash.config.source.modules] Both command-line and logstash.yml modules configurations detected. Using command-line module configuration to override logstash.yml module configuration.
[2018-10-05T16:26:23,416][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-05T16:26:24,091][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-10-05T16:26:24,175][INFO ][logstash.config.source.modules] Both command-line and logstash.yml modules configurations detected. Using command-line module configuration to override logstash.yml module configuration.
[2018-10-05T16:26:24,304][INFO ][logstash.config.modulescommon] Setting up the netflow module
[2018-10-05T16:26:24,919][WARN ][logstash.modules.kibanaclient] SSL explicitly disabled; other SSL settings will be ignored
[2018-10-05T16:26:47,681][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"module-netflow", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-05T16:26:48,033][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2018-10-05T16:26:48,045][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-10-05T16:26:48,139][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2018-10-05T16:26:48,163][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-05T16:26:48,168][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-05T16:26:48,202][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://127.0.0.1:9200"]}
[2018-10-05T16:26:48,972][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-10-05T16:26:49,032][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[2018-10-05T16:26:49,058][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-10-05T16:26:49,059][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[2018-10-05T16:26:49,060][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-10-05T16:26:49,062][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[2018-10-05T16:26:49,063][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-10-05T16:26:49,064][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-ASN.mmdb"}
[2018-10-05T16:26:49,189][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"module-netflow", :thread=>"#<Thread:0x6bd48111 run>"}
[2018-10-05T16:26:49,351][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"module-netflow"], :non_running_pipelines=>[]}
[2018-10-05T16:26:49,354][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>"0.0.0.0:2056"}
[2018-10-05T16:26:49,593][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"0.0.0.0:2056", :receive_buffer_bytes=>"212992", :queue_size=>"2000"}
[2018-10-05T16:26:49,927][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Looks like that got it working. Thanks Bjorn.
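
For anyone following along: assuming the module's default index naming, a quick way to confirm flows are actually being indexed is to list the indices and look for netflow-* entries, e.g.:

curl http://localhost:9200/_cat/indices?v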
