Run "---setup" netflow module when LS is a service (was Install netflow module error)

@tatdat ^ question

@guyboertje
Yeah, I installed Logstash successfully and it worked with my config. Now I want to run the netflow module setup (to import the dashboards, searches, and visualizations into Kibana, plus the dictionary, GeoIP, and filter config for Logstash).

This is the process that we brainstormed.

  1. Stop the service: `sudo systemctl stop logstash.service`
  2. Locate the various settings files for logstash, see https://www.elastic.co/guide/en/logstash/current/dir-layout.html#deb-layout
  3. Edit the logstash.yml to specify all the module settings, so you don't need to put them on the cmd line (a sketch of this follows the list), see https://www.elastic.co/guide/en/logstash/current/logstash-modules.html#setting-logstash-module-config
  4. Edit the /etc/logstash/startup.options file, look for the LS_OPTS entry and add --setup
  5. Start the service
  6. Check in Kibana that the dashboards are uploaded.
  7. Stop the service.
  8. Remove the --setup option from LS_OPTS.
  9. Start the service.
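For step 3, a minimal sketch of what the module block in logstash.yml could look like (the hosts, ports and scheme below are placeholders, and the path assumes the deb layout):

```
# Append a netflow module block to logstash.yml so nothing has to go on the
# command line. Every value below is a placeholder -- substitute your own.
sudo tee -a /etc/logstash/logstash.yml <<'EOF'
modules:
  - name: netflow
    var.input.udp.port: 9996
    var.elasticsearch.hosts: "http://es.example.com:9200"
    var.kibana.host: "kibana.example.com:5601"
    var.kibana.scheme: "http"
EOF
```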

Report back whether it worked or not.

I followed your guide, but when I start the service, nothing happens. I checked with the ps command and saw Logstash running, but the log file (logstash-plain.log) is empty and, of course, nothing shows up in Kibana.

This is my config.
P.S.: I set up Kibana to run on port 443 with HTTPS.

Please post your logstash.yml file contents here (inside two triple backtick ``` lines)

Yeah, I updated my config.

Make sure that:

- `var.kibana.scheme` is set to `"https"`
- `var.kibana.host` does not contain a `https://` prefix
- `var.elasticsearch.hosts` does contain a `https://` prefix
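A quick way to eyeball those three settings (untested sketch, placeholder host names):

```
# Pull the relevant module settings out of logstash.yml and compare with the
# expected shape shown in the comments (placeholder values).
grep -E 'var\.(kibana\.(scheme|host)|elasticsearch\.hosts)' /etc/logstash/logstash.yml
#   var.kibana.scheme: "https"
#   var.kibana.host: "kibana.example.com:443"               <- no scheme prefix here
#   var.elasticsearch.hosts: "https://es.example.com:9200"  <- scheme prefix stays
```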

Yeah, I added var.kibana.scheme to my config but still got the same result.
I'm wondering why nothing shows up in the log file.

In logstash.yml, set log.level to debug.

Then do steps 4-7 and post the full log contents here.
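Something along these lines, assuming the deb layout for paths (sketch only):

```
# Turn on debug logging, then re-run steps 4-7 and watch the plain log
# while the service comes up.
echo 'log.level: debug' | sudo tee -a /etc/logstash/logstash.yml
sudo tail -f /var/log/logstash/logstash-plain.log
```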

Here is my log file

This entry tells me that the LS_OPTS method is not working.

[2017-11-28T17:28:17,187][DEBUG][logstash.runner          ] modules_setup: false

One step I did not tell you about (from the startup.options file):

################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################

Revised steps (the full sequence is sketched as shell commands after the list):

  1. Stop the service: `sudo systemctl stop logstash.service`
  2. Locate the various settings files for logstash, see https://www.elastic.co/guide/en/logstash/current/dir-layout.html#deb-layout
  3. Edit the logstash.yml to specify all the modules settings, so you don't need to put them on the cmd line, see https://www.elastic.co/guide/en/logstash/current/logstash-modules.html#setting-logstash-module-config
  4. Edit the /etc/logstash/startup.options file, look for the LS_OPTS entry and add --setup
  5. Run $LS_HOME/bin/system-install as root
  6. Start the service
  7. Check in Kibana that the dashboards are uploaded.
  8. Stop the service.
  9. Remove the --setup option from LS_OPTS.
  10. Run $LS_HOME/bin/system-install as root
  11. Start the service.
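The same sequence end to end as shell commands (untested sketch; LS_HOME is assumed to be /usr/share/logstash on a deb install):

```
sudo systemctl stop logstash.service             # step 1
# step 4: add --setup to the LS_OPTS entry in /etc/logstash/startup.options, then:
sudo /usr/share/logstash/bin/system-install      # step 5: push the change into the init script
sudo systemctl start logstash.service            # step 6
# step 7: check in Kibana that the netflow dashboards arrived
sudo systemctl stop logstash.service             # step 8
# step 9: remove --setup from LS_OPTS again, then:
sudo /usr/share/logstash/bin/system-install      # step 10
sudo systemctl start logstash.service            # step 11
```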

Thanks @guyboertje. I followed your guide and rechecked Kibana. Still got nothing.

Here is the log file with debug level:
https://pastebin.com/gf38U7eG

So now modules_setup is set to true.
Also, in the logs I see:
"var.elasticsearch.hosts"=>"10.1.6.195:9200" -> no scheme
"var.kibana.host"=>"log.ho.fpt.vn" -> no port
"var.kibana.scheme"=>"https"

But in your original post you had:

/usr/share/logstash/bin/logstash --path.settings=/etc/logstash --modules netflow --setup -M "netflow.var.kibana.host=http://kubana-url:5601" -M "netflow.var.output.kibana.username=kibana" -M "netflow.var.output.kibana.password=my-password" -M "netflow.var.input.udp.port=9996" -M "netflow.var.elasticsearch.host=http://10.1.1.12:9200" -M "netflow.var.elasticsearch.user=elastic" -M "netflow.var.elasticsearch.password=my-password"

What port is your Kibana server running on?
What is the scheme for Elasticsearch and Kibana? Is it http or https?
Is log.ho.fpt.vn the Kibana server or a proxy of some kind?

The Kibana server is running on port 443 with SSL enabled (not behind a reverse proxy).
The scheme for ES is http, for Kibana it's https.

I'm trying to set Kibana to run on the default port 5601 with the http scheme, but I still can't set up the netflow module.
P.S.: I'm testing with X-Pack, with TLS enabled for transport between nodes.
The ES cluster doesn't have any data.

I still can't set up the netflow module :cry:
Can someone help me?

I have tried that. It seems that in my version (5.x) I can use 'systemctl start logstash' to start Logstash, and it does work: I can see all the ports I've configured in my conf.d, but the netflow module doesn't take effect. So I tried to use the command line to achieve what I want.

@antony.y, I tried to run it from the command line but without success.

/usr/share/logstash/bin/logstash --modules netflow --setup -M "netflow.var.kibana.host=http://10.1.11.115:5601" -M "netflow.var.output.kibana.username=username" -M "netflow.var.output.kibana.password=password" -M "netflow.var.input.udp.port=9996" -M "netflow.var.elasticsearch.hosts=http://10.1.11.115:9200" -M "netflow.var.elasticsearch.user=username" -M "netflow.var.elasticsearch.password=password"

I got some errors:

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[ERROR] 2017-12-13 13:53:10.439 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] sourceloader - Could not fetch all the sources {:exception=>LogStash::ConfigLoadingError, :message=>"Failed to parse the module configuration: [[401] ]", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:202:in __raise_transport_error'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:319:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/http/manticore.rb:67:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/client.rb:131:in perform_request'", "/usr/share/logstash/logstash-core/lib/logstash/elasticsearch_client.rb:85:in head'", "/usr/share/logstash/logstash-core/lib/logstash/elasticsearch_client.rb:55:in can_connect?'", "/usr/share/logstash/logstash-core/lib/logstash/elasticsearch_client.rb:139:in can_connect?'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:76:in block in pipeline_configs'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/logstash-core/lib/logstash/config/modules_common.rb:56:in pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source/modules.rb:16:in pipeline_configs'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:59:in block in fetch'", "org/jruby/RubyArray.java:2481:in collect'", "/usr/share/logstash/logstash-core/lib/logstash/config/source_loader.rb:58:in fetch'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:148:in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in block in initialize'"]}

[ERROR] 2017-12-13 13:53:10.444 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: Failed to parse the module configuration: [[401] ]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:155:in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in block in initialize'"]}
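The [401] in both messages means Elasticsearch rejected the request as unauthorized. A quick way to test the credentials outside Logstash (placeholder host and credentials):

```
# If these also return 401, the username/password themselves are wrong;
# if they succeed, the problem is in how the -M options pass them to the module.
curl -u username:password http://10.1.11.115:9200/
curl -u username:password http://10.1.11.115:5601/api/status
```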

Yeah, I found my problem.
Some of the parameters in the command are not right for Logstash 6.x.

It worked with

/usr/share/logstash/bin/logstash --modules netflow --setup -M "netflow.var.kibana.host=10.1.11.115:5601" -M "netflow.var.kibana.username=username" -M "netflow.var.kibana.password=password" -M "netflow.var.input.udp.port=9996" -M "netflow.var.elasticsearch.hosts=10.1.11.115:9200" -M "netflow.var.elasticsearch.username=username" -M "netflow.var.elasticsearch.password=password"
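Compared with the earlier command, the fixes were: `netflow.var.output.kibana.username/password` became `netflow.var.kibana.username/password`, `netflow.var.elasticsearch.user` became `netflow.var.elasticsearch.username`, and the `http://` prefixes were dropped from the host values.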


Got it, thanks for your information, hah~

