Logstash monitoring is not shown in the Kibana monitoring UI

[2019-05-21T17:26:21,810][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-05-21T17:26:21,914][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-05-21T17:26:21,952][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-05-21T17:26:22,093][DEBUG][logstash.agent           ] Starting agent
[2019-05-21T17:26:22,226][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>[]}
[2019-05-21T17:26:22,235][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/home/siddaram094/sample-log.conf"}
[2019-05-21T17:26:22,279][DEBUG][logstash.config.pipelineconfig] -------- Logstash Config ---------
[2019-05-21T17:26:22,280][DEBUG][logstash.config.pipelineconfig] Config from source {:source=>LogStash::Config::Source::Local, :pipeline_id=>:main}
[2019-05-21T17:26:22,286][DEBUG][logstash.config.pipelineconfig] Config string {:protocol=>"file", :id=>"/home/siddaram094/sample-log.conf"}
[2019-05-21T17:26:22,287][DEBUG][logstash.config.pipelineconfig] 


[2019-05-21T17:26:22,369][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2019-05-21T17:26:22,395][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2019-05-21T17:26:26,936][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-05-21T17:26:26,944][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-05-21T17:26:31,193][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"file", :type=>"input", :class=>LogStash::Inputs::File}
[2019-05-21T17:26:31,496][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}
[2019-05-21T17:26:31,543][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@id = "plain_a88afc6a-75b6-4be9-9847-5e3844b42990"
[2019-05-21T17:26:31,550][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@enable_metric = true
[2019-05-21T17:26:31,550][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2019-05-21T17:26:31,630][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@path = ["/var/log/apache2/access.log"]
[2019-05-21T17:26:31,631][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@id = "2f1ab82b2cd010f7f37d3da056dae283b8b342750c350d5e2f5999092e8188fe"
[2019-05-21T17:26:31,631][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@enable_metric = true
[2019-05-21T17:26:31,646][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@codec = <LogStash::Codecs::Plain id=>"plain_a88afc6a-75b6-4be9-9847-5e3844b42990", enable_metric=>true, charset=>"UTF-8">
[2019-05-21T17:26:31,651][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@add_field = {}
[2019-05-21T17:26:31,653][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@stat_interval = 1.0
[2019-05-21T17:26:31,661][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@discover_interval = 15
[2019-05-21T17:26:31,663][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@sincedb_write_interval = 15.0
[2019-05-21T17:26:31,664][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@start_position = "end"
[2019-05-21T17:26:31,664][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@delimiter = "\n"
[2019-05-21T17:26:31,664][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@close_older = 3600.0
[2019-05-21T17:26:31,664][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@mode = "tail"
[2019-05-21T17:26:31,664][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@file_completed_action = "delete"
[2019-05-21T17:26:31,665][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@sincedb_clean_after = 1209600.0
[2019-05-21T17:26:31,665][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@file_chunk_size = 32768
[2019-05-21T17:26:31,665][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@file_chunk_count = 140737488355327
[2019-05-21T17:26:31,667][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@file_sort_by = "last_modified"
[2019-05-21T17:26:31,667][DEBUG][logstash.inputs.file     ] config LogStash::Inputs::File/@file_sort_direction = "asc"
[2019-05-21T17:26:31,737][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"grok", :type=>"filter", :class=>LogStash::Filters::Grok}
[2019-05-21T17:26:31,778][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@match = {"message"=>"%{IPORHOST:ip} %{USER:user} %{USER:ident} \\[%{HTTPDATE:time}\\] \\\"%{WORD:httpmethod} / HTTP\\/%{NUMBER:httpVer}\\\" %{NUMBER:statuscode} %{NUMBER:resbytes:int} %{QS:data} %{QS:uagent}"}
[2019-05-21T17:26:31,779][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@id = "2a8c00bb9394e6dc339b98ee381b2ee5fce67e09c9ec1e6012fc81e8cca30239"
[2019-05-21T17:26:31,779][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@enable_metric = true
[2019-05-21T17:26:31,779][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@add_tag = []
[2019-05-21T17:26:31,779][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@remove_tag = []
[2019-05-21T17:26:31,780][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@add_field = {}
[2019-05-21T17:26:31,780][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@remove_field = []
[2019-05-21T17:26:31,780][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@periodic_flush = false
[2019-05-21T17:26:31,780][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@patterns_dir = []
[2019-05-21T17:26:31,780][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@pattern_definitions = {}
[2019-05-21T17:26:31,781][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@patterns_files_glob = "*"
[2019-05-21T17:26:31,781][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@break_on_match = true
[2019-05-21T17:26:31,781][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@named_captures_only = true
[2019-05-21T17:26:31,785][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@keep_empty_captures = false
[2019-05-21T17:26:31,785][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@tag_on_failure = ["_grokparsefailure"]
[2019-05-21T17:26:31,785][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@timeout_millis = 30000
[2019-05-21T17:26:31,785][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@tag_on_timeout = "_groktimeout"
[2019-05-21T17:26:31,785][DEBUG][logstash.filters.grok    ] config LogStash::Filters::Grok/@overwrite = []
[2019-05-21T17:26:31,831][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"geoip", :type=>"filter", :class=>LogStash::Filters::GeoIP}
[2019-05-21T17:26:31,866][DEBUG][logstash.filters.geoip   ] config LogStash::Filters::GeoIP/@source = "ip"
[2019-05-21T17:26:31,869][DEBUG][logstash.filters.geoip   ] config LogStash::Filters::GeoIP/@target = "clientip"
[2019-05-21T17:26:31,869][DEBUG][logstash.filters.geoip   ] config LogStash::Filters::GeoIP/@id =

This is claiming monitoring is disabled.

Can you double check your config for this again?

Below is the configuration:

xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.watcher.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.username: logstash-system
xpack.monitoring.elasticsearch.password: elasticlogstash
xpack.monitoring.elasticsearch.hosts: ["http://x.x.x.x:9200"]

You might need to double check where you're looking. When you start Logstash in debug mode (sudo bin/logstash --path.settings /etc/logstash/ -f /home/siddaram094/sample-log.conf --debug --config.debug), the output shows xpack.monitoring.enabled: false as the effective setting.

Maybe @ycombinator has some thoughts

Pinging @shaunak instead of @ycombinator

There are a couple of things you need to fix in your /etc/logstash/logstash.yml file (a cleaned-up sketch of the file follows this list):

  1. YAML is very sensitive to leading whitespace. Your /etc/logstash/logstash.yml file contains leading whitespace characters on every line that's not commented out. Please remove all of these leading whitespace characters and try again.

  2. As @chrisronline has pointed out earlier, you'll want to remove the following settings from your /etc/logstash/logstash.yml file:

    xpack.license.self_generated.type
    xpack.security.enabled
    xpack.watcher.enabled
    xpack.monitoring.collection.enabled

    These are Elasticsearch settings, not Logstash settings. Using them in a Logstash configuration file will result in errors like this when you run Logstash:

    An unexpected error occurred! {:error=>#<ArgumentError: Setting "xpack.license.self_generated.type" hasn't been registered
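
Putting both fixes together, a cleaned-up sketch of /etc/logstash/logstash.yml for monitoring would look roughly like this (host and credentials are taken from the config you posted earlier, so adjust to your environment; note the built-in user is logstash_system with an underscore, not logstash-system):

    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: ["http://x.x.x.x:9200"]
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: elasticlogstash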
    

Shaunak

@shaunak

  1. I have removed all the leading whitespace from /etc/logstash/logstash.yml and Logstash is working fine.

  2. I want to monitor Logstash, but I am facing difficulties while configuring the monitoring settings for it.
    In the documentation I have read that we need to configure the below settings to set up X-Pack monitoring for Logstash:
    xpack.license.self_generated.type
    xpack.security.enabled
    xpack.watcher.enabled
    xpack.monitoring.collection.enabled

I am getting the below error when I try to set up Logstash monitoring:

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-05-29T06:07:47,402][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-05-29T06:07:47,446][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-05-29T06:07:52,268][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://10.160.0.6:9200/_xpack'"}
[2019-05-29T06:07:52,389][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.

As the error message shows, you're getting a 401 from Elasticsearch. This means the Elasticsearch running at http://10.160.0.6:9200 is expecting authentication credentials.

In the logstash.yml you posted earlier you had this setting:

xpack.monitoring.elasticsearch.hosts: ["http://10.160.0.5:9200"]

So I'm not sure why your Logstash is trying to talk to http://10.160.0.6:9200 instead now. Did you change something in your configuration?
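
One quick way to narrow this down is to hit the same endpoint Logstash is polling, using the credentials you configured (a sketch; substitute your real password):

    curl -u logstash_system:<password> http://10.160.0.6:9200/_xpack

If the credentials are accepted you should get a 200 with license and feature information rather than the 401.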

I have changed the settings.
What authentication credentials do I need to specify? Is it these?
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "elasticlogstash"

@siddaram_kj It just needs to be set to a valid username and password with the proper permissions on your Elasticsearch cluster. To see valid users and to modify their password(s) you can use the elasticsearch-users command.
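
For example, something along these lines (paths assume a package install, so adjust as needed; <username> is a placeholder):

    # list users defined in the file realm
    sudo /usr/share/elasticsearch/bin/elasticsearch-users list
    # change a file-realm user's password
    sudo /usr/share/elasticsearch/bin/elasticsearch-users passwd <username>

Note that built-in users such as logstash_system are not file-realm users; their passwords are set with bin/elasticsearch-setup-passwords or the change-password API, e.g.:

    curl -u elastic:<password> -H 'Content-Type: application/json' -X POST http://10.160.0.5:9200/_security/user/logstash_system/_password -d '{"password":"elasticlogstash"}'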

Verify that the xpack.monitoring.collection.enabled setting is true on the ES cluster (elasticsearch.yml). If that setting is false, the collection of monitoring data is disabled in Elasticsearch and data is ignored from all other sources.
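
You can also check the effective value over the API instead of reading elasticsearch.yml (a sketch, using credentials as discussed above):

    curl -u elastic:<password> "http://10.160.0.5:9200/_cluster/settings?include_defaults=true&flat_settings=true" | grep monitoring.collection

and the setting can be enabled dynamically rather than via elasticsearch.yml:

    curl -u elastic:<password> -H 'Content-Type: application/json' -X PUT http://10.160.0.5:9200/_cluster/settings -d '{"persistent":{"xpack.monitoring.collection.enabled":true}}'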

@ylasri

xpack.monitoring.collection.enabled is set to true on the Elasticsearch cluster, but Logstash monitoring data is still not being captured.
I am getting the below error when I start Logstash:

[2019-06-06T05:59:09,417][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/_xpack'"}

This often indicates that your Elasticsearch cluster is running the OSS distribution and not the default one that is required for Logstash monitoring to work. What is the output of curl http://10.160.0.5:9200?
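
For reference, the field to look for in that response is version.build_flavor: the default distribution should report "default" and the OSS distribution "oss". With valid credentials you can trim the response to just that field, e.g.:

    curl -u elastic:<password> "http://10.160.0.5:9200?filter_path=version.build_flavor"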

@Christian_Dahlqvist

When I run curl http://10.160.0.5:9200

I am getting the below output:

{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

How have you secured the cluster? What do you get for that request with valid credentials?

Yes, I have secured the cluster.

With valid credentials, when I query with curl -u elastic:elastic http://10.160.0.5:9200

I get the below output:

{
  "name" : "es1",
  "cluster_name" : "sidducluster",
  "cluster_uuid" : "ZeZRh8stQACPbhb0tAYGLQ",
  "version" : {
    "number" : "7.0.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "e4efcb5",
    "build_date" : "2019-04-29T12:56:03.145736Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.7.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

What is the output of the _license API, i.e. curl http://10.160.0.5:9200/_license, with appropriate credentials?

I have passed the credentials along with the above command and got the below output:

{
  "license" : {
    "status" : "active",
    "uid" : "93c6fa6a-ec7f-4056-9003-eca96b64d961",
    "type" : "trial",
    "issue_date" : "2019-05-10T04:16:49.941Z",
    "issue_date_in_millis" : 1557461809941,
    "expiry_date" : "2019-06-09T04:16:49.941Z",
    "expiry_date_in_millis" : 1560053809941,
    "max_nodes" : 1000,
    "issued_to" : "sidducluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

@Christian_Dahlqvist @chrisronline
Can I get any update on this?