X-Pack Logstash not showing in Kibana

I'm using a remote Elasticsearch monitoring cluster for X-Pack, with X-Pack enabled on the Logstash node. Elasticsearch and Kibana show up fine in Monitoring, and the Logstash monitoring index is created on the remote monitoring Elasticsearch. I see no errors in the logs, but the Monitoring tab in Kibana does not show Logstash.

All Elastic Stack components are version 6.3.

The index is getting created but is still not visible from Kibana. I am able to change and manage the UUID and node name. Any ideas?
{
......
    "ephemeral_id": "6a5f16c3-b415-4c1b-bc30-9ee44a352d79",
    "host": "cloudcontrol-logstash",
    "uuid": "ffotitest123456",
    "http_address": "127.0.0.1:9602",
    "name": "metricbeat_5401",
    "version": "6.3.0",
    "snapshot": false,
    "status": "green"
  },
  "events": {
    "in": 121,
    "filtered": 121,
    "out": 121,
    "duration_in_millis": 2161
  }
}
},
"sort": [
  1541767370643
]
}

Does X-Pack management need to be enabled? Here is the logstash.yml:

# ------------ X-Pack Settings (not applicable for OSS build) --------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html

xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.url: ["http://10.200.2.163:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true

# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html

#xpack.management.enabled: true
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.url: ["https://10.200.2.163:9200"]
#xpack.management.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

You do not need xpack.management.* to be enabled in logstash.yml for Logstash X-Pack Monitoring to work.

Your xpack.monitoring.* settings look okay. A couple of follow-up questions:

  1. When you start up Logstash, do you see any error messages in the log? Would it be possible for you to paste this log here after masking any sensitive details in it?

  2. Do you have X-Pack Security enabled? If you do, then you'll need to uncomment the xpack.monitoring.elasticsearch.username and xpack.monitoring.elasticsearch.password settings in your logstash.yml as well and set them to appropriate values. More on that here: https://www.elastic.co/guide/en/logstash/6.x/configuring-logstash.html.
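
For reference, if Security were enabled, the relevant lines in logstash.yml would need to be uncommented and would look something like this (the password value here is only a placeholder):

```yaml
# Monitoring credentials -- only needed when X-Pack Security is enabled.
# logstash_system is the built-in user intended for this purpose; replace
# the placeholder password with your real one.
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "CHANGE_ME"
```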

Thanks,

Shaunak

No X-Pack Security enabled yet.
I restarted Logstash; the log is below.
The index does get sent to the remote monitoring Elasticsearch.

Thanks

[2018-11-09T15:01:32,392][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x2127f0ab run>"}
[2018-11-09T15:01:34,150][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x11bb32b3@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:48 run>"}
[2018-11-09T15:04:06,305][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-09T15:04:11,649][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.0"}
[2018-11-09T15:04:30,028][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://10.200.2.163:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, id=>"8391c1b4dab58805d1cb82dade9703aad5b9a4f5307bde6ec18ac2e129147a81", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_44e6158d-0697-40fc-b5af-a28068af3c3b", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-11-09T15:04:30,135][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-09T15:04:30,137][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-11-09T15:04:34,174][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://10.200.2.163:9200/]}}
[2018-11-09T15:04:34,173][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://cloudcontrol-elasticsearch1.office.re.local:9200/]}}
[2018-11-09T15:04:34,190][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.200.2.163:9200/, :path=>"/"}
[2018-11-09T15:04:34,194][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://cloudcontrol-elasticsearch1.office.re.local:9200/, :path=>"/"}
[2018-11-09T15:04:35,637][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://cloudcontrol-elasticsearch1.office.re.local:9200/"}
[2018-11-09T15:04:35,642][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.200.2.163:9200/"}
[2018-11-09T15:04:36,334][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-09T15:04:36,335][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-09T15:04:36,336][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-11-09T15:04:36,341][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-11-09T15:04:36,366][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-09T15:04:36,377][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.200.2.163:9200"]}
[2018-11-09T15:04:36,409][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-09T15:04:37,534][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//cloudcontrol-elasticsearch1.office.re.local:9200"]}
[2018-11-09T15:04:37,712][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://10.200.2.163:9200/]}}
[2018-11-09T15:04:37,716][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.200.2.163:9200/, :path=>"/"}
[2018-11-09T15:04:38,368][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://10.200.2.163:9200/"}
[2018-11-09T15:04:38,374][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-11-09T15:04:38,375][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-11-09T15:04:40,146][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x72d31106 run>"}
[2018-11-09T15:04:41,072][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5041"}
[2018-11-09T15:04:41,095][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2ff31da6@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:48 sleep>"}
[2018-11-09T15:04:41,699][INFO ][org.logstash.beats.Server] Starting server on port: 5041
[2018-11-09T15:04:41,702][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>}
[2018-11-09T15:04:41,743][INFO ][logstash.inputs.metrics ] Monitoring License OK
[2018-11-09T15:04:45,235][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9602}

Thanks, logs look okay too. Can you run the following ES query on your remote monitoring Elasticsearch cluster and report the results here?

POST .monitoring-logstash-*/_search
{
  "size": 0,
  "aggs": {
    "type": {
      "terms": {
        "field": "type",
        "size": 10
      },
      "aggs": {
        "cluster_uuid": {
          "terms": {
            "field": "cluster_uuid",
            "size": 10
          }
        }
      }
    }
  }
}

I had a similar issue with my Logstash nodes not showing up in Monitoring. It turned out to be an indentation problem in logstash.yml: all settings before the xpack.* block started one space in (column 1), while the xpack.* settings started at the beginning of the line (column 0). After making the indentation consistent and restarting Logstash, the node appeared in Monitoring.
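
To illustrate the indentation pitfall described above (the setting names here are just examples; in YAML a leading space turns what should be a top-level key into a nested one, so mixed indentation can make the file fail to parse or cause the xpack.* settings to be ignored):

```yaml
# Broken: keys before the xpack block are indented one space, while the
# xpack key starts at column 0 -- the file is no longer one flat mapping.
 node.name: example-node
 path.data: /var/lib/logstash
xpack.monitoring.enabled: true

# Fixed: every top-level key starts at column 0.
node.name: example-node
path.data: /var/lib/logstash
xpack.monitoring.enabled: true
```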


Logstash is creating the index documents, and they appear correct. I am using the remote Elasticsearch monitoring cluster. I did manually adjust the Kibana URL to include "cluster_uuid": "eCSSS5C_TsuIpqFn9LLaBg".

Still stuck on this.

http://10.200.2.161:5601/app/monitoring#/logstash?_g=(cluster_uuid:'eCSSS5C_TsuIpqFn9LLaBg')

{
  "_index": ".monitoring-logstash-6-2018.11.09",
  "_type": "doc",
  "_id": "i4Pq-mYBkjQ9k5j1-Gwh",
  "_score": null,
  "_source": {
    "cluster_uuid": "eCSSS5C_TsuIpqFn9LLaBg",
    "timestamp": "2018-11-09T23:59:54.908Z",
    "interval_ms": 1000,
    "type": "logstash_state",
    "source_node": {
      "uuid": "zrr42fh5RO6nuLkSKz8Ayw",
      "host": "10.200.2.163",
      "transport_address": "10.200.2.163:9300",
      "ip": "10.200.2.163",
      "name": "node-1",
      "timestamp": "2018-11-09T23:59:54.908Z"
    },

I added X-Pack (trial) to the monitoring cluster and it appears to be working correctly now. Not sure why that was required, but I'm moving forward now.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.