Kibana throwing 404 when trying to view Logstash Node

I have monitoring configured in Logstash and Kibana, and the Logstash overview on the Monitoring tab looks fine. But as soon as I click the Logstash Nodes link on the Monitoring tab, I get a 404. Kibana logs the error below, but I need help understanding it.
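
For context, this is roughly what the monitoring section of my logstash.yml looks like (the URL and credentials here are placeholders, not my real values):

# logstash.yml -- X-Pack monitoring settings in Logstash 5.x (values below are placeholders)
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://monitoring-es:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "changeme"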

{
	"type": "request",
	"@timestamp": "2017-08-01T19:00:18Z",
	"tags": ["monitoring-ui",
	"error"],
	"pid": 12430,
	"level": "error",
	"message": "Not Found",
	"error": {
		"message": "Not Found",
		"name": "Error",
		"stack": "Not Found :: {\"path\":\"/.monitoring-data-2/logstash/ef34eee7-270b-4633-a27b-db79bd7f21ce\",\"query\":{\"_source\":\"timestamp,logstash.process.cpu.percent,logstash.jvm.mem.heap_used_percent,logstash.jvm.uptime_in_millis,logstash.events.out,logstash.logstash.http_address,logstash.logstash.name,logstash.logstash.host,logstash.logstash.uuid,logstash.logstash.status,logstash.logstash.version,logstash.logstash.pipeline,logstash.reloads\"},\"statusCode\":404,\"response\":\"{\\\"_index\\\":\\\".monitoring-data-2\\\",\\\"_type\\\":\\\"logstash\\\",\\\"_id\\\":\\\"ef34eee7-270b-4633-a27b-db79bd7f21ce\\\",\\\"found\\\":false}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:295:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:254:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:157:7)\n    at IncomingMessage.bound (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)\n    at emitNone (events.js:91:20)\n    at IncomingMessage.emit (events.js:185:7)\n    at endReadableNT (_stream_readable.js:974:12)\n    at _combinedTickCallback (internal/process/next_tick.js:80:11)\n    at process._tickDomainCallback (internal/process/next_tick.js:128:9)"
	}
}
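
The path in the stack trace is the exact document Kibana is requesting, so the failing lookup can be reproduced directly against the monitoring cluster (same index, type, and id as in the error):

GET /.monitoring-data-2/logstash/ef34eee7-270b-4633-a27b-db79bd7f21ce

On my cluster this comes back with "found": false, which matches the 404 above.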

Is this happening for any Logstash node or just a specific one?

What version of the Elastic stack are you using?

Using Logstash 5.5.0, and Elasticsearch & Kibana at 5.4.2.

So the monitoring data from Logstash is going into the .monitoring-logstash-2-* indices, while Kibana is generating its query against the .monitoring-data-2 index. All other links on the Monitoring tab in Kibana are working fine.
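
A quick way to confirm where the Logstash monitoring documents are actually landing is to list the monitoring indices on the cluster:

GET /_cat/indices/.monitoring-*?v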

So is this because of a version mismatch? Is there a way I can change where the monitoring info for Logstash gets indexed, or a way to tell Kibana to look at the .monitoring-logstash index rather than the .monitoring-data index?

Running different versions of the products isn't recommended because it can cause issues like the one you are seeing. I would suggest upgrading Kibana (and any other parts of your Elastic stack) to 5.5.0 as well.

Shaunak

The above page says ES/Kibana 5.4.x works with Logstash 5.5.x. Am I reading it wrong?

There is a note there too:

  • We recommend running the latest version of Beats, Logstash, and ES-Hadoop; earlier versions will work with reduced functionality.

Hence Logstash was updated to the latest version; upgrading ES, Kibana, X-Pack, our custom realms, and handling any other breaking changes would require additional effort from our infra team.

Upgrading Logstash is very easy, so if there is a hack in Kibana it would be really helpful. I am ready to add extra configuration in Logstash to make this happen.

Thanks

@Jathin

You are reading that correctly. However, we had to make a breaking change (it is documented in the breaking changes list for X-Pack monitoring 5.5) to the monitoring schema prior to 6.0 because we are removing support for multiple _types. The new schema is ready for 6.x and it's far more efficient.

A good general rule of thumb is that, even ignoring breaking changes (which should be rare), it is always better for the Monitoring cluster to be the same version of Elasticsearch as the monitored stack, or a newer one. Running a monitored component ahead of the monitoring cluster is generally going to be okay, but there is always the risk of pitfalls like this one in doing so.

This change went into X-Pack monitoring 5.5+, and neither Elasticsearch nor Kibana recognizes the .monitoring-data-2 index in 5.5+. Logstash 5.5+ no longer sends the data that used to be routed to that index, because it expects the monitoring cluster to be the same or a newer version.

You could hack together a document so that the page can be displayed, if that's your motivation.

GET /.monitoring-logstash-*/_search
{
  "query": {
    "bool": {
      "must": [
         { "term": { "logstash_stats.logstash.uuid": "ef34eee7-270b-4633-a27b-db79bd7f21ce" } }
      ]
    }
  },
  "sort": {
    "timestamp": { "order": "desc" }
  }
}

Then, from the first (most recent) document it returns, copy the _source and rename the logstash_stats field to logstash.

PUT /.monitoring-data-2/logstash/ef34eee7-270b-4633-a27b-db79bd7f21ce
{
  ...
}

This will give you a stale view of the summary bar for that Logstash instance, but the charts will still be live and relevant.
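
Once the document is in place, you can sanity-check it with the same lookup Kibana performs (the same request as in the original error):

GET /.monitoring-data-2/logstash/ef34eee7-270b-4633-a27b-db79bd7f21ce

If it comes back with "found": true, the Logstash Nodes page should render again.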
