Kibana 4.1.1 not starting


(Will) #1

I'm rebuilding an ELK cluster (three separate AWS nodes, one each for Logstash, Elasticsearch, and Kibana) with slightly newer versions; the same configs worked before with very little setup intervention. I'm getting a very generic "unknown error" when trying to start Kibana 4.1.1 (I'd use 4.2.x, but there's still no package available).
I'm using the Elastic-provided RPMs for all packages: Elasticsearch 2.1.1 (with the cloud-aws plugin) and Logstash 2.0.0.

Using strace, I see the following conversation:
HEAD / HTTP/1.1
Host: es-host:9200
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 0

GET /_cluster/health/.kibana?timeout=5s HTTP/1.1
Host: es-host:9200
Connection: keep-alive

HTTP/1.1 408 Request Timeout
Content-Type: application/json; charset=UTF-8
Content-Length: 377

{"cluster_name":"logs","status":"red","timed_out":true,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

However, if I do a curl for the same request, I get:

# curl -XGET 'es-host:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "logs",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 25,
  "active_shards" : 50,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

With the .kibana index created (with dynamic mapping enabled), I get different results:
# curl -XGET 'es-host:9200/.kibana/_mapping'
{".kibana":{"mappings":{}}}

I've had a look through similar topics; however, there's no auth- or proxy-related issue AFAICT. (I did manually try the patch suggested in https://gist.github.com/anonymous/98d4e597f0f48aa9c524, but that didn't make a difference.)


(Will) #2

Full error (the bare "Error: unknown error" is not very helpful):
{"name":"Kibana","hostname":"XXXX","pid":13803,"level":50,"err":{"message":"unknown error","name":"Error","stack":"Error: unknown error\n at respond (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:237:15)\n at checkRespForFailure (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/opt/kibana/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2016-01-28T22:18:13.568Z","v":0}

Kibana goes a little further if I create the .kibana index, but then I see this in the response (via strace), and it dies with the same error:

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=UTF-8
Content-Length: 424

{"error":{"root_cause":[{"type":"search_parse_exception","reason":"No mapping found for [buildNum] in order to sort on"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":".kibana","node":"vXXs1v-URbOh1yE4ZrB57w","reason":{"type":"search_parse_exception","reason":"No mapping found for [buildNum] in order to sort on"}}]},"status":400}

Not sure why this error shows up in kibana.stdout and not kibana.stderr; shouldn't it be sent to the stderr stream?


(Matt Bargar) #3

For some reason it looks like Kibana can't talk to Elasticsearch. Do you have a reverse proxy in front of Kibana or ES? What happens if you remove it? You mentioned you upgraded Kibana; have you checked whether any config options in kibana.yml need to be updated? These parameters sometimes change. If that doesn't help, could you post the full Kibana console output from when Kibana is trying to start up?


(Will) #4
  • No proxy / reverse-proxy
  • I upgraded the rest of the stack, but due to the official RPMs from the Elastic yum repo not being available for 4.2 / 4.3, the Kibana version has not changed.
  • I removed the config and re-installed the package from yum to compare with the config that I'm pushing. The only differences are a trailing newline and talking to the ES cluster on port 9200 instead of localhost.
  • The first error in the console output is what's above in my previous post; all that ends up there are these two lines:
    {"name":"Kibana","hostname":"xxxx","pid":13855,"level":50,"err":{"message":"unknown error","name":"Error","stack":"Error: unknown error\n at respond (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:237:15)\n at checkRespForFailure (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/opt/kibana/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2016-01-28T22:48:21.677Z","v":0}
    {"name":"Kibana","hostname":"xxxx","pid":13855,"level":60,"err":{"message":"unknown error","name":"Error","stack":"Error: unknown error\n at respond (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:237:15)\n at checkRespForFailure (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/opt/kibana/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2016-01-28T22:48:21.679Z","v":0}

(Will) #5

P.S. If I pre-create the .kibana index, which I believe shouldn't be necessary (since Elasticsearch allows Kibana to create it), I get the following error instead (this also suggests to me that Kibana can talk to ES):

{"name":"Kibana","hostname":"xxx","pid":18253,"level":60,"err":{"message":{"root_cause":[{"type":"search_parse_exception","reason":"No mapping found for [buildNum] in order to sort on"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":".kibana","node":"ZwN1aiQcRsyI3o0MDHDf0A","reason":{"type":"search_parse_exception","reason":"No mapping found for [buildNum] in order to sort on"}}]},"name":"Error","stack":"Error: [object Object]\n at respond (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/opt/kibana/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/opt/kibana/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)"},"msg":"","time":"2016-01-29T17:52:24.883Z","v":0}


(Matt Bargar) #6

You're right, what you're seeing in that second error with the pre-created .kibana index is an error coming back from Elasticsearch. It seems like your ES cluster may not be in a good state. Is there anything interesting in your ES logs?

Also, as a general tip you might want to enable logging.verbose in kibana.yml in case that gives you any extra info.
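For reference, that would look something like the following in kibana.yml (the install path is assumed from the /opt/kibana layout mentioned above, and 4.1's config keys may differ slightly from 4.2+):

```yaml
# /opt/kibana/config/kibana.yml (path assumed from the RPM install)
# Emit verbose logging during startup, not just errors.
logging.verbose: true
```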


(Will) #7

As you can see above, the cluster state is "green" with 3 nodes; additionally, Logstash seems to have been able to create indices and is ingesting data (using the same ES cluster and the configs I was using before).

[Running this from the Kibana machine, where 'xxx' is the hostname used in the config; I did try specifying a specific backend server.]

I will note, though, that I am using auto-discovery via the cloud-AWS plugin; this is how I was doing it before, but don't know if that's relevant here.
# curl -XGET 'xxx:9200/_cat/indices/'
green open logstash-2016.01.27 5 1   5354 0  3.9mb  1.9mb
green open logstash-2016.01.28 5 1 141272 0 69.4mb 34.6mb
green open logstash-2016.01.29 5 1  72769 0 41.8mb 20.8mb

# curl -XGET 'xxx:9200/_cluster/health?pretty'
{
  "cluster_name" : "logs",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 15,
  "active_shards" : 30,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

When I haven't created the .kibana index by hand, it seems to generate this error (in the ES log):
[2016-01-29 10:41:59,388][INFO ][rest.suppressed ] /_cat/indices/.kibana Params: {index=.kibana}
[.kibana] IndexNotFoundException[no such index]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:566)
    [snip]
    at java.lang.Thread.run(Thread.java:745)
which would seem to imply that Kibana isn't trying to create the index. When the index exists, I see this in the ES log:
[2016-01-29 09:52:24,889][INFO ][rest.suppressed ] /.kibana/config/_search Params: {index=.kibana, type=config}
Failed to execute phase [query], all shards failed; shardFailures
{[ZwN1aiQcRsyI3o0MDHDf0A][.kibana][0]: RemoteTransportException[[xxx][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1000,"sort":[{"buildNum":{"order":"desc"}}],"query":{"filtered":{"filter":{"bool":{"must_not":[{"query":{"match":{"_id":"@@version"}}}]}}}}}]]; nested: SearchParseException[No mapping found for [buildNum] in order to sort on]; }
{[f-c5Gsm2Ti6qn7yy_hNZLw][.kibana][1]: RemoteTransportException[[xxx][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1000,"sort":[{"buildNum":{"order":"desc"}}],"query":{"filtered":{"filter":{"bool":{"must_not":[{"query":{"match":{"_id":"@@version"}}}]}}}}}]]; nested: SearchParseException[No mapping found for [buildNum] in order to sort on]; }
{[ZwN1aiQcRsyI3o0MDHDf0A][.kibana][2]: RemoteTransportException[[xxx][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1000,"sort":[{"buildNum":{"order":"desc"}}],"query":{"filtered":{"filter":{"bool":{"must_not":[{"query":{"match":{"_id":"@@version"}}}]}}}}}]]; nested: SearchParseException[No mapping found for [buildNum] in order to sort on]; }
{[vXXs1v-URbOh1yE4ZrB57w][.kibana][3]: RemoteTransportException[[xxx][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1000,"sort":[{"buildNum":{"order":"desc"}}],"query":{"filtered":{"filter":{"bool":{"must_not":[{"query":{"match":{"_id":"@@version"}}}]}}}}}]]; nested: SearchParseException[No mapping found for [buildNum] in order to sort on]; }
{[f-c5Gsm2Ti6qn7yy_hNZLw][.kibana][4]: RemoteTransportException[[xxx][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1000,"sort":[{"buildNum":{"order":"desc"}}],"query":{"filtered":{"filter":{"bool":{"must_not":[{"query":{"match":{"_id":"@@version"}}}]}}}}}]]; nested: SearchParseException[No mapping found for [buildNum] in order to sort on]; }
[traceback follows]
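For what it's worth, the failure mode with a hand-made empty index is self-consistent: under dynamic mapping, a field mapping only appears once a document containing that field is indexed, so an empty .kibana index has no buildNum field to sort on. A sketch of pre-creating the index with an explicit mapping follows; the host name is a placeholder and the mapping is an assumed approximation of what Kibana 4.1 expects, not its canonical one:

```shell
# 'es-host' is a placeholder; the buildNum mapping below is an assumption
# based on the failing sort, not Kibana's official mapping.
KIBANA_MAPPING='{
  "mappings": {
    "config": {
      "properties": {
        "buildNum": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'
# Against a live cluster this creates .kibana with buildNum mapped, so the
# startup sort can at least parse; with no cluster the request just fails.
curl -XPUT 'es-host:9200/.kibana' -d "$KIBANA_MAPPING" || true
```

Note this would only paper over the sort error, not any deeper incompatibility between the Kibana and Elasticsearch versions involved.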


(Will) #8

P.S. I did try setting logging.verbose to true per your suggestion, but I get the same output.


(Will) #9

While there's no official RPM yet (as discussed in another thread), I will try testing with a newer Kibana to see whether the behavior is different or a more useful error message is given.


(Matt Bargar) #10

Ugh, sorry, the versions you listed didn't click with me for some reason. Kibana 4.1.x doesn't support ES 2.x. You can see the compatibility matrix here: https://www.elastic.co/support/matrix#show_compatibility
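For anyone hitting this later, the mismatch is easy to confirm up front: the cluster's root endpoint reports the version. A minimal sketch of that check follows, using a canned response body so it's self-contained; against a live cluster you would replace it with the output of `curl -s 'es-host:9200/'` (host name is a placeholder):

```shell
# Canned example of an ES 2.x root-endpoint response; a live check would
# use: RESPONSE=$(curl -s 'es-host:9200/')
RESPONSE='{"name":"node-1","version":{"number":"2.1.1"},"tagline":"You Know, for Search"}'
# Extract the major version from "version.number".
MAJOR=$(echo "$RESPONSE" | sed -n 's/.*"number":"\([0-9]*\)\..*/\1/p')
# Per the support matrix, Kibana 4.1.x supports Elasticsearch 1.x only.
if [ "$MAJOR" != "1" ]; then
  echo "Elasticsearch ${MAJOR}.x is not supported by Kibana 4.1.x"
fi
```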

We're working on publishing new versions to the repos, you can track that progress here: https://github.com/elastic/kibana/issues/4813


(Will) #11

Thanks, yeah, I had been following comments in https://github.com/elastic/kibana/pull/3212

Given that the version number is trivially available from the cluster's root endpoint, would it be too much to ask for a useful error message in the case of a version mismatch (e.g., "You're running an unsupported version of Elasticsearch; please consult xxxx")?


(Will) #12

It does appear that a better error is given in later versions, including Kibana 4.1.4 (unfortunately also not available from that repo).


(Matt Bargar) #13

Yeah we actually enforce the ES version in 4.2 and above at least. Sounds like it may have been backported to 4.1.4 as well.

The ticket I linked to is actively being worked on, but since the issue is only tagged 4.4.0, I'm not sure whether there's a plan to release packages for older versions.

I know it's not ideal, but if you're in a pinch it is possible to build packages yourself: https://github.com/elastic/kibana/blob/master/CONTRIBUTING.md#building-os-packages


(system) #14