"null_pointer_exception" encountered in latest ES/Marvel

I have just installed the latest Elasticsearch 2.0.0 and Kibana 4.2.1, and then downloaded the latest Marvel plugin. I even put the following setting in elasticsearch.yml (is this needed?):
echo 'marvel.agent.enabled: false' >> ./config/elasticsearch.yml

However, I keep seeing the following exception in Kibana when clicking anything in the Marvel UI. I did not see anything like this when I used the package from the training class. BTW, this is a freshly built cluster.

Thanks in advance
Tao

error [13:39:19.962] [null_pointer_exception] null :: {"path":"/.marvel-es-*/_field_stats","query":{"level":"indices","ignore_unavailable":true},"body":"{"fields":["timestamp"],"index_constraints":{"timestamp":{"max_value":{"gte":"2015-11-24T20:39:12.312Z"},"min_value":{"lte":"2015-11-24T21:39:12.312Z"}}}}","statusCode":500,"response":"{"error":{"root_cause":[{"type":"null_pointer_exception","reason":null}],"type":"null_pointer_exception","reason":null},"status":500}"}
at respond (/home/nutanix/kibana-4.2.1-linux-x64/node_modules/elasticsearch/src/lib/transport.js:238:15)
at checkRespForFailure (/home/nutanix/kibana-4.2.1-linux-x64/node_modules/elasticsearch/src/lib/transport.js:201:7)
at HttpConnector. (/home/nutanix/kibana-4.2.1-linux-x64/node_modules/elasticsearch/src/lib/connectors/http.js:155:7)
at IncomingMessage.wrapper (/home/nutanix/kibana-4.2.1-linux-x64/node_modules/lodash/index.js:3095:19)
at IncomingMessage.emit (events.js:129:20)
at _stream_readable.js:908:16
at process._tickDomainCallback (node.js:381:11)

Hi,

marvel.agent.enabled is a setting from Marvel 1.x and is not supported in Marvel 2. The equivalent is marvel.enabled: true|false. This setting is only needed if you want to disable Marvel, which is not your case.
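If you ever do want to toggle it, a minimal sketch, assuming the new setting also lives in elasticsearch.yml like the 1.x one did and reusing the config path from your echo command, would be:

# drop the obsolete Marvel 1.x setting that was appended earlier
sed -i '/^marvel.agent.enabled/d' ./config/elasticsearch.yml
# only add this if you actually want Marvel off; leaving it out keeps Marvel enabled
echo 'marvel.enabled: false' >> ./config/elasticsearch.yml

and then restart the node so the change is picked up.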

We just released a bunch of new versions that fix some bugs (ES 2.1.0, Kibana 4.3.0 and Marvel 2.1.0)... Can you try with these?

-- Tanguy

I have the same problem, Tanguy.

error [04:49:47.250] [null_pointer_exception] null :: {"path":"/.marvel-es-*/_field_stats","query":{"level":"indices","ignore_unavailable":true},"body":"{"fields":["timestamp"],"index_constraints":{"timestamp":{"max_value":{"gte":"2015-12-03T03:49:47.068Z"},"min_value":{"lte":"2015-12-03T04:49:47.068Z"}}}}","statusCode":500,"response":"{"error":{"root_cause":[{"type":"null_pointer_exception","reason":null}],"type":"null_pointer_exception","reason":null},"status":500}"}
at respond (/data/www/kibana/node_modules/elasticsearch/src/lib/transport.js:238:15)
at checkRespForFailure (/data/www/kibana/node_modules/elasticsearch/src/lib/transport.js:201:7)
at HttpConnector. (/data/www/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:155:7)
at IncomingMessage.wrapper (/data/www/kibana/node_modules/lodash/index.js:3095:19)
at IncomingMessage.emit (events.js:129:20)
at _stream_readable.js:908:16
at process._tickDomainCallback (node.js:381:11)

I tried that and it doesn't work.

Concerning the null pointer exception, see my comment here: Problems with apache and marvel plugin

Hi Tanguy,

I have installed the latest software on a new cluster, but I am running into new issues:

  1. Without Marvel, I saw this message:
    log [16:53:47.173] [error][status][plugin:elasticsearch] Status changed from yellow to red - Elasticsearch is still initializing the kibana index... Trying again in 2.5 second.

Deleting all the indices does not help.

  2. After installing Marvel, I am getting these messages:
    log [16:50:55.862] [error][status][plugin:elasticsearch] Status changed from yellow to red - Waiting for Kibana index ".kibana" to come online failed.
    log [16:50:58.381] [error][status][plugin:elasticsearch] Status changed from red to red - Elasticsearch is still initializing the kibana index... Trying again in 2.5 second.
    error [16:51:02.531] [search_phase_execution_exception] all shards failed :: {"path":"/.marvel-es-data/cluster_info/_search","query":{},"body":"{"size":10000}","statusCode":503,"response":"{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query_fetch","grouped":true,"failed_shards":[]},"status":503}"}
    at respond (/home/nutanix/es/kibana-4.3.0-linux-x64/node_modules/elasticsearch/src/lib/transport.js:238:15)
    at checkRespForFailure (/home/nutanix/es/kibana-4.3.0-linux-x64/node_modules/elasticsearch/src/lib/transport.js:201:7)
    at HttpConnector. (/home/nutanix/es/kibana-4.3.0-linux-x64/node_modules/elasticsearch/src/lib/connectors/http.js:155:7)
    at IncomingMessage.wrapper (/home/nutanix/es/kibana-4.3.0-linux-x64/node_modules/lodash/index.js:3095:19)
    at IncomingMessage.emit (events.js:129:20)
    at _stream_readable.js:908:16
    at process._tickDomainCallback (node.js:381:11)

[nutanix@ES-POC-VM-6 es]$ curl http://10.4.78.15:9200/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
red open .marvel-es-2015.12.04 1 1
red open .marvel-es-data 1 1
red open nutanix-17459-2015.12.04 5 1
red open .kibana 1 1
red open nutanix-17459-2015.12.03 5 1
[nutanix@ES-POC-VM-6 es]$ curl http://10.4.78.15:9200/nutanix*/_count?pretty
{
  "error" : {
    "root_cause" : [ ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [ ]
  },
  "status" : 503
}

[ES-POC-VM-6 es]$ java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

Thanks
Tao

@tguan: your indices all have a red status, meaning that they cannot be queried. You should investigate why they are in that state.

/_cat/shards?v will give you more information.
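For example, using the same host as your curl commands above:

curl 'http://10.4.78.15:9200/_cat/shards?v'

and look for shards stuck in the UNASSIGNED state.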

You can also try the cluster reroute API with the explain parameter to assign the Marvel indices: it will tell you why the shards can't be assigned.
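A rough sketch of that call, with the index, shard number and node name as placeholders you would need to adapt (dry_run=true means nothing is actually applied):

curl -XPOST 'http://10.4.78.15:9200/_cluster/reroute?explain&dry_run=true' -d '{
  "commands": [
    { "allocate": { "index": ".marvel-es-data", "shard": 0, "node": "YOUR_NODE_NAME" } }
  ]
}'

The explanations section of the response shows each allocation decider's decision. For an unassigned primary you may also have to add "allow_primary": true, but be careful: on a real (non dry_run) run that allocates an empty primary and loses whatever data was in that shard.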

Thanks Tanguy! I am amazed by how responsive you and the Elasticsearch team are!

I downgraded the software and hit the same issue. Eventually, I figured out that it was due to a dumb mistake in my script, which set "node.data" to false on what was supposed to be a data node. :frowning:
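For anyone hitting the same thing, the data node's elasticsearch.yml just needs the default behaviour back, i.e. delete the bad line or set:

node.data: true

(true is the default, so omitting the setting entirely works too).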

Will try out the latest package.

Regards
Tao