Marvel dashboard won't show shard info

Elasticsearch 2.1.0 + Kibana 4.3.0 + latest marvel plugins

Sense gives me this:

GET _cat/shards?v

index shard prirep state docs store ip node
.marvel-es-2015.11.25 0 r STARTED 1556 677kb Charcoal
.marvel-es-2015.11.25 0 p STARTED 1556 756.8kb MN-E (Ultraverse)
filebeat-2015.11.25 1 r STARTED 23887 6.1mb Charcoal
filebeat-2015.11.25 1 p STARTED 23875 6mb Scanner
filebeat-2015.11.25 4 r STARTED 23656 6mb Charcoal
filebeat-2015.11.25 4 p STARTED 23633 6mb Scanner
filebeat-2015.11.25 3 p STARTED 23372 5.9mb Charcoal
filebeat-2015.11.25 3 r STARTED 23409 6mb MN-E (Ultraverse)
filebeat-2015.11.25 2 r STARTED 23736 6mb Scanner
filebeat-2015.11.25 2 p STARTED 23725 6.1mb MN-E (Ultraverse)
filebeat-2015.11.25 0 p STARTED 23379 5.9mb Charcoal
filebeat-2015.11.25 0 r STARTED 23398 6mb MN-E (Ultraverse)
topbeat-2015.11.25 1 p STARTED 9175 2.1mb Charcoal
topbeat-2015.11.25 1 r STARTED 9175 4.3mb Scanner
topbeat-2015.11.25 4 p STARTED 8962 2.1mb Charcoal
topbeat-2015.11.25 4 r STARTED 8962 2.1mb MN-E (Ultraverse)
topbeat-2015.11.25 3 r STARTED 9099 2mb Charcoal
topbeat-2015.11.25 3 p STARTED 9099 4.2mb MN-E (Ultraverse)
topbeat-2015.11.25 2 p STARTED 8963 2.1mb Scanner
topbeat-2015.11.25 2 r STARTED 8965 2.1mb MN-E (Ultraverse)
topbeat-2015.11.25 0 r STARTED 8997 2.1mb Scanner
topbeat-2015.11.25 0 p STARTED 8997 2mb MN-E (Ultraverse)
.kibana 0 r STARTED 4 30.9kb Charcoal
.kibana 0 p STARTED 4 30.9kb Scanner
.marvel-es-data 0 p STARTED 4 3.9kb Scanner
.marvel-es-data 0 r STARTED 4 3.9kb MN-E (Ultraverse)

Yet the Marvel indices dashboard gives me this:

filebeat-2015.11.25 (or any other indices)

Status: Not Available
Documents: N/A
Data: N/A
Total Shards: N/A
Unassigned Shards: N/A

There are no shards allocated.

Everything works fine with ES 2.0.0 + Kibana 4.2.1 + the Marvel plugins.

In Marvel 2.1, we changed our index template a bit. Can you delete the .marvel-es-2015.11.25 index and see if that does the trick?

The dashboard goes back to normal when I delete the .marvel* indices and wait for an auto-refresh, but if I reload the page, all shard-related info vanishes again.

Should I update the Marvel plugins for ES and Kibana, and restart them?

Marvel's dashboard shows its version is 2.1.0.

We need to improve the upgrade from 2.0 to 2.1...

But for now, if you are OK with deleting some Marvel indices, then I think the best thing to do is:

  • check that all the nodes of your cluster are running Elasticsearch 2.1.0 with the latest Marvel agent plugin (2.1.0)
  • use Kibana 4.3.0 and the latest Marvel plugin (2.1.0)
  • delete the marvel index template (DELETE /_template/.marvel-es) on your monitoring cluster
  • delete the .marvel-es-data and .marvel-es-* indices on your monitoring cluster
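If it helps, the template and index deletions can be done with curl (the host/port here are an assumption; point them at your monitoring cluster):

```shell
# Assumes the monitoring cluster answers on localhost:9200 -- adjust as needed.
# Remove the old Marvel index template...
curl -XDELETE 'http://localhost:9200/_template/.marvel-es'
# ...then remove the existing Marvel data and daily indices.
curl -XDELETE 'http://localhost:9200/.marvel-es-data'
curl -XDELETE 'http://localhost:9200/.marvel-es-*'
```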

After a few seconds, the indices should be re-created with the latest index template version.

-- Tanguy

I'm running the ELK stack in Docker containers, and I rebuild all the images when a new version is released.

The Dockerfile I wrote looks like this:

RUN curl -O \
 && tar zxvf elasticsearch-2.1.0.tar.gz \
 && rm elasticsearch-2.1.0.tar.gz \
 && mv elasticsearch-2.1.0 elasticsearch

RUN elasticsearch/bin/plugin \
 -DproxyHost=$PROXY_HOST -DproxyPort=$PROXY_PORT \
 install license

RUN elasticsearch/bin/plugin \
 -DproxyHost=$PROXY_HOST -DproxyPort=$PROXY_PORT \
 install marvel-agent

COPY elasticsearch.yml elasticsearch/config/elasticsearch.yml

ENTRYPOINT ["/home/ubuntu/elasticsearch/bin/elasticsearch"]

EXPOSE 9200 9300

Kibana's Dockerfile is almost identical to the above (except for the package name, plugin names, and exposed port), and this mechanism worked well with ES 2.0.0 + Kibana 4.2.0/4.2.1.
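Roughly along these lines (the download URL and plugin name below are illustrative, written from memory rather than copied verbatim from my file):

```dockerfile
# Sketch only -- URL and archive name assumed for Kibana 4.3.0 on Linux x64.
RUN curl -O https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz \
 && tar zxvf kibana-4.3.0-linux-x64.tar.gz \
 && rm kibana-4.3.0-linux-x64.tar.gz \
 && mv kibana-4.3.0-linux-x64 kibana

# Marvel app for Kibana (plugin name assumed)
RUN kibana/bin/kibana plugin --install elasticsearch/marvel/latest

ENTRYPOINT ["/home/ubuntu/kibana/bin/kibana"]

EXPOSE 5601
```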

PS: I destroy all old containers before starting the new ones, so the cluster always starts in a fresh state, without any old indices or templates in it.

Well, I think I found the reason (though I'm not completely sure): the PC running my browser hadn't been rebooted for about 2 months, and the dashboard shows the shard info correctly after a reboot.

I use the latest Firefox, and the browser's time is synced with the server time, so I really can't figure out why a long-running browser would cause this problem.

Anyway, thanks for everyone's efforts.

Thanks for letting us know :wink: