How do I make Dashboards 'stick'?


(Tim Dunphy) #1

I find that every time I go to my dashboards section, I have to do 2 things.

  1. I have to load my saved dashboards.
  2. I have to set the theme back to 'dark' (I simply prefer the dark theme).

How do I make these 2 options stick, so that every time I go to the dashboards section of Kibana I can see my dashboards without having to load them?


(KMG) #2

What version of Kibana are you using?


(Tim Dunphy) #3

I'm using Kibana 4.2.1


(Tim Dunphy) #4

Anyone have any ideas on this one?

Thanks


(Chris Earle) #5

Hi @bluthundr, are you by any chance deleting the Elasticsearch data that Kibana is connected to? Kibana saves its configuration to Elasticsearch, so if anything happens to that data (e.g., it moves), then Kibana will need to be reconfigured.
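For what it's worth, in Kibana 4 the saved objects (dashboards, visualizations, the config) live as documents in the .kibana index, so one way to check whether the dashboards are still stored server-side is to query that index directly. A minimal sketch; the host name and credentials are placeholders for your own setup:

```shell
#!/bin/sh
# Kibana 4 stores saved dashboards as documents of type "dashboard"
# in the .kibana index. This helper just builds the search URL for a
# given Elasticsearch host (placeholder values, adjust for your setup).
es_dashboard_search_url() {
    echo "http://$1:9200/.kibana/dashboard/_search?pretty"
}

# Example usage against a local node, with basic auth as in this thread:
#   curl --user "admin:$ES_PASS" "$(es_dashboard_search_url localhost)"
es_dashboard_search_url localhost
```

If that query returns your dashboard documents but the Kibana UI still shows "Ready to get started?", the data is intact and the problem is on the loading side rather than the storage side.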


#6

+1 to this. My dark theme sticks, but I am frequently seeing "Ready to get started?" I've been saving my dashboard after every change because of this. Loading and saving is simple enough, but it is a minor annoyance. This is on 4.3.1.


(Tim Dunphy) #7

Hey guys,

I am deleting all my indexes, but only once a week, after taking a snapshot of the cluster. I don't need to persist my Logstash data any longer than that.

But I don't have to wait that long before I see the "Ready to get started" page. I can go to my dashboards, load my saved dashboard (my only one), and then go back to the main "Discover" tab and spend some time there. Sometimes when I return to the dashboards section my dashboard is still there and I don't have to load it; at other times I see the "Ready to get started" screen and have to load the saved dashboard again.

So I don't have to wait a week for the dashboard to go missing; it can happen within the same hour as checking the dashboards.

Also, when I delete the indexes I'm doing it safely, using the 'curator' tool. I believe it takes care not to delete any current Kibana indexes:

[root@logs:/opt] #/bin/curator --http_auth "admin:$ES_PASS" delete  indices --all-indices
2015-12-27 12:11:41,199 INFO      Job starting: delete indices
2015-12-27 12:11:41,272 INFO      Matching all indices. Ignoring flags other than --exclude.
2015-12-27 12:11:41,272 INFO      Pruning Kibana-related indices to prevent accidental deletion.
2015-12-27 12:11:41,273 INFO      Action delete will be performed on the following indices: [u'.marvel-es-2015.12.27', u'.marvel-es-data', u'logstash-2015.12.27']
2015-12-27 12:11:41,283 INFO      Deleting indices as a batch operation:
2015-12-27 12:11:41,283 INFO      ---deleting index .marvel-es-2015.12.27
2015-12-27 12:11:41,283 INFO      ---deleting index .marvel-es-data
2015-12-27 12:11:41,283 INFO      ---deleting index logstash-2015.12.27
2015-12-27 12:11:42,749 INFO      Job completed successfully.

This is what the indexes look like after this operation:

[root@logs:/opt] #/bin/curator --http_auth "admin:$ES_PASS" show  indices --all-indices
2015-12-27 12:11:51,218 INFO      Job starting: show indices
2015-12-27 12:11:51,234 INFO      Matching all indices. Ignoring flags other than --exclude.
2015-12-27 12:11:51,234 INFO      Action show will be performed on the following indices: [u'.kibana', u'.marvel-es-2015.12.27', u'.marvel-es-data', u'logstash-2015.12.27']
2015-12-27 12:11:51,234 INFO      Matching indices:
.kibana
.marvel-es-2015.12.27
.marvel-es-data
logstash-2015.12.27
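The "Pruning Kibana-related indices" step in the log above can be sketched roughly like this. This is only an illustration of the idea (filter .kibana* out of the deletion candidates), not curator's actual code:

```shell
#!/bin/sh
# Sketch of curator's pruning step: given a list of index names,
# drop anything matching .kibana* so a bulk delete cannot touch
# Kibana's saved dashboards. Illustration only, not curator's code.
prune_kibana_indices() {
    for idx in "$@"; do
        case "$idx" in
            .kibana*) ;;                  # protected: never emit Kibana's index
            *) printf '%s\n' "$idx" ;;    # everything else is fair game
        esac
    done
}

# With the indices from the log above, only the Marvel and Logstash
# indices survive as deletion candidates:
prune_kibana_indices .kibana .marvel-es-data logstash-2015.12.27
```

That matches what the delete log shows: the Marvel and Logstash indices are deleted, while .kibana survives and still appears in the subsequent show run.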

Just so you have an idea, these are the cron jobs that I have set up for this:

0 3 * * 7 /bin/curator --http_auth "admin:$ES_PASS" snapshot --repository jf_backup --prefix jokefire- --ignore_unavailable --partial indices --all-indices > /dev/null 2>&1
0 7 * * 7 /bin/curator --http_auth "admin:$ES_PASS" delete indices --all-indices > /dev/null 2>&1 && /bin/systemctl restart logstash > /dev/null 2>&1
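Since the curator log notes that with --all-indices only --exclude is honored, the .kibana protection could also be spelled out explicitly in the cron entry instead of relying on curator's automatic pruning. A sketch, assuming the curator 3.x flag names used elsewhere in this thread:

```shell
#!/bin/sh
# Build the weekly delete command with an explicit exclusion pattern,
# so the protection of Kibana's index is visible in the crontab itself.
# Assumes curator 3.x style flags (delete indices --all-indices --exclude).
build_delete_cmd() {
    # $1: index pattern to protect from deletion
    echo "/bin/curator delete indices --all-indices --exclude $1"
}

# Command line as it would appear in the cron entry:
build_delete_cmd '.kibana'
```

Belt-and-suspenders only: curator already prunes .kibana automatically, but an explicit --exclude documents the intent.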

As DigiAngel says, this is a minor annoyance, and it's on Kibana 4.3.1.


(Tim Dunphy) #8

Guys, can I get a bump? I would still like an answer for this one, if one exists.

Thanks


(system) #9