Kibana dashboards embedded in my website became blank after a while

I embedded two dashboards in my website: one with summary statistics for year-to-date data, the other with summary statistics for the last 24 hours. The dashboards had been showing statistics fine since I built them a week ago, until yesterday, when I loaded my website and noticed both dashboards had gone blank. Shouldn't the dashboards I embedded in my website be unchangeable by anyone else who visits the site?

I went into Kibana on my Amazon EC2 t2.medium instance and noticed the default index pattern was gone. Then I checked the Ubuntu system from my Mac terminal and saw all the daily log files for the past week were still there.

I am not sure why this happened. I remember I included my Elasticsearch URL "" (which points to the Amazon EC2 t2.medium instance) in my main.go program and deployed my tiny Go project (with simple POST and GET handlers) onto Google Cloud through its free-trial VM.
My main.go program does include a section that checks for an existing index and creates a new one if it is missing. Could this interrupt and wipe out the original index I was running for my other project on Amazon EC2?

func main() {
    // Create a client (ES_URL is defined elsewhere in the program).
    client, err := elastic.NewClient(elastic.SetURL(ES_URL), elastic.SetSniff(false))
    if err != nil {
        panic(err)
    }

    // Use the IndexExists service to check if a specified index exists.
    exists, err := client.IndexExists(INDEX).Do()
    if err != nil {
        panic(err)
    }
    if !exists {
        // Create a new index.
        mapping := `{ ... }` // mapping body elided
        _, err := client.CreateIndex(INDEX).Body(mapping).Do()
        if err != nil {
            // Handle error
            panic(err)
        }
    }

    http.HandleFunc("/post", handlerPost)
    http.HandleFunc("/search", handlerSearch)
    log.Fatal(http.ListenAndServe(":8080", nil))
}


Hi @nicedee,

When I load the website, I see this error:
Could not locate that dashboard (id: dashboard-20170915-2)

Are you able to load the dashboard within the Kibana instance itself? Do you see the same error there?


All the dashboards and visualizations I previously built are gone; I noticed it last weekend.
I also checked and noticed that Logstash had stopped on the remote end, so I restarted it.

Yeah, I think I used the dashboards I built on 9/15/17 (my second version); I guess that's where the dashboard name comes from.

Do you know which file names I should look for to investigate this further, including log files? I noticed several log files on the remote end, such as:
localhost_access_log.2017-09-27.txt
catalina.2017-09-27.log
localhost.2017-09-27.log
host-manager.2017-09-27.log
manager.2017-09-27.log

localhost_access_log.xxxx-xx-xx.txt seems to contain all the index info for that day. I'm not sure whether I could load this data from my previous index into my current index? I never planned for this kind of accident. :cry:
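If the Tomcat access logs on disk really do cover the missing days, one route back is to replay them into Elasticsearch (re-running them through Logstash's file input does this, though its sincedb position tracking may need resetting so old files are re-read). As a sketch of the parsing step in Go, assuming the default Common Log Format that `localhost_access_log` files use (the sample line and IP are made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// Common Log Format, which Tomcat's localhost_access_log files use by default:
// host - user [timestamp] "METHOD path proto" status bytes
var clf = regexp.MustCompile(`^(\S+) \S+ (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)`)

// parseAccessLine extracts the client IP, timestamp, request, and status
// from one access-log line; ok is false if the line does not match.
func parseAccessLine(line string) (ip, ts, req, status string, ok bool) {
	m := clf.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", "", false
	}
	return m[1], m[3], m[4], m[5], true
}

func main() {
	line := `203.0.113.5 - - [27/Sep/2017:10:15:32 +0000] "GET /search?q=x HTTP/1.1" 200 512`
	ip, ts, req, status, ok := parseAccessLine(line)
	if ok {
		// prints: 203.0.113.5 27/Sep/2017:10:15:32 +0000 GET /search?q=x HTTP/1.1 200
		fmt.Println(ip, ts, req, status)
	}
}
```

From the parsed fields you could build JSON documents for the bulk API, or simply point Logstash at the old files instead of writing any code.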

If port 9200 on your Elasticsearch instance is internet-accessible and not secured, then this could be random vandalism. People routinely port-scan the entire IPv4 address space, so if you allow access to port 9200, people will find it.

I see this even on our corporate intranet. The security teams port-scan everything, and from time to time randomly named indexes show up in my POC cluster. That's OK by me; I rebuild it from scratch every few weeks anyway.

How do I check whether my port 9200 is secured, and if it's not, how do I secure it?

Why do people routinely port-scan the entire IPv4 address space, and what would they do if they found mine exposed there? What are they able to do with it?

I don't mind rebuilding it after this accident, but is there any way to get the previous index contents back into my current index?

Actually, my index could have been replaced with a new index by the Go program I used. I am still confused about that as well. :confused:

You'll definitely want to look into using a firewall to prevent access to sensitive endpoints for Elasticsearch. This blog post talks about some approaches using proxies.

Is there any way to get the previous index contents back into my current index?

Is the data still in Elasticsearch? Can you query for it?
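A quick way to answer that is to list the indices that still exist on the cluster, using the `_cat/indices` API. A minimal Go sketch; the `localhost:9200` URL is a placeholder for your Elasticsearch endpoint:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// indexNames parses the plain-text body of GET /_cat/indices?h=index
// into index names, one per non-empty line.
func indexNames(body string) []string {
	var names []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if line != "" {
			names = append(names, line)
		}
	}
	return names
}

func main() {
	// Placeholder endpoint: replace with your cluster's URL.
	resp, err := http.Get("http://localhost:9200/_cat/indices?h=index")
	if err != nil {
		fmt.Println("cluster not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	for _, name := range indexNames(string(body)) {
		fmt.Println(name)
	}
}
```

If the old daily indices (e.g. `logstash-2017.09.16` under Logstash's default naming) still appear in the listing, the data is there and this is a Kibana time-range or index-pattern problem; if they are gone, the documents themselves were deleted.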

I see a lot of entries recorded in the localhost_access_log.xxxx-xx-xx.txt files through Ubuntu, and in Kibana I see records loaded from 9/24/17 onward.

No, I don't think I can query the old data. I created a new geoip map just now and it only shows geoip data from the time I created the new visualization.

I did one more step just now, running the geo.conf file through Ubuntu, but I thought it should run automatically once I restarted Logstash through Ubuntu, no?

@nicedee When you're creating visualizations, are you adjusting the relative time in the top right? If the data is in Elasticsearch, you will be able to visualize it, assuming you adjust the time selector appropriately.

I can see log traffic data in files like localhost_access_log.2017-09-16.txt, but in the Kibana UI I only see data from 9/24/17; everything earlier than that does not show up.

Now I am trying to download the accumulated log files from Ubuntu to my local machine, using the same private key I used to upload things, but without success. Any ideas about that?

I created a post about that topic just now as well. Hmm. :sweat:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.