Hi all,
I have created a dashboard in Kibana, which runs on a 5-node ES cluster with ES, Kibana, and Logstash installed.
Logs are forwarded from Logstash (45 servers) -> Kafka (3 brokers) -> Logstash -> ES.
Each day a new index gets roughly the document counts and data volumes shown below. I have around 30 indices of each type, with 5 shards per index and 1 replica.
health status index              pri rep docs.count docs.deleted store.size pri.store.size
green  open   metrics-2016.05.10   5   1   98612060            0       22gb           11gb
green  open   tomcat-2016.05.10    5   1   92744607            0     10.2gb          5.1gb
My servers have 8 GB RAM, 2 CPUs, and a 200 GB volume. Each dashboard has 9 visualizations. My concern is that the visualizations in the dashboard take a long time to load. Any ideas?
              @bbhandari0121, I am sorry. I didn't completely understand your post. Have you already tried this configuration and are currently experiencing slow dashboard performance? Or are you asking if this configuration will lead to slow dashboard performance without actually testing it yet?
In addition to the specifics of your environment, the performance of the dashboard is going to depend on the structure of your indexed data, and the nature of the visualizations you are trying to execute. Can you provide additional information?
You are correct: I have already tried this configuration and am experiencing slow dashboard performance. I just went with dynamic mapping. I have collectd metrics, which just collect system performance data, plus Tomcat logs. I am visualizing the count and unique count of certain patterns in the Tomcat logs, and similarly using average and max aggregations to build other kinds of visualizations from the metrics. Hopefully that provides the information you need. Thank you.
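For reference, a rough sketch of the kind of query one of these visualizations boils down to (the index, field, and pattern names are placeholders, not my actual mapping):

```
# Count and unique count over log lines matching a pattern; all names
# below are hypothetical examples, not taken from the real setup.
curl -XPOST 'localhost:9200/tomcat-2016.05.10/_search?pretty' -d '
{
  "size": 0,
  "query": { "match": { "message": "pattern-of-interest" } },
  "aggs": {
    "hits_per_hour":  { "date_histogram": { "field": "@timestamp", "interval": "1h" } },
    "unique_clients": { "cardinality": { "field": "client_ip" } }
  }
}'
```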
How do the visualizations perform when you run the queries outside of Kibana?
If you expand the spy panel of a visualization and click the Request button, you can see the query that is ultimately sent to Elasticsearch. You can then run that query in Sense or in an application like Postman to query Elasticsearch directly. This might reveal some additional information, or at the very least let you determine whether the bottleneck is Kibana or somewhere else.
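For example, the request copied from the spy panel can be replayed with curl; the body below is only an illustrative placeholder for whatever Kibana actually generated:

```
# Replace the JSON body with the exact request from the spy panel;
# the index name and aggregations here are placeholders.
curl -XPOST 'localhost:9200/metrics-2016.05.10/_search?pretty' -d '
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" },
      "aggs": { "avg_value": { "avg": { "field": "value" } } }
    }
  }
}'
```

The took field in the response shows how long Elasticsearch itself spent on the query, which helps separate cluster time from Kibana rendering time.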
Individual searches return quickly in Elasticsearch as well; they don't take that much time. Even an individual visualization doesn't take long to load in Kibana itself. But when I combine 8-9 visualizations on the dashboard, that's when the issue shows up.
What we found after much playing around is that splitting your data across multiple volumes sped up our load times incredibly.
Elasticsearch allows you to split data files across multiple mounts by defining them in the elasticsearch.yml file, e.g. /mnt/md1, /mnt/md2. We are now splitting the data across two RAID arrays on one system, and on the other (because we don't care if we lose it) we are running across 5 separate drives that aren't part of a RAID array. It was a big improvement for us.
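Concretely, the setting is a comma-separated list of data paths in elasticsearch.yml (the mount points below are just the ones from this example; adjust to your own volumes):

```
# elasticsearch.yml -- spread index data across two mounts
path.data: /mnt/md1,/mnt/md2
```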
You mean add additional volumes on the cloud and spread the data across the mounts? If so, do we do that by adding path.data: /mnt/md1,/mnt/md2 to elasticsearch.yml? I don't think that will be possible with single-shard indices?
The way it seems to work is that it splits the data across the two mounts, similar to striping. I'm not sure how it would work out once you already have data; I would expect new data to become striped, but I'm not sure it is clever enough to redistribute what's already there. You might have to export and re-import.
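If it comes to that, one possible route (a sketch, assuming Elasticsearch 2.3 or later) is the _reindex API instead of a manual export and import: a fresh index created after the new mounts are configured will be written across them. The index names here are placeholders:

```
# Copy an existing index into a new one laid out on the new mounts;
# both index names are hypothetical examples.
curl -XPOST 'localhost:9200/_reindex?pretty' -d '
{
  "source": { "index": "tomcat-2016.05.10" },
  "dest":   { "index": "tomcat-2016.05.10-restriped" }
}'
```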