Kibana response time is too slow, need help identifying why

Hello,

Like many others, I run the ELK stack. With very little data in Elasticsearch,
Kibana 3 is super fast, but in my production environment Kibana is slow and
sometimes even fails to show any data.

Here are my hardware specs:

Kibana + ES + nginx = m2.2xlarge + 20GB JVM heap + 1TB SSD EBS volume
5 other ES machines = m3.xlarge + 10GB JVM heap + 1TB EBS volumes

We are doing about 150GB and roughly 600 million documents per index.

Each index has 6 shards, with 1 replica.

I don't know if I'm severely under-provisioned in the number of machines I
need, or if my Kibana is misconfigured. Using the ES "head" plugin, I can
run a search against a Logstash index for a specific host and get a really
fast response, so my suspicion falls on Kibana.

http://mylogstash_server.com:9200/logstash-2014.8.03/_search

{
  "query" : {
    "term" : { "host" : "web03d" }
  }
}
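
For what it's worth, the same query can also be timed from the command line
with something like this (the index name and host value are just the ones
from the example above):

curl -s -o /dev/null -w "total: %{time_total}s\n" \
  -XPOST "http://mylogstash_server.com:9200/logstash-2014.8.03/_search" \
  -d '{ "query" : { "term" : { "host" : "web03d" } } }'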

Thanks to all who care to respond!
Tony


You could check the slow logs or the hot threads API to see if anything stands out.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



Well, my slow logs are 0 bytes. My logging.yml looks okay, but I don't think
the slow log thresholds are actually configured. I looked at the ES docs and
saw that I should have these set somewhere. I'm guessing the elasticsearch.yml
configuration file?

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms
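
From the docs it looks like these can also be applied to an existing index at
runtime through the index update-settings API, so I may try something along
these lines (index name taken from my earlier example, and I'm assuming ES is
reachable on localhost:9200):

curl -XPUT "http://localhost:9200/logstash-2014.8.03/_settings" -d '{
  "index.search.slowlog.threshold.query.warn" : "10s",
  "index.search.slowlog.threshold.query.info" : "5s",
  "index.search.slowlog.threshold.fetch.warn" : "1s"
}'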

And querying for hot threads never returns a response. I have Marvel
installed as well. Is there anything else I can look at? Thanks,

Tony


Turns out you shouldn't use the head plugin when querying for hot threads.
I was able to get them by querying the API directly. Thanks for the tip!
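
In case it helps anyone else, the direct call was just something like this
(I'm assuming the default HTTP port here; substitute your own node address):

curl "http://localhost:9200/_nodes/hot_threads"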

