I am hoping someone can help me with my issue.
I have a two-node production cluster running ES 5.0 and a standalone Kibana 5.0 server for accessing the data.
ES itself seems to be running well, and the indexes are being created from my syslog listener in Logstash.
However, Kibana gives me `Discover: Timeout exceeded 30000ms` when I try to query time ranges older than about 4 hours.
I increased elasticsearch.requestTimeout to a much higher value, and now instead of the timeout error I get `Discover: Socket hang up` and no results.
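For reference, this is roughly what I changed in kibana.yml (the exact value here is just what I tried, not a recommendation):

```yaml
# kibana.yml: raise the Elasticsearch request timeout (the default is 30000 ms)
elasticsearch.requestTimeout: 120000
```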
As stated, ES is happily indexing, and monitoring shows this to be the case.
The only way I have found to resolve the issue is to reboot the cluster, which is obviously not something I want to do all the time.
For reference, I have very large indexes (around 8GB on each node), and I currently have an index pattern of billinglog-* so we can search all dates easily.
If I instead use a daily index pattern, such as billinglog-2016.11.16, I can seemingly search the full index without the error, but only for some of the indexes; others still fail with the socket hang up error.
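My working theory (if I understand multi-index search correctly) is that a query against billinglog-* has to fan out to every shard of every matching daily index, which could explain why the wildcard pattern times out while single-day searches mostly work. A rough back-of-the-envelope sketch, assuming the ES 5.x default of 5 primary shards per index (the index count below is illustrative, not from my cluster):

```python
# Rough fan-out estimate: a search against billinglog-* hits every shard of
# every matching daily index. Assumes the ES 5.x default of 5 primary shards
# per index; adjust if your template sets a different number.
def shards_hit(num_daily_indices, primaries_per_index=5):
    return num_daily_indices * primaries_per_index

# A month of daily indices under the wildcard pattern:
print(shards_hit(30))  # 150 shards touched per search
```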
Is this simply an issue with the amount of data in the indexes, or are there settings in ES that need tuning to allow faster querying?
For specs, each ES server has 4 CPUs and 16GB of RAM, with the heap set to 8GB (half the available memory, as advised).
Marvel isn't showing any memory issues, so I don't think it is necessarily the machine spec.
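In case it matters: as I understand the 5.x docs, ES 5.0 reads heap settings from config/jvm.options rather than the old ES_HEAP_SIZE environment variable, so this is where I have it set:

```
# config/jvm.options (ES 5.x; ES_HEAP_SIZE from 2.x is no longer used)
-Xms8g
-Xmx8g
```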
Any help would be appreciated as we can't use the cluster if we can't view the data.