Hi
So we have one ELK server where we are storing NetFlow data from 4 devices. Apart from it being a little slow recently we don't have many issues, but when we run a search (from the Dashboard tab in Kibana) covering more than 15 days, Kibana simply times out.
I see some logs related to the server being overloaded, but I have no idea what to do.
Should I scale vertically or horizontally? I have already doubled the VM resources, but that does not seem to solve the issue.
Is there a way to improve the performance?
Current server specs:
1 core
3.5 GB RAM

Server specs for the scale-up test:
2 cores
7 GB RAM
Logs on the server ==================
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: ElasticsearchException[CircuitBreakingException[[fielddata] Data too large, data for [netflow.ipv4_dst_addr] would be larger than limit of [2002806374/1.8gb]]]; nested: UncheckedExecutionException[CircuitBreakingException[[fielddata] Data too large, data for [netflow.ipv4_dst_addr] would be larger than limit of [2002806374/1.8gb]]]; nested: CircuitBreakingException[[fielddata] Data too large, data for [netflow.ipv4_dst_addr] would be larger than limit of [2002806374/1.8gb]];
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:183)
at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:86)
... 19 more
============
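That first excerpt is the fielddata circuit breaker tripping on netflow.ipv4_dst_addr. In case it helps with the diagnosis, here is a rough sketch (Python, assuming the REST API is reachable; localhost:9200 is a placeholder for our node) of how per-field fielddata usage can be compared against the breaker limit:

import json
import urllib.request

ES = "http://localhost:9200"  # placeholder; point this at the actual node

def get_json(path):
    # Plain GET against the Elasticsearch REST API, parsed as JSON
    with urllib.request.urlopen(ES + path) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Fielddata breaker state per node: estimated usage vs. the ~1.8gb limit from the log
for node in get_json("/_nodes/stats/breaker")["nodes"].values():
    fd = node["breakers"]["fielddata"]
    print(node["name"], "limit:", fd["limit_size"],
          "estimated:", fd["estimated_size"], "tripped:", fd["tripped"])

# Fielddata held on the heap per field (the ipv4_*_addr fields are the suspects here)
for node in get_json("/_nodes/stats/indices/fielddata?fields=*")["nodes"].values():
    print(node["name"], node["indices"]["fielddata"].get("fields", {}))

The second excerpt is the search thread pool rejecting queries: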
[2016-12-07 17:26:03,869][DEBUG][action.search ] [node-node-1] [XXXXX_netflow-2016.11.13][2], node[3TnCPr9PTdWLzulS_PlQTQ], [P], v[20], s[STARTED],$
RemoteTransportException[[node-node-1][172.28.63.7:9300][indices:data/read/search[phase/query]]]; nested: EsRejectedExecutionException[rejected execution of org$
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@5816b15e on EsThreadPoolExecutor[search, queue capacity = $
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:50)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:85)
at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:372)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:327)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:299)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:142)
at org.elasticsearch.action.search.SearchCountAsyncAction.sendExecuteFirstPhase(SearchCountAsyncAction.java:53)
at org.elasticsearch.action.search.AbstractSearchAsyncAction.performFirstPhase(AbstractSearchAsyncAction.java:144)
at org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:126)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:115)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:47)
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:63)
at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:39)
==============
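Those rejections suggest the search thread pool queue is filling up, presumably because a single dashboard refresh fires one query per visualization and each of those fans out to every shard of every daily index in the time range. A quick way to watch the queue and rejected counters (same assumptions as the sketch above) would be:

import json
import urllib.request

ES = "http://localhost:9200"  # placeholder host/port

with urllib.request.urlopen(ES + "/_nodes/stats/thread_pool") as resp:
    stats = json.loads(resp.read().decode("utf-8"))

for node in stats["nodes"].values():
    search = node["thread_pool"]["search"]
    # A climbing "rejected" count means the search queue is full and requests are
    # being dropped, which matches the EsRejectedExecutionException above.
    print(node["name"], "active:", search["active"],
          "queue:", search["queue"], "rejected:", search["rejected"])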
We have 8 visualizations configured, mostly terms aggregations; here is one of them as an example:
{
  "title": "New Visualization",
  "type": "table",
  "params": {
    "perPage": 10,
    "showPartialRows": false,
    "showMeticsAtAllLevels": false
  },
  "aggs": [
    {
      "id": "1",
      "type": "sum",
      "schema": "metric",
      "params": {
        "field": "netflow.in_bytes"
      }
    },
    {
      "id": "2",
      "type": "terms",
      "schema": "bucket",
      "params": {
        "field": "netflow.ipv4_src_addr",
        "size": 20,
        "order": "desc",
        "orderBy": "1"
      }
    }
  ],
  "listeners": {}
}
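As far as I understand it, each of those table visualizations boils down to roughly the query below (Python sketch; index pattern and field names copied from the logs and config above, host is a placeholder). If the address fields are indexed without doc_values, a terms aggregation like this has to load the whole field into fielddata on the heap, which would explain the breaker error:

import json
import urllib.request

ES = "http://localhost:9200"  # placeholder

# Terms on source address, ordered by the sum of bytes -- same shape as agg ids 1/2 above
body = {
    "size": 0,
    "aggs": {
        "2": {
            "terms": {
                "field": "netflow.ipv4_src_addr",
                "size": 20,
                "order": {"1": "desc"}
            },
            "aggs": {
                "1": {"sum": {"field": "netflow.in_bytes"}}
            }
        }
    }
}

req = urllib.request.Request(
    ES + "/XXXXX_netflow-*/_search",  # daily indices as seen in the log
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.loads(resp.read().decode("utf-8")), indent=2))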