How to improve performance of the relevant terms chart? It does not load for large data sets

I am new to this Elasticsearch and Kibana setup, so apologies in advance if I ask some basic questions.

I am using a 9-node cluster, consisting of:
3 master nodes (8 GB RAM / 2 CPUs each)
4 data nodes (32 GB RAM / 6 CPUs each)
2 client/Kibana nodes (16 GB RAM / 4 CPUs each)

You can see the size of the data and the details of the cluster in the attached image.
I am using the relevant terms functionality in Kibana on this data, which spans three months.

I am getting relevant terms on the transcript field, which I have analyzed with all punctuation and stop words removed. For smaller time ranges, e.g. 15 days, the graph loads, but when I increase the range beyond a month the graph doesn't load. How can I improve the performance of this?
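As I understand it, the chart is backed by Elasticsearch's significant_terms aggregation, so the query it runs is roughly the sketch below. The index name (call-transcripts), the timestamp field (@timestamp), and the size value are placeholders, not the actual names from my setup:

# Rough sketch of the query behind the chart; "call-transcripts" and
# "@timestamp" are placeholder names, and "size" is arbitrary.
curl -XPOST 'localhost:9200/call-transcripts/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-3M", "lte": "now" } }
  },
  "aggs": {
    "relevant_terms": {
      "significant_terms": { "field": "transcript", "size": 25 }
    }
  }
}'

Running something like this directly (in Kibana Dev Tools or via curl) would also show whether the slowness is in Elasticsearch itself rather than in the browser.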

For 15 days I get these results (see screenshot), but when I set the time range to 3 months I don't get anything.

Hi @priyal,

Are there any errors in the developer console? Maybe look at the network requests, inspect the responses, and see if you can spot an error.

Thanks,
Chris

@chrisronline

This is the result I get in the graph for 3 months of data, which is all of the data I have in the cluster (see screenshot).

Is there any setting to improve the performance? I checked the console and didn't get any errors.

@chrisronline
Also, if I may ask, how do I view the network requests?

Can you please provide the full output of the cluster stats API?
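For reference, that is the _cluster/stats endpoint; something like this should return it (host and port are whatever your cluster listens on):

# Human-readable, pretty-printed cluster-wide statistics
curl -XGET 'localhost:9200/_cluster/stats?human&pretty'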

@Christian_Dahlqvist here is the output 🙂

{
  "_nodes" : {
    "total" : 9,
    "successful" : 9,
    "failed" : 0
  },
  "cluster_name" : "Dos-Elk_Calls_Transcript",
  "timestamp" : 1511204161764,
  "status" : "green",
  "indices" : {
    "count" : 2,
    "shards" : {
      "total" : 12,
      "primaries" : 6,
      "replication" : 1.0,
      "index" : {
        "shards" : {
          "min" : 2,
          "max" : 10,
          "avg" : 6.0
        },
        "primaries" : {
          "min" : 1,
          "max" : 5,
          "avg" : 3.0
        },
        "replication" : {
          "min" : 1.0,
          "max" : 1.0,
          "avg" : 1.0
        }
      }
    },
    "docs" : {
      "count" : 9087383,
      "deleted" : 7
    },
    "store" : {
      "size" : "111.2gb",
      "size_in_bytes" : 119442086334,
      "throttle_time" : "0s",
      "throttle_time_in_millis" : 0
    },
    "fielddata" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "total_count" : 0,
      "hit_count" : 0,
      "miss_count" : 0,
      "cache_size" : 0,
      "cache_count" : 0,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 269,
      "memory" : "168.7mb",
      "memory_in_bytes" : 176988517,
      "terms_memory" : "142mb",
      "terms_memory_in_bytes" : 148954745,
      "stored_fields_memory" : "18.8mb",
      "stored_fields_memory_in_bytes" : 19778064,
      "term_vectors_memory" : "3.1mb",
      "term_vectors_memory_in_bytes" : 3294112,
      "norms_memory" : "733.5kb",
      "norms_memory_in_bytes" : 751104,
      "points_memory" : "1.9mb",
      "points_memory_in_bytes" : 2095672,
      "doc_values_memory" : "2mb",
      "doc_values_memory_in_bytes" : 2114820,
      "index_writer_memory" : "0b",
      "index_writer_memory_in_bytes" : 0,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "0b",
      "fixed_bit_set_memory_in_bytes" : 0,
      "max_unsafe_auto_id_timestamp" : -1,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 9,
      "data" : 4,
      "coordinating_only" : 2,
      "master" : 3,
      "ingest" : 4
    },
    "versions" : [
      "5.6.3"
    ],
    "os" : {
      "available_processors" : 44,
      "allocated_processors" : 44,
      "names" : [
        {
          "name" : "Linux",
          "count" : 9
        }
      ],
      "mem" : {
        "total" : "180.2gb",
        "total_in_bytes" : 193549418496,
        "free" : "24.6gb",
        "free_in_bytes" : 26420674560,
        "used" : "155.6gb",
        "used_in_bytes" : 167128743936,
        "free_percent" : 14,
        "used_percent" : 86
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 0
      },
      "open_file_descriptors" : {
        "min" : 340,
        "max" : 398,
        "avg" : 369
      }
    },
    "jvm" : {
      "max_uptime" : "1.6h",
      "max_uptime_in_millis" : 5862031,
      "versions" : [
        {
          "version" : "1.8.0_151",
          "vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
          "vm_version" : "25.151-b12",
          "vm_vendor" : "Oracle Corporation",
          "count" : 9
        }
      ],
      "mem" : {
        "heap_used" : "3gb",
        "heap_used_in_bytes" : 3324930592,
        "heap_max" : "88.6gb",
        "heap_max_in_bytes" : 95179505664
      },
      "threads" : 369
    },
    "fs" : {
      "total" : "1.6tb",
      "total_in_bytes" : 1813068460032,
      "free" : "1.5tb",
      "free_in_bytes" : 1683792908288,
      "available" : "1.4tb",
      "available_in_bytes" : 1591481704448
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "netty4" : 9
      },
      "http_types" : {
        "netty4" : 9
      }
    }
  }
}

That looks pretty good. Can't see anything obviously alarming with that so far.

What does CPU and disk I/O look like on the data nodes in the cluster when you run the query over the longer time period?
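If it helps, one way to watch this is the cat nodes API for per-node CPU and load, plus iostat on the data nodes themselves for disk I/O (host and port below are placeholders):

# Per-node CPU, load average and heap usage via the cat nodes API
curl -XGET 'localhost:9200/_cat/nodes?v&h=name,node.role,cpu,load_1m,heap.percent'

# On each data node: extended disk I/O statistics, refreshed every 5 seconds
# (iostat comes from the sysstat package)
iostat -x 5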

@Christian_Dahlqvist It's not that bad; the CPU goes up to 50-60% on the data nodes when I try to load the graph for the large data set. After that the graph just goes blank.

Here is the screenshot:
