Assistance with ELK

Hello

I have been using ELK for quite some time now and am currently facing issues with memory utilization and timeouts in Kibana when running long search queries. My ELK setup is a single-node cluster, and when I run htop I see a lot of ES processes, with 57% of memory being consumed by ES. I understand ES uses many threadpools. The total memory assigned is 32GB. While we are planning a long-term solution, in the meantime can I get some suggestions on what we can do to reduce resource utilization?

Thanks!

And I get errors as seen in the image whenever I try to get data for a larger time period, like 1 year or so.

Sounds like your cluster is "overloaded" or can't keep up with all the shards you have?

What is the output of:

GET /_cat/indices?v
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/shards?v

Thanks for the response David! We use a single-node cluster as of now.
Below are the requested details. For the list of indices and shards I have included just the last line of the output, to show the total count:

GET /_cat/indices?v
755 yellow open   logstash-2017.09.21 ECVjKguDR5OLYYbOA8cglw   5   1        217            0    270.2kb        270.2kb

GET /_cat/nodes?v
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1           33          90   2    0.32    0.29     0.29 mdi       *      QMN7LjG

GET /_cat/health?v
epoch      timestamp cluster       status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1556872627 08:37:07  elasticsearch yellow          1         1   3766 3766    0    0     3766             0                  -                 50.0%

GET /_cat/shards?v 	
7532 logstash-2018.03.14 0     p      STARTED        121  157.4kb 127.0.0.1 QMN7LjG
7533 logstash-2018.03.14 0     r      UNASSIGNED                            

Thanks

So you have 3766 shards on a 32GB heap. Correct?
We recommend no more than 20 shards per GB of heap on a data node, so at most 640 shards per node.
And that's not counting the fact that your node is also managing the cluster (master node) and receiving the requests coming from Kibana (coordinating node). For each request, it has to hit every single shard, keep each shard response in memory to merge the results, and then send the results back to Kibana.

I think you can now imagine why you are seeing this behavior.

How to fix?

Multiple options (that you can combine):

  • Change the default number of shards to 1. You can have up to 50GB per shard. One of the indices you have holds 270.2kb of data spread over 5 shards!
  • Use the shrink API
  • Use the rollup API to compress data with less granularity
  • Start new nodes (and if you don't need replicas, set the number of replicas to 0 first; see the example below)
  • Remove old indices
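For the existing indices, something like this should work (the index name in the DELETE is just an example taken from your output):

PUT /logstash-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

DELETE /logstash-2017.09.21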

May I suggest you look at the following resources about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

And https://www.elastic.co/webinars/using-rally-to-get-your-elasticsearch-cluster-size-right

Hi David, thanks so much for the reply. I will try the suggested options and get back to you in case I have any further questions.

Thanks!

Just a quick question. To check the number of shards, this is the query I used:
"GET _cat/shards"

From what I see with this, we have about 7533 shards. But I see the number 3766 under 'shards' when I run "GET /_cat/health?v"

epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1556872627 08:37:07 elasticsearch yellow 1 1 3766 3766 0 0 3766 0 - 50.0%

And I guess you pointed to the number 3766 from this query. I would like to know why there is a difference and what the actual number of shards to consider is.

Thanks!

That's the count of replica shards too, I guess.

Exact
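If you want to see that breakdown directly, check the cluster health API:

GET /_cluster/health

In the response, active_primary_shards and active_shards are your 3766 started primaries, while unassigned_shards is the 3766 replicas that can never be allocated on a single node. _cat/shards lists both, which is why you see roughly twice as many lines there.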

Hi David
I have a few queries regarding the same:

  1. As I only have a single node, enabling replica shards is of no use, since the replicas would have to stay on the same node anyway. So I can set the number of replicas to 0. Please correct me if I'm wrong.

  2. As you recommended, I can change the default number of shards to 1. From the doc we have two options to change 'index.number_of_shards' and 'index.number_of_replicas':

a. We can add the below lines to elasticsearch.yml:

index.number_of_shards: 1
index.number_of_replicas: 0

OR

b. We can make use of templates, which will apply the changes to new indices.

As of now I have one template, which looks like this:


{
  "logstash": {
    "order": 0,
    "version": 50001,
    "template": "logstash-*",
    "settings": {
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "_all": {
          "enabled": true,
          "norms": false
        },
        "dynamic_templates": [
          {
            "message_field": {
              "path_match": "message",
              "match_mapping_type": "string",
              "mapping": {
                "type": "text",
                "norms": false
              }
            }
          },
          {
            "string_fields": {
              "match": "*",
              "match_mapping_type": "string",
              "mapping": {
                "type": "text",
                "norms": false,
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              }
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date",
            "include_in_all": false
          },
          "@version": {
            "type": "keyword",
            "include_in_all": false
          },
          "geoip": {
            "dynamic": true,
            "properties": {
              "ip": {
                "type": "ip"
              },
              "location": {
                "type": "geo_point"
              },
              "latitude": {
                "type": "half_float"
              },
              "longitude": {
                "type": "half_float"
              }
            }
          }
        }
      }
    },
    "aliases": {}
  }
}

From the doc https://www.elastic.co/guide/en/elasticsearch/reference/5.6/indices-templates.html I can use the below template to add
"number_of_shards": 1 and "number_of_replicas": 0:

PUT _template/template_1
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "type1": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z YYYY"
        }
      }
    }
  }
}

Now my question is: if I use this new template, what happens to my existing template? Will there be any issues? Or should I make any changes to the second (new) template with regard to the first, already existing template?

And I understand that this change will apply only to new indices, and if we want the changes to be applied to already created indices we have to make use of the shrink API.

Thanks!

Setting this in elasticsearch.yml is not supported anymore.
You need to do that in a template, as you proposed.

And I understand that this change will apply only to new indices, and if we want the changes to be applied to already created indices we have to make use of the shrink API.

Correct. This will be applied to newly created indices. Existing indices won't change.
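If you do want to reduce the shard count of an index that already exists, a shrink looks roughly like this (the source and target names are just examples). The index has to be read-only and green first, so drop the replicas and block writes, then shrink:

PUT /logstash-2017.09.21/_settings
{
  "index.number_of_replicas": 0,
  "index.blocks.write": true
}

POST /logstash-2017.09.21/_shrink/logstash-2017.09.21-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}

On a single node all the shard copies already live together, so no extra allocation step is needed before shrinking.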

Thanks David!

Just to clarify, adding the new template (shared before) will not have any effect on the existing template, correct?

Regards!

I'd overwrite the existing template.
Or you can create a new one, which indeed won't affect the existing one.

Templates will be "merged" in memory before being applied at index creation.
You should give it a try in a test env to make sure that it works as you expect before pushing this to production.
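For example (the template name below is just a suggestion), a small second template with a higher order that only carries the shard settings would be merged on top of your existing order-0 logstash template whenever a new index is created:

PUT _template/logstash_shards
{
  "order": 1,
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}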

Alright, Thanks David.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.