Low performance

Hi

I have an Elasticsearch cluster of 2 nodes with the same hardware specifications.
Hardware specs of each machine:
RAM: 16 GB
No. of CPUs: 2

node1- master+data
node2- master+data

discovery.zen.minimum_master_nodes: 1

Here are the details of the index:

No. of shards - 5
No. of replicas - 1

Index size - 1.3 GB, with nearly 600,000 documents (primary shards only, without replicas; I am not sure why the replicas are in an unassigned state), growing at roughly 6k documents per day.
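(For reference, one way to see why the replicas are unassigned is the allocation explain API; a minimal sketch, assuming Elasticsearch is listening on localhost:9200:

curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

With no request body it picks an unassigned shard and explains why it cannot be allocated.)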

I have a dashboard of nearly 12 visualizations, all pie charts, and it is taking a long time to load the page (nearly 10-13 seconds).

Are the above-mentioned hardware resources good enough for my requirement, or should I change the shard or replica count to get better performance?

Thanks in advance

As you have 2 master-eligible nodes, discovery.zen.minimum_master_nodes should be set to 2 as per these guidelines.
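A sketch of how that change could be applied, either statically in elasticsearch.yml on both nodes or dynamically via the cluster settings API (host and port assumed to be localhost:9200):

discovery.zen.minimum_master_nodes: 2

curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{ "persistent": { "discovery.zen.minimum_master_nodes": 2 } }
'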

Are these virtual machines? What kind of storage do you have? Are you indexing at the same time you are querying?

How much heap have you got assigned? Which Elasticsearch version are you using?

I set discovery.zen.minimum_master_nodes: 2 and saw the same performance.
Yes, they are virtual machines; I think the resources (CPUs) are shared.
How much heap have you got assigned? - I am not sure how to get this value; it would be great if you could help me find it.
Are you indexing at the same time you are querying? - I didn't get your question; can you elaborate?
Which Elasticsearch version are you using? - 6.1.2
Storage - it is an NFS mount.

Sign in to Kibana (http://machineip:5601 unless you configured it otherwise), go to Monitoring > Nodes, open each node, and the JVM Heap chart there will show you the max heap size.
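(If Monitoring is not enabled in Kibana, the cat nodes API reports the same thing; a minimal sketch, assuming Elasticsearch is listening on localhost:9200:

curl -XGET 'localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'

heap.max is the maximum heap configured on each node.)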

ps -ef | grep elastic
root 9886 1 7 06:26 ? 00:01:18 /data/elasticsearch/jdk1.8.0_131//bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/data/elasticsearch/elasticsearch-6.1.2 -Des.path.conf=/data/elasticsearch/elasticsearch-6.1.2/config -cp /data/elasticsearch/elasticsearch-6.1.2/lib/* org.elasticsearch.bootstrap.Elasticsearch -d

-Xms1g -Xmx1g

As per https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html,

my heap size is 1 GB.
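(The heap is set in the jvm.options file under the config directory; the path below is taken from -Des.path.conf in the ps output above. A sketch of what raising it to 4 GB would look like, followed by a node restart:

/data/elasticsearch/elasticsearch-6.1.2/config/jvm.options:
-Xms4g
-Xmx4g

Xms and Xmx should stay equal, and at or below 50% of RAM, as the linked page describes.)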

NFS storage is not recommended as it can be very slow and also cause corruption.

This is not for improving performance, but rather to give better resilience and avoid data loss.

That is quite a small heap, but should be sufficient given the amount of data you have. If you see long or frequent GC being reported in the logs you may want to increase this, but keep it at or below 50% of RAM.

Make sure that you have access to the resources you think you have. I would recommend installing monitoring to get a good view of how the cluster performs.
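(A lightweight way to spot-check those resources while the dashboard loads, even before full monitoring is installed; a sketch assuming Elasticsearch is listening on localhost:9200:

curl -XGET 'localhost:9200/_nodes/stats/os,jvm,fs?pretty'

This returns per-node CPU load, heap usage and GC counts, and disk statistics in one call.)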

Actually, the storage is ext4, not NFS.

I didn't see any logs regarding GC; I see only one log entry, and that only on one node:
[2018-07-12T06:29:23,957][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][201] overhead, spent [260ms] collecting in the last [1s]

Anyway, I set the heap to 4 GB.

It is still taking 14 seconds to load the dashboard; after refreshing it several times it loads a bit faster, in nearly 7 seconds.

By the way, my index has 259 fields, and the count may increase slowly. Can anything be done here?
Also, some fields are nested-like (e.g. level1.level2.level3 and level1.level2.level3.keyword),
so I have two fields per key (the keyword field comes by default).
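(The extra .keyword field per key comes from the default dynamic mapping, which indexes every new string as text plus a keyword sub-field. If most of those fields are only used for aggregations, such as the pie charts, a dynamic template that maps new strings to keyword only would roughly halve the field count. A sketch only, using a placeholder index name and the _doc type; it would have to be applied when creating or reindexing the index:

curl -XPUT 'localhost:9200/myindex?pretty' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "_doc": {
      "dynamic_templates": [
        { "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
        } }
      ]
    }
  }
}
'

Any field that still needs full-text search would then have to be mapped as text explicitly.)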

I am attaching screenshots of my cluster overview at a normal time and while the dashboard is loading (taken with the Cerebro monitoring tool).
PFA.
