How to accelerate the searching if there is a large number of data in es?

ELK version: 5.5.1

There are 5 nodes in my es cluster.

I use ELK to collect nginx logs. One project generates about 100GB of logs every day, and its dashboard is very slow to open in Kibana...

So, how can I make the dashboard open faster? Should I add more ES nodes, replace the hard disks with SSDs, or do something else?

FYI we’ve renamed ELK to the Elastic Stack, otherwise Beats feels left out :wink:

Either of those would help. As would upgrading to 5.6.1.

However, it'd help if you provided your node size, the number of indices and shards, and your OS and JVM versions.
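(Most of that can be pulled from the cat APIs — a quick sketch, assuming a node listening on `localhost:9200`:)

```shell
# Node names with heap, RAM, and free disk (host/port are an assumption)
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,disk.avail'

# Total number of indices and shards in the cluster
curl -s 'localhost:9200/_cat/indices' | wc -l
curl -s 'localhost:9200/_cat/shards'  | wc -l
```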

# cat /etc/issue
CentOS release 6.6 (Final)
Kernel \r on an \m
# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

62GB of memory:

# free -g
             total       used       free     shared    buffers     cached
Mem:            62         61          0          0          0         13
-/+ buffers/cache:         48         14
Swap:            0          0          0

1TB data disk:

# df -hT
Filesystem           Type   Size  Used Avail Use% Mounted on
/dev/sda2            ext4    40G   12G   26G  31% /
tmpfs                tmpfs   32G     0   32G   0% /dev/shm
/dev/sda1            ext4   190M   57M  124M  32% /boot
/dev/sda5            ext4   128G  7.3G  114G   6% /nh
/dev/sdb             ext4   917G  635G  235G  73% /data

Indices: 759

Total Shards: 6572

:sob:

You have a very high shard count — 6572 shards across 5 nodes is over 1300 shards per node. Can you use _shrink on some of them?

How do I use shrink...?

https://www.elastic.co/guide/en/elasticsearch/reference/5.6/indices-shrink-index.html :slight_smile:
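In short it's a two-step process: first block writes on the source index and force a copy of every shard onto a single node, then issue the shrink. The target's primary shard count must be a factor of the source's. A minimal sketch — the index and node names below are placeholders, not values from this thread:

```shell
# Step 1: make the source index read-only and require that a copy of
# every shard be allocated to one node (names are placeholders)
curl -XPUT 'localhost:9200/logstash-2017.09.01/_settings' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.routing.allocation.require._name": "node-1",
    "index.blocks.write": true
  }
}'

# Step 2: once relocation finishes, shrink into a new index with
# fewer primaries (must divide the source shard count evenly)
curl -XPOST 'localhost:9200/logstash-2017.09.01/_shrink/logstash-2017.09.01-shrunk' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}'
```

Once the shrunken index is green and verified, the original can be deleted to reclaim the shard count.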

1 Like

Thanks a lot, let me read it and have a try. :kissing_heart:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.