Elasticsearch index creation is 4 days behind in 6.5.4

I have recently upgraded my production cluster to 6.5.4 and am seeing significant delays ingesting high-volume logs in the cluster.

I'm getting around 3 TB from 560 Filebeat clients, which use multiple prospectors. Among them, two logs are very high volume and generate 1 to 2 TB indices, and those are the ones that show as behind in ingesting.

Here is my current data flow to Elasticsearch:

560 Filebeat => 3 Logstash servers => 4 dedicated Elasticsearch ingest nodes => 6 data nodes

I'm not sure whether Filebeat or Logstash is delaying the process; any input would be helpful.
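If Logstash turns out to be the bottleneck, raising the pipeline worker count and batch size sometimes helps on hardware like this. A sketch for `logstash.yml` (the values below are assumptions to tune for your own hosts, not recommendations from this thread):

```yaml
# logstash.yml -- illustrative values only
pipeline.workers: 32       # defaults to the number of CPU cores
pipeline.batch.size: 1024  # events per worker batch; default is 125
pipeline.batch.delay: 50   # ms to wait for a batch to fill before flushing
```

Larger batches trade a little latency and heap for fewer, bigger bulk requests to Elasticsearch, so watch heap usage after changing them.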

What does CPU usage on Logstash and dedicated ingest node look like? What is the specification of your data nodes? What does CPU, disk I/O and iowait look like on these nodes?
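One quick way to check iowait on a node is the aggregate CPU line in `/proc/stat` (a minimal Linux sketch; `iostat -x 5` from the sysstat package gives richer per-device detail):

```shell
# First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
# (values are ticks accumulated since boot)
read -r _ user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
pct=$((100 * iowait / total))
echo "iowait: ${pct}% of CPU time since boot"
```

A persistently high iowait percentage on the data nodes would point at disk I/O rather than CPU as the ingest bottleneck.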

Currently using Dell PowerEdge 720 servers, which have 190 GB of memory and a fairly high-end CPU configuration.

[root@elk-es-ho-16 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Stepping: 7
CPU MHz: 3162.243
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 5200.04
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts spec_ctrl intel_stibp flush_l1d
[root@elk-es-ho-16 ~]#

Thanks / Ravi

That does not really answer the question.

Sorry about that, I didn't read the question properly. On Logstash, average CPU usage is about 1000%, with spikes to about 2542%.
On the Elasticsearch data nodes it is about 1327%.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
47120 logstash 39 19 0.148t 0.104t 18020 S 983.4 56.6 3449:40 java

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17390 elastic+ 20 0 2.885t 0.103t 9.877g S 1327 55.6 101751:57 java

Disk space: 75 TB
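With indexing volume this heavy, relaxing the refresh interval on the busiest indices sometimes buys indexing headroom. A sketch of a body for the index update-settings API (`PUT <index>/_settings`); the 30s value is an assumption to tune, and the default is 1s:

```json
{
  "index": {
    "refresh_interval": "30s"
  }
}
```

The trade-off is that newly indexed documents take up to that long to become visible to searches.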

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.