Logstash sending less data to ElasticSearch Cluster

Hello,
I am sending data from Filebeat to a Logstash machine and then on to an Elasticsearch cluster.
I have the persistent queue enabled on Logstash. The queue fills up very quickly, and I can't figure out why it is not sending data to Elasticsearch. The network monitor on the Logstash machine looks odd. Can anyone help me with this issue?
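For reference, the persistent queue is enabled in logstash.yml along these lines (the setting names are from the Logstash documentation; the size and path values shown here are illustrative, not my exact configuration):

```
# logstash.yml -- persistent queue settings (values illustrative)
queue.type: persisted                 # buffer events on disk instead of in memory
queue.max_bytes: 4gb                  # queue capacity; Logstash back-pressures Filebeat when full
path.queue: /var/lib/logstash/queue   # where the queue pages are stored
```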


The graph is in MiB per minute, so it is 43 MiB per minute at 19:53 in the image.
Green is receiving and yellow is transmitting. I don't know why they are exactly symmetric. I have done a speed test on this machine and it can reach up to 500 Mb/s download and 500 Mb/s upload.

What is the hardware specification of the Elasticsearch cluster?

1 coordinating node, 1 master node, and 4 data nodes.
2 of the data nodes are master-eligible as well.
Data nodes:
RAM: 31.548 GB
Total disk: 2.3 TB
Coordinating node:

Model: c5n.large
vCPU: 2
Memory: 5.25 GiB
Instance storage: EBS-only
Network bandwidth: up to 25 Gbps
EBS bandwidth: up to 3,500 Mbps

Master node:
RAM: 3.549 GB
Total disk: 31 GB

Model name: Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz
Stepping: 3
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K

This shouldn't be an ingestion problem, I think. Why is the network graph symmetric?

What instance types are you using for the data nodes? If it is all deployed on AWS it would be great if you could list all instance types used.

Data nodes: r5.xlarge (4 machines)
Master node: c5.large
Coordinating node: c5n.large
Logstash machine: c4.xlarge
I have given half of the RAM on each machine to the JVM.
Logstash is receiving data at a rate of 1 MB/s from several Filebeat instances.
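For reference, the "half the RAM" heap sizing is set in Elasticsearch's jvm.options file (these are standard JVM flags; the 16g value is illustrative for a ~31 GB data node, not my exact setting):

```
# jvm.options -- set min and max heap to the same value,
# roughly half the machine's RAM (illustrative for a 31 GB node)
-Xms16g
-Xmx16g
```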

Any idea?

Is there anything in the Elasticsearch logs or stats indicating that it is struggling or limited by resources, which would cause Logstash to apply back pressure? What do CPU usage, GC, and disk I/O look like on the Elasticsearch nodes?
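One quick way to check for back pressure is to look at write (bulk) thread pool rejections on the cluster; a sketch assuming a node reachable on localhost:9200 (on Elasticsearch 5.x the pool is named `bulk` rather than `write`):

```shell
# Non-zero "rejected" counts mean Elasticsearch is pushing back on
# indexing, which Logstash will see as back pressure.
curl -s 'localhost:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected'

# Per-node CPU, heap, and load at a glance
curl -s 'localhost:9200/_cat/nodes?v&h=name,cpu,heap.percent,load_1m'
```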

No,
I am just sending from this one Logstash machine.
The Logstash machine is running on CentOS 6. Could that be a problem?

I don’t know but would check Elasticsearch first. Do you output to any other systems?

Yes, I write to disk as well as sending to Elasticsearch.
Do you have any idea why upload speed + download speed stays constant, even though my machine supports faster transfers?

Any help?

Did you check Elasticsearch?

Yes,
there are no errors or warnings in the Elasticsearch logs, so logging has not stopped. But Logstash is ingesting at a lower rate. Also, would adding one more coordinating node help?

Do you have monitoring installed on Elasticsearch? If so, what is the reported indexing rate? What does heap usage look like? What about CPU usage?

Indexing at 14,000 events/s,
and heap usage across the whole cluster is at 50%.

What about CPU usage? What about disk I/O and iowait, e.g. via iostat -x?
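For example (iostat comes from the sysstat package; the interval and count values are illustrative):

```shell
# Extended per-device stats every 5 seconds, 3 samples.
# Sustained high %iowait, or %util near 100 on the data-node disks,
# points at storage being the bottleneck.
iostat -x 5 3
```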

How much CPU is the Logstash machine using?

On the Kibana dashboard:
JVM heap usage = 50% average across all nodes
CPU usage:
Coordinator = 12%
[node-data-1] = 37%, #shards = 190
[node-data-2] = 78%, #shards = 190
[node-data-3] = 40%, #shards = 190
[node-data-4] = 36%, #shards = 190
Master = 0%
Logstash machine = 60% CPU usage
Only Logstash is running on the Logstash machine.
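If it helps, Logstash's own monitoring API can show whether the persistent queue is the choke point; a sketch assuming the default API port 9600 on the Logstash machine (on Logstash 5.x the path is /_node/stats/pipeline, singular):

```shell
# Event throughput and queue depth from Logstash's node stats API.
# A growing queue_size_in_bytes while the "out" event count stays flat
# suggests the Elasticsearch output is the bottleneck.
curl -s 'localhost:9600/_node/stats/pipelines?pretty'
```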