Thank you @Christian_Dahlqvist for your reply.
I have 5 nodes in AWS, all with the master/data/ingest roles, all on r5.xlarge instances.
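(The roles are just the 6.x defaults, which is why they do not appear in my config below; set explicitly in elasticsearch.yml they would be:)
node.master: true
node.data: true
node.ingest: true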
Elasticsearch Plugin
discovery-ec2
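(For reference, the plugin is installed and verified with the standard plugin CLI; the path below assumes the default package-install location under /usr/share/elasticsearch:)
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install discovery-ec2
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list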
Elasticsearch Configuration
cluster.name: axxxxxx-cda-es
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
discovery.zen.hosts_provider: ec2
discovery.ec2.endpoint: ec2.eu-west-1.amazonaws.com
discovery.ec2.availability_zones: eu-west-1c
xpack.security.enabled: false
xpack.monitoring.enabled: true
xpack.ml.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
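To confirm each node actually picked these settings up, they can be read back from the node itself, e.g. against the local node:
curl -s 'localhost:9200/_nodes/_local/settings?pretty'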
System Configuration
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
ES_HEAP_SIZE=15g
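(On 5.x and later the heap is actually configured in /etc/elasticsearch/jvm.options rather than through an environment variable, so the 15g corresponds to:)
-Xms15g
-Xmx15g
15g is just under half of the r5.xlarge's 32 GB of RAM, which keeps the heap within the usual half-of-RAM recommendation and well under the compressed-oops limit.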
Here is an example result when querying from an on-prem server via the AWS ELB DNS name.
curl -s http://axxxxxx-xxx.xxxx.aws-int.xxx.com:9200/_cat/nodes
10.97.147.132 68 79 1 0.00 0.00 0.00 mdi * jJJXeX5
10.97.145.7 6 58 0 0.00 0.00 0.00 mdi - qjsKDma
10.97.149.211 35 70 0 0.00 0.00 0.00 mdi - 0Nxucik
10.97.146.204 37 92 3 0.01 0.05 0.07 mdi - 1GHoG1z
10.97.146.203 55 96 1 0.04 0.02 0.03 mdi - aJNjk4e
curl -s http://axxxxxx-xxx.xxxx.aws-int.xxx.com:9200/_cat/indices
green open cda-puppetreportsdev-m OmP8omu4R7KDDjfZwBRY8A 1 1 67072 0 880.2mb 439.3mb
green open cda-puppetreports-w 4IV7EEBdQTKKtg1YIl-uPA 1 1 2400385 0 48.9gb 24.4gb
green open cda-lastreportstatusdev QaMXRyjlTUOsPV-z05JFhQ 1 1 439 6 123.8kb 64kb
green open .kibana-cda-puppetreports 5SWX8cNJRO2QqSPMB68lqQ 1 1 0 0 522b 261b
green open cda-lastreportstatus Z1MTtv57SwGXKtY1b7v6aA 1 1 12780 48 2mb 1mb
green open cda-puppetreportsdev-m-2020.07 ELHP9IUFQ0GOwsbtGoMqzw 1 1 63131 8501 915.2mb 457.7mb
green open cda-puppetreports-w-2020.07.20 S-eovdOFTHaWAzqEzj1CvQ 1 1 1289668 481331 35.6gb 18gb
curl -sv http://axxxxxx-xxx.xxxx.aws-int.xxx.com:9200/_cluster/health?pretty
* About to connect() to axxxxxx-xxx.xxxx.aws-int.xxx.com port 9200 (#0)
*   Trying 10.97.158.162...
* Connected to axxxxxx-xxx.xxxx.aws-int.xxx.com (10.97.158.162) port 9200 (#0)
> GET /_cluster/health?pretty HTTP/1.1
> User-Agent: curl/7.29.0
> Host: axxxxxx-xxx.xxxx.aws-int.xxx.com:9200
> Accept: */*
>
^C
curl -sv -m 60 http://10.97.145.7:9200/_cluster/health?pretty
* About to connect() to 10.97.145.7 port 9200 (#0)
*   Trying 10.97.145.7...
* Connected to 10.97.145.7 (10.97.145.7) port 9200 (#0)
> GET /_cluster/health?pretty HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.97.145.7:9200
> Accept: */*
>
* Operation timed out after 60001 milliseconds with 0 out of -1 bytes received
* Closing connection 0
curl -sv -m 60 http://10.97.145.7:9200/_search
* About to connect() to 10.97.145.7 port 9200 (#0)
*   Trying 10.97.145.7...
* Connected to 10.97.145.7 (10.97.145.7) port 9200 (#0)
> GET /_search HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.97.145.7:9200
> Accept: */*
>
* Operation timed out after 60000 milliseconds with 0 out of -1 bytes received
* Closing connection 0
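Since the TCP connection is established in both cases but zero bytes ever come back, my next step is to capture traffic on the node while repeating the request from on-prem, to see whether the response leaves the node at all (ON_PREM_IP is a placeholder for our on-prem server's address):
sudo tcpdump -ni eth0 'tcp port 9200 and host ON_PREM_IP'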
On the AWS ES nodes themselves, there are no issues.
[root@ip-10-97-145-7 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:D0:38:9E:4B:F8
          inet addr:10.97.145.7  Bcast:10.97.151.255  Mask:255.255.248.0
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:4513379 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3286895 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21120363749 (19.6 GiB)  TX bytes:12954732518 (12.0 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:233 errors:0 dropped:0 overruns:0 frame:0
          TX packets:233 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:42277 (41.2 KiB)  TX bytes:42277 (41.2 KiB)
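One thing I also want to rule out, given the MTU of 9001 on eth0, is a path-MTU problem between AWS and on-prem. With do-not-fragment pings from the node, an 8973-byte ICMP payload corresponds to a full 9001-byte IP packet, and 1472 bytes to a standard 1500-byte frame (ON_PREM_HOST is a placeholder):
ping -M do -s 8973 -c 3 ON_PREM_HOST
ping -M do -s 1472 -c 3 ON_PREM_HOST
If the jumbo-sized probe fails while the 1472-byte one succeeds, the path only supports 1500, and broken PMTUD along the way would be worth checking.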
[root@ip-10-97-145-7 ~]# curl -s localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "a205171-cda-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 7,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}