ES responses show a ~20-second delay when observed in the browser network logs

Hi All,

We are observing a random ~20-second delay in ES responses on a two-node ES cluster running on two AWS EC2 instances spread across two subnets. I followed the steps in the reference link below to generate a CA (elastic-stack-ca.p12) and enable SSL between the two nodes; Kibana on node-1 uses the default http_ca.crt for its SSL config. I'm not sure if there is something I am missing here; any help is much appreciated.
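For context, the certificate generation from that guide boils down to roughly the following commands (the install path and the copy destination are assumptions matching the yml files below; the certutil tool prompts for passwords interactively):

```shell
# Generate a CA (writes elastic-stack-ca.p12 by default)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca

# Generate transport certificates signed by that CA
# (writes elastic-certificates.p12, used as both keystore and truststore on both nodes)
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Place the result at the path referenced in elasticsearch.yml
cp elastic-certificates.p12 /etc/elasticsearch/certs/
```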

Below are the elasticsearch.yml files from both instances, named node-1 and node-2.

node-1 elasticsearch.yml

cluster.name: test-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: [_local_,_site_]
http.port: 9200
discovery.seed_hosts: ["10.2.58.x", "10.2.68.x"]
discovery.seed_providers: ec2
discovery.ec2.endpoint: ec2.eu-central-1.amazonaws.com
discovery.ec2.tag.cluster_name: test-cluster
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
logger.org.elasticsearch.discovery.ec2: "TRACE"
cluster.initial_master_nodes: ["node-1", "node-2"]
action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
  truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
http.host: 0.0.0.0

node-2 elasticsearch.yml

cluster.name: test-cluster
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: [_local_,_site_]
http.port: 9200
discovery.seed_hosts: ["10.2.58.x", "10.2.68.x"]
discovery.seed_providers: ec2
discovery.ec2.endpoint: ec2.eu-central-1.amazonaws.com
discovery.ec2.tag.cluster_name: test-cluster
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
logger.org.elasticsearch.discovery.ec2: "TRACE"
cluster.initial_master_nodes: ["node-1", "node-2"]
action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
  truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12
http.host: 0.0.0.0

kibana.yml (deployed on the same server as node-1)

server.host: "0.0.0.0"
server.publicBaseUrl: "http://10.2.58.x:5601"
server.name: "test-kibana"
elasticsearch.hosts: ["http://10.2.58.x:9200", "http://10.2.68.x:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxx"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/http_ca.crt" ]

Reference link: Set up basic security for the Elastic Stack | Elasticsearch Guide [8.0] | Elastic

Note: Additionally, I installed the discovery-ec2 plugin and set up the JVM options and memlock.
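As an aside, one way to narrow down where the ~20 seconds goes (DNS, TCP connect, or server-side processing) is curl's timing variables; the host and credentials below are placeholders from this post:

```shell
# Break the response time into phases; a long ttfb (time to first byte) combined
# with a short connect time points at the server rather than the network.
curl -s -o /dev/null \
  -u elastic:xxxxx \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  http://10.2.58.x:9200/
```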

How far apart are the two nodes deployed? What instance types are you using?

Also note that having two nodes in the cluster does not provide high availability. If you lose one of your master-eligible nodes, temporarily or permanently, the cluster will not be fully functional. At least 3 master-eligible nodes are required for a cluster that can remain fully functional if one node is lost.
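For example, the third node could be a small dedicated voting-only tiebreaker, which is master-eligible for quorum purposes but never becomes master and holds no data. A rough sketch (node name, paths, and seed hosts follow the examples in this thread):

```yaml
# elasticsearch.yml for a small dedicated tiebreaker node
cluster.name: test-cluster
node.name: node-3
# master-eligible but voting-only: it can break ties in master elections,
# but will not itself be elected master and carries no data or ingest load
node.roles: [master, voting_only]
discovery.seed_hosts: ["10.2.58.x", "10.2.68.x"]
```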

Hey Christian, thank you for responding.

Here are the instance types and subnet CIDRs; the nodes are split between two availability zones.

Instance type: m5.2xlarge (8 vCPUs, 32 GiB RAM) with EBS volumes for storage
subnet-a: 10.2.58.x/24
subnet-b: 10.2.68.x/24

What would be a suggested configuration for a 3rd tiebreaker node, and for making only one node master-eligible per the link shared above? Should I update the config on all nodes to cluster.initial_master_nodes: ["node-1"]?