Connection refused when trying to run Elasticsearch on AWS EC2

Hello. I tried curling the Elasticsearch instance running on my EC2 instance but got a connection refused error. I did this while connected to the instance over SSH, as follows:

curl -X GET http://127.0.0.1:9200

as well as variations using localhost and 0.0.0.0.

I also tried using the instance's public IP from my local machine, to no avail.
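
(For reference, every one of those attempts returned curl's standard connection-refused error, roughly like the following; the exact wording varies with the curl version.)

curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused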

I also tried setting up my EC2 instance with a security group in which all inbound and outbound traffic is allowed on every port.
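
(For what it's worth, a wide-open group shouldn't even be necessary; a single inbound rule for TCP 9200 from my own address would be enough. With the AWS CLI that would look roughly like the line below, where the group ID and source CIDR are placeholders:)

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9200 --cidr 203.0.113.10/32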

I also tried playing around with the yml file as described here, to no avail: Elastic Search - Connection Refused when tried to access from different System

The funny thing is that on my local machine, Elasticsearch works out of the box, so to speak, with no firewall configuration needed to send curl requests to it locally.
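
(For comparison, on the local machine the same curl immediately returns the usual cluster-info JSON, something along these lines, with the names and version depending on the install:)

{
  "name" : "my-node",
  "cluster_name" : "elasticsearch",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}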

Any help would be much appreciated.

What are your elasticsearch.yml settings? network.host?
What are the logs? (Formatted please)

I restored my elasticsearch.yml to the default settings, with everything commented out apart from the paths and network.host shown below, since uncommenting other things didn't help:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

I'm guessing that by logs you mean the output of running journalctl -u elasticsearch, in which case I got the following:
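
(Specifically, something along the lines of:)

sudo journalctl -u elasticsearch --no-pager -n 50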

Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: Started Elasticsearch.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errn
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: #
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # There is insufficient memory for the Java Runtime Environment to continue.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # An error report file with more information is saved as:
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # /tmp/hs_err_pid8463.log
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Unit entered failed state.
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.

From this, I think I've determined that OpenJDK 8 can't allocate the roughly 2 GB heap it's asking for out of whatever's left of the 1 GB of RAM this EC2 instance comes with. I suppose my question is: how can I get around that bottleneck? Thanks!
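
(For anyone diagnosing the same thing: comparing the heap the JVM has been told to allocate against what the instance actually has free makes the mismatch obvious. This assumes a package install that keeps the JVM flags in /etc/elasticsearch/jvm.options:)

free -h
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options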

By default Elasticsearch tries to allocate 2 GB for the heap, which means you should probably start with 4 GB instances.

You can decrease the heap size, but don't expect miracles with 512 MB of heap.
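
A minimal sketch of decreasing it, assuming a package install where the heap flags live in /etc/elasticsearch/jvm.options: set both values to the same, smaller size, for example

-Xms512m
-Xmx512m

and then restart the service:

sudo systemctl restart elasticsearch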

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.