Hello. I tried curling the Elasticsearch instance running on my EC2 instance but got a connection refused error. I tried this while remoted into the machine via SSH.
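It was something along these lines (9200 being the default HTTP port, and I was hitting localhost from inside the instance):
curl -XGET 'http://localhost:9200'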
The funny thing is that on my local machine, Elasticsearch works out of the box, so to speak: no firewall configuration was needed to send curl requests to it locally.
I restored my elasticsearch.yml to essentially the default settings, with everything commented out except path.data, path.logs, and network.host, since uncommenting other settings didn't help:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
I'm guessing that by logs you mean the output of running journalctl -u elasticsearch, in which case I got the following:
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: Started Elasticsearch.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errn
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: #
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # There is insufficient memory for the Java Runtime Environment to continue.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # An error report file with more information is saved as:
Oct 19 18:35:02 ip-172-31-27-17 elasticsearch[8463]: # /tmp/hs_err_pid8463.log
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Unit entered failed state.
Oct 19 18:35:02 ip-172-31-27-17 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
From this, I think I've determined that OpenJDK 8 can't allocate the roughly 2 GB it's asking for (2060255232 bytes, per the log) out of whatever's left of the 1 GB of RAM this EC2 instance comes with, so Elasticsearch exits before it ever binds to port 9200, which would explain the connection refused. I suppose my question is: how can I get around that bottleneck?
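For what it's worth, I'm assuming the heap size is set in /etc/elasticsearch/jvm.options (that's where a package install keeps it, I believe), so one workaround I could presumably try is shrinking the heap to fit in the instance's RAM, something like:
# /etc/elasticsearch/jvm.options (assumed path)
# lower the JVM heap from the ~2 GB it's currently requesting so it fits in ~1 GB of RAM
-Xms512m
-Xmx512m
and then restarting with sudo systemctl restart elasticsearch. I'd appreciate confirmation that this is the right knob to turn, or whether I just need a bigger instance. Thanks!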