Talking nodes on same machine

Hi everyone,

I have to test my Elasticsearch behaviour when there are at least 2 nodes and they run on the same machine.
I created 2 directories so that I have 2 elasticsearch.yml files to configure (I don't know if this approach is advisable).
These are my 2 elasticsearch.yml files (they are identical):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.max_local_storage_nodes: 3
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
#The default list of hosts is ["192.168.2.126", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["localhost:9300", "localhost:9301"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled: false
xpack.security.audit.enabled: false
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length, Authorization
http.cors.allow-credentials: true

path.repo: ["/home/giuseppe/Scrivania/elasticsearch-6.2.2/elasticsearch-backup"]

node.master: true

So, when I start my 2 Elasticsearch instances, I get this behaviour:

Browse http://localhost:9200/_cluster/health?pretty
Response { "cluster_name" : "elasticsearch", "status" : "yellow", "timed_out" : false, "number_of_nodes" : 1, "number_of_data_nodes" : 1, "active_primary_shards" : 1329, "active_shards" : 1329, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 1266, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 51.213872832369944 }

Browse http://localhost:9200/_cat/nodes
Response {"error":{"root_cause":[{"type":"null_pointer_exception","reason":null}],"type":"null_pointer_exception","reason":null},"status":500}

Browse http://localhost:9201/_cluster/health?pretty
Response { "cluster_name" : "elasticsearch", "status" : "red", "timed_out" : false, "number_of_nodes" : 2, "number_of_data_nodes" : 2, "active_primary_shards" : 14, "active_shards" : 14, "relocating_shards" : 0, "initializing_shards" : 4, "unassigned_shards" : 2651, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 5, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 11311, "active_shards_percent_as_number" : 0.5245410266017235 }

Browse http://localhost:9201/_cat/nodes
Response:
127.0.0.1 37 96 26 1.42 0.92 0.86 mdi - djVlhMR
127.0.0.1 36 96 27 1.42 0.92 0.86 mdi * aLwf7SY

Can anyone explain this to me, and tell me whether my nodes are talking to each other?

Thanks a lot!

Hi @Giuseppe_Merlo,

We need to see the logs :slight_smile:
You have a null pointer exception and the logs will show the stack trace.
Also, you need an individual config file for each node in order to avoid port clashes (e.g. both nodes trying to bind to port 9200).
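For example, something like this (a minimal sketch; the node names and ports are illustrative):

# node-1/config/elasticsearch.yml
node.name: node-1
http.port: 9200
transport.tcp.port: 9300

# node-2/config/elasticsearch.yml
node.name: node-2
http.port: 9201
transport.tcp.port: 9301

With the transport ports pinned like this, your discovery.zen.ping.unicast.hosts: ["localhost:9300", "localhost:9301"] entry always points at the right nodes.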

I created 2 directories so that I have 2 elasticsearch.yml files to configure (I don't know if this approach is advisable).

Not advisable, but possible. In general, if you've got beefy machines, use Kubernetes (k8s) or containers to run multiple ES instances.
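If you just want a quick local two-node cluster without hand-managing ports, here's a docker-compose sketch of the same idea (not Kubernetes, but the same containerised approach; the cluster name, service names, and heap sizes are illustrative):

version: '2.2'
services:
  es1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - cluster.name=test-cluster
      - node.name=es1
      - discovery.zen.minimum_master_nodes=2
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
  es2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - cluster.name=test-cluster
      - node.name=es2
      - discovery.zen.minimum_master_nodes=2
      # Containers resolve each other by service name on the default network.
      - discovery.zen.ping.unicast.hosts=es1
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9201:9200"

Each container gets its own data directory and network namespace, so the port-clash problem goes away entirely.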

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.