[data1] node validation exception

I have one master node and one data node in my cluster. The master node is up, and I can telnet to it from the data node on port 9200. However, when I start the data node, I get the following in the logs:

[2020-06-06T23:09:20,838][DEBUG][i.n.b.ByteBufUtil        ] -Dio.netty.allocator.type: pooled
[2020-06-06T23:09:20,838][DEBUG][i.n.b.ByteBufUtil        ] -Dio.netty.threadLocalDirectBufferSize: 65536
[2020-06-06T23:09:20,839][DEBUG][i.n.b.ByteBufUtil        ] -Dio.netty.maxThreadLocalCharBufferSize: 16384
[2020-06-06T23:09:20,905][DEBUG][o.e.t.n.Netty4Transport  ] [data1] Bound profile [default] to address {192.168.0.79:9300}
[2020-06-06T23:09:20,907][INFO ][o.e.t.TransportService   ] [data1] publish_address {192.168.0.79:9300}, bound_addresses {192.168.0.79:9300}
[2020-06-06T23:09:20,947][INFO ][o.e.b.BootstrapChecks    ] [data1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-06T23:09:20,959][ERROR][o.e.b.Bootstrap          ] [data1] node validation exception
[1] bootstrap checks failed
[1]: JVM is using the client VM [OpenJDK Client VM] but should be using a server VM for the best performance
[2020-06-06T23:09:20,965][INFO ][o.e.n.Node               ] [data1] stopping ...
[2020-06-06T23:09:21,044][INFO ][o.e.n.Node               ] [data1] stopped
[2020-06-06T23:09:21,045][INFO ][o.e.n.Node               ] [data1] closing ...
[2020-06-06T23:09:21,089][INFO ][o.e.n.Node               ] [data1] closed
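
From what I can tell, the failing bootstrap check is about which JVM variant the node is started with, so I assume the first thing to verify is what java -version reports on the data node. Since I believe my install picks the JVM via JAVA_HOME, I would check both the default java and that one, for example:

# Check which JVM variant the default java and the JAVA_HOME java report
java -version 2>&1 | grep -i "vm"
$JAVA_HOME/bin/java -version 2>&1 | grep -i "vm"

If these print "Client VM" rather than "64-Bit Server VM", that would match the error above.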

My elasticsearch.yml on the data node is set as follows:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: data1
node.data: false
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: ["localhost", "192.168.0.79"]
network.host: "192.168.0.79"
bootstrap.system_call_filter: false
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.0.42"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
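
For completeness, the telnet test I mentioned was against port 9200, which is the HTTP port. Since the log shows the node binding its transport to 192.168.0.79:9300, and I believe node-to-node discovery goes over the transport port rather than HTTP, checking 9300 on the master (192.168.0.42) is probably also worth doing, e.g.:

# HTTP port on the master (this is the test I already ran)
telnet 192.168.0.42 9200

# transport port used for discovery between nodes (assuming the default 9300)
telnet 192.168.0.42 9300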

I have also tried lowering the memory options to test.
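
By "memory options" I mean the heap settings in jvm.options; the change was roughly along these lines (the path and values here are illustrative, not my exact ones):

# /etc/elasticsearch/jvm.options (path assumed for a package install)
# initial and maximum heap size, kept equal and lowered for testing
-Xms512m
-Xmx512m

Is there anything else you can suggest I try?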

Thanks
