Elasticsearch two-node cluster is not forming: node binds to 127.0.0.1 despite network.host

Config for the 2nd server (elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: campaygn-production
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: campaygn-production-elasticsearch-node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 142.93.42.161
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["209.97.184.107","142.93.42.161"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
indices.query.bool.max_clause_count: 100000
#
# Define this node's roles (not master-eligible, holds data):
#
node.master: false
node.data: true
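For reference, the split-brain comment in the Discovery section gives the formula majority = (total master-eligible nodes / 2) + 1. As a sketch only, under the assumption that both servers were made master-eligible (`node.master: true` on each), that formula yields 2 / 2 + 1 = 2, and the commented-out setting would read:

```yaml
# Hypothetical sketch: assumes BOTH nodes are master-eligible, which this
# config does not currently do (node.master is false on this node).
# majority = total master-eligible nodes / 2 + 1 = 2 / 2 + 1 = 2
discovery.zen.ping.unicast.hosts: ["209.97.184.107", "142.93.42.161"]
discovery.zen.minimum_master_nodes: 2
```

With `node.master: false` here, the cluster has at most one master-eligible node, so this node can only join a master elected elsewhere.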

And the logs for the 2nd server:

[2018-11-14T10:46:57,186][INFO ][o.e.n.Node               ] [q3IrN9m] starting ...
[2018-11-14T10:46:57,354][INFO ][o.e.t.TransportService   ] [q3IrN9m] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-11-14T10:47:00,413][INFO ][o.e.c.s.ClusterService   ] [q3IrN9m] new_master {q3IrN9m}{q3IrN9mDQrS8TkwchxdqUw}{o9D1RMJBTdWXBEzJlvF-nQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-11-14T10:47:00,448][INFO ][o.e.h.n.Netty4HttpServerTransport] [q3IrN9m] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-11-14T10:47:00,448][INFO ][o.e.n.Node               ] [q3IrN9m] started
[2018-11-14T10:47:00,450][INFO ][o.e.g.GatewayService     ] [q3IrN9m] recovered [0] indices into cluster_state
[2018-11-14T10:57:50,942][INFO ][o.e.n.Node               ] [q3IrN9m] stopping ...
[2018-11-14T10:57:50,965][INFO ][o.e.n.Node               ] [q3IrN9m] stopped
[2018-11-14T10:57:50,965][INFO ][o.e.n.Node               ] [q3IrN9m] closing ...
[2018-11-14T10:57:50,974][INFO ][o.e.n.Node               ] [q3IrN9m] closed
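The notable mismatch is in the TransportService line: the node publishes `127.0.0.1:9300` even though the config sets `network.host: 142.93.42.161`, so the running process apparently isn't reading the edited file (and, being alone on loopback, it elects itself master). A minimal Python sketch of that comparison, using the log line and the `network.host` value copied from above:

```python
import re

# Compare the publish_address reported in the node's log against the
# network.host value from elasticsearch.yml. If they differ, the running
# node is not using the edited configuration.
log_line = ("[2018-11-14T10:46:57,354][INFO ][o.e.t.TransportService   ] "
            "[q3IrN9m] publish_address {127.0.0.1:9300}, "
            "bound_addresses {[::1]:9300}, {127.0.0.1:9300}")
expected_host = "142.93.42.161"  # network.host from elasticsearch.yml

match = re.search(r"publish_address \{([\d.]+):\d+\}", log_line)
published = match.group(1)

print(published)                   # 127.0.0.1
print(published == expected_host)  # False
```

Seeing `False` here points at the config file location rather than the settings themselves (for example, the service reading a different `elasticsearch.yml` than the one edited).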