Node added to Elasticsearch 5.4 cluster is deleted when its terminal is closed

Hello sir,

I think you are not getting me.

I am running it like this:

./bin/elasticsearch -d -Ecluster.name=my_cluster -Enode.name=node_1

and it creates the node just fine,

but the issue is that when I close the terminal where I fired the above command,

my node gets deleted.

Can you share the logs?

Maybe try with nohup bin/elasticsearch... &
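
For example, a minimal sketch using the options from your first command (the output file name is just an example):

# run detached from the terminal; stdout/stderr go to node_1.out instead of nohup.out
nohup bin/elasticsearch -Ecluster.name=my_cluster -Enode.name=node_1 > node_1.out 2>&1 &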

nohup gives this error: "nohup: ignoring input and appending output to ‘nohup.out’"

About the logs: I already shared them with you, but you did not understand them, so let me know which specific log path you want.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I read here: Configuring Elasticsearch | Elasticsearch Guide [8.15] | Elastic
"The configuration files should contain settings which are node-specific"

and then I tried:

bin/elasticsearch -Epath.conf=/etc/elasticsearch/nodes/node_2.yml

and this is the output:

ERROR: no log4j2.properties found; tried [/etc/elasticsearch/nodes/node_2.yml] and its subdirectories

If you can let me know about this too, that would be a big help.

That's not how it works. Read: https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#_config_file_location
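
In short, path.conf must point to a directory containing both elasticsearch.yml and log4j2.properties, not to a single .yml file. For example (the directory name is illustrative):

bin/elasticsearch -Epath.conf=/etc/elasticsearch/node_2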

Also, I just realized that you have a small heap size. Did you change the memory settings?

So I can't create a separate yml file for each node?

And this is the heap size:

-Xms256m
-Xmx256m

I have to keep it like this; if I increase it, it gives an error.

Here is the log again.

Command:

bin/elasticsearch -Epath.data=/usr/share/elasticsearch/data/node_2 -Epath.logs=/var/log/elasticsearch/node_2 -Enode.name=node_2 -Enode.data=true -Enode.master=false

Log:

[2017-05-24T09:15:59,021][INFO ][o.e.n.Node               ] [node_2] initializing ...
[2017-05-24T09:15:59,109][INFO ][o.e.e.NodeEnvironment    ] [node_2] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [52.8gb], net total_space [59gb], spins? [unknown], types [rootfs]
[2017-05-24T09:15:59,109][INFO ][o.e.e.NodeEnvironment    ] [node_2] heap size [247.5mb], compressed ordinary object pointers [true]
[2017-05-24T09:15:59,110][INFO ][o.e.n.Node               ] [node_2] node name [node_2], node ID [H9ROPBHaT5GvR_DKj42W6w]
[2017-05-24T09:15:59,111][INFO ][o.e.n.Node               ] [node_2] version[5.4.0], pid[20903], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/3.10.0-514.16.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-05-24T09:16:00,205][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [aggs-matrix-stats]
[2017-05-24T09:16:00,205][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [ingest-common]
[2017-05-24T09:16:00,205][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-expression]
[2017-05-24T09:16:00,205][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-groovy]
[2017-05-24T09:16:00,205][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-mustache]
[2017-05-24T09:16:00,206][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-painless]
[2017-05-24T09:16:00,206][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [percolator]
[2017-05-24T09:16:00,206][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [reindex]
[2017-05-24T09:16:00,206][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [transport-netty3]
[2017-05-24T09:16:00,206][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [transport-netty4]
[2017-05-24T09:16:00,207][INFO ][o.e.p.PluginsService     ] [node_2] no plugins loaded
[2017-05-24T09:16:01,923][INFO ][o.e.d.DiscoveryModule    ] [node_2] using discovery type [zen]
[2017-05-24T09:16:02,568][INFO ][o.e.n.Node               ] [node_2] initialized
[2017-05-24T09:16:02,568][INFO ][o.e.n.Node               ] [node_2] starting ...
[2017-05-24T09:16:02,738][INFO ][o.e.t.TransportService   ] [node_2] publish_address {127.0.0.1:9303}, bound_addresses {[::1]:9303}, {127.0.0.1:9303}
[2017-05-24T09:16:05,919][INFO ][o.e.c.s.ClusterService   ] [node_2] detected_master {node_1}{ttz3YT1cSRmD-NLw4vO26g}{eHO5rYNGTQaqJYwxg6ZCsw}{127.0.0.1}{127.0.0.1:9300}, added {{node_1}{ttz3YT1cSRmD-NLw4vO26g}{eHO5rYNGTQaqJYwxg6ZCsw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-receive(from master [master {node_1}{ttz3YT1cSRmD-NLw4vO26g}{eHO5rYNGTQaqJYwxg6ZCsw}{127.0.0.1}{127.0.0.1:9300} committed version [11]])
[2017-05-24T09:16:05,975][INFO ][o.e.h.n.Netty4HttpServerTransport] [node_2] publish_address {127.0.0.1:9203}, bound_addresses {[::1]:9203}, {127.0.0.1:9203}
[2017-05-24T09:16:05,978][INFO ][o.e.n.Node               ] [node_2] started

Output of curl -XGET 'http://localhost:9200/_cluster/state?pretty':

{
  "cluster_name" : "cs",
  "version" : 11,
  "state_uuid" : "1src4cYpRnuX7x6DSANP6w",
  "master_node" : "ttz3YT1cSRmD-NLw4vO26g",
  "blocks" : { },
  "nodes" : {
    "ttz3YT1cSRmD-NLw4vO26g" : {
      "name" : "node_1",
      "ephemeral_id" : "eHO5rYNGTQaqJYwxg6ZCsw",
      "transport_address" : "127.0.0.1:9300",
      "attributes" : { }
    },
    "H9ROPBHaT5GvR_DKj42W6w" : {
      "name" : "node_2",
      "ephemeral_id" : "vpmX6_u1QLqTrqzR0fW7aA",
      "transport_address" : "127.0.0.1:9303",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "7JtoIEjOTSW9BdPj4sRlSQ",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "H9ROPBHaT5GvR_DKj42W6w" : [ ]
    }
  }
}

Up to here everything seems fine,

but now I close the terminal or stop the command via Ctrl+Z,

and then the node is deleted.

Output:

curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "cs",
  "version" : 12,
  "state_uuid" : "og9UQzt0QRS-Yx_jZAHVoQ",
  "master_node" : "ttz3YT1cSRmD-NLw4vO26g",
  "blocks" : { },
  "nodes" : {
    "ttz3YT1cSRmD-NLw4vO26g" : {
      "name" : "node_1",
      "ephemeral_id" : "eHO5rYNGTQaqJYwxg6ZCsw",
      "transport_address" : "127.0.0.1:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "7JtoIEjOTSW9BdPj4sRlSQ",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : { }
  }
}

You can. Just follow the explanations I linked to.

About memory and all that: why do you want to run more than one node if you can't give at least 1 GB per node?
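
For reference, the heap size lives in config/jvm.options. A minimal sketch, assuming your machine has the RAM to spare (both values must be identical):

# config/jvm.options (excerpt): set min and max heap to the same value
-Xms1g
-Xmx1g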

I have many GB that I can give it, but it gives me an error, as I explained earlier.

I have followed that; there is nothing there about closing the process removing the node.

Please highlight something.

bin/elasticsearch -Epath.data=/usr/share/elasticsearch/data/node_2 -Epath.logs=/var/log/elasticsearch/node_2 -Enode.name=node_2 -Enode.data=true -Enode.master=false

You didn't start it as a daemon. You did not use nohup...

Which error, please? This is all confusing, I'm afraid.

nohup gives me an error,

which I already posted:

nohup: ignoring input and appending output to ‘nohup.out’

About the memory error: let me generate it and post it.

No. nohup does not give an error AFAICT; that message is informational, not an error.

Elasticsearch has two configuration files:

elasticsearch.yml for configuring Elasticsearch, and
log4j2.properties for configuring Elasticsearch logging.

These files are located in the config directory, whose location defaults to $ES_HOME/config/. The Debian and RPM packages set the config directory location to /etc/elasticsearch/.

The location of the config directory can be changed with the path.conf setting, as follows:

./bin/elasticsearch -Epath.conf=/path/to/my/config/
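
So, to run several nodes with their own settings, you would give each node its own config directory containing both files. A sketch, with purely illustrative paths:

/etc/elasticsearch/node_1/elasticsearch.yml
/etc/elasticsearch/node_1/log4j2.properties
/etc/elasticsearch/node_2/elasticsearch.yml
/etc/elasticsearch/node_2/log4j2.properties

./bin/elasticsearch -Epath.conf=/etc/elasticsearch/node_1
./bin/elasticsearch -Epath.conf=/etc/elasticsearch/node_2

Each elasticsearch.yml then carries that node's specific settings (node.name, path.data, path.logs, and so on).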

Hello sir,

can you come on Skype so I can show you the problem in real time?

I tried my best to explain it to you.

Please come for 2 minutes and then we can solve the issue.

Thanks

No sorry. That's not possible.

OK, let me try one last time.

I can create a node successfully via the command; it works great,

and it also shows up in the results.

But when I restart the Elasticsearch service via

service elasticsearch restart

then only the node from elasticsearch.yml is there; the other nodes are deleted.

Hope this helps you.

Great.

But why are you doing that if you did not start a node from the command line?

I don't understand. What other nodes?


This is really confusing. I have no idea if anyone else understands what you are describing.
Sometimes you say that you want to start more than one node on the same machine. I'm asking why you want to do that, but you don't answer this question.

Then I'm telling you that you probably modified the JVM options, but you didn't really mention it at the beginning. I told you to leave the default JVM settings, and you told me that you can't because it generates an error, which you did not share although I asked for it.

So let me restart from the beginning, assuming that you are doing tests and not running production.

Remove all previous installations of Elasticsearch
Download the Elasticsearch tar.gz distribution
Uncompress it
Then run bin/elasticsearch

And that's all. If this is not working, please share your logs.
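
Concretely, something like this (the 5.4.0 download URL is from memory, so double-check it on the website):

# download and unpack a fresh tarball, then start a single node with default settings
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.0.tar.gz
tar -xzf elasticsearch-5.4.0.tar.gz
cd elasticsearch-5.4.0
bin/elasticsearch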

Again, if you are doing tests, just keep your terminal open until you have finished your test, then quit Elasticsearch and close the terminal.

If, after having one node running successfully, you want (for tests only) to run another node on your machine, run this in another terminal:

bin/elasticsearch -Epath.data=./data2 -Epath.logs=./logs2

That should be ok. Don't close the terminal.
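
You can then verify that both nodes joined the cluster with the _cat/nodes API, for example:

# prints one line per node currently in the cluster
curl 'http://localhost:9200/_cat/nodes?v'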


If what you want to do is totally different than a test or what I described, please tell us what you want to do.
I'm afraid I can't help if I don't know what problems you are trying to solve here.

Sir,

I had issues at the start of the chat,

then I solved many of them myself through research,

so clear all of that from your mind.

Now here is the issue:

when I reboot the server, the default node is there but the other nodes get deleted.

Why do I want to create multiple nodes?

I want to test storing data on multiple nodes and perform tests on them.

Why do I need to reboot the server?

I am not doing that now, but I am thinking about the future, when some update is performed on the server and a reboot is needed.

So that's my big problem: the data loss.

Please try to understand only this message and remove the old ones from your mind.

Thanks

Let's look at every point:

when I reboot the server, the default node is there but the other nodes get deleted.

What are the other nodes? You are supposed to launch only one node per physical machine unless you have more than 64 GB of RAM on your machine. Do you have more than 64 GB of RAM?

Why do I want to create multiple nodes?
I want to test storing data on multiple nodes and perform tests on them.

OK. So it's a test. Then just start the nodes from the command line as I explained,
and close them after your tests. You don't need to run Elasticsearch as a service in the context of tests.

Why do I need to reboot the server?
I am not doing that now, but I am thinking about the future, when some update is performed on the server and a reboot is needed.

In that case, install Elasticsearch as a service.
If you have a Debian-based system, read: Install Elasticsearch with Debian Package | Elasticsearch Guide [8.11] | Elastic
If you have a Red Hat-based system, read: Install Elasticsearch with RPM | Elasticsearch Guide [8.11] | Elastic

But don't install multiple Elasticsearch instances as services on one machine. It won't make sense.
Instead, start another physical server and install Elasticsearch as a service on it as well.

Then, any time you reboot your machine, Elasticsearch will be restarted, as it is a service.
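
Your logs show an el7 kernel, so assuming a systemd-based system and the RPM package, enabling the service on boot looks like this:

# make the packaged service start automatically on every boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service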