When adding a new node in Elasticsearch 5.4, the old node is deleted

Hello sir,

I do not have 64 GB of RAM.

I only have 4 GB.

I want to test with 3 data nodes and 1 master node, with real client data, so I want to have them all on one server, and if any reboot happens, it would not be lost.

About the reboot:

As per the log, the other data node left the master node, so only the master node is available.

Here is some of the log.

This log was generated after rebooting the server.

[2017-05-24T11:33:01,191][INFO ][o.e.n.Node               ] [node_2] version[5.4.0], pid[2465], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/3.10.0-514.16.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-05-24T11:33:02,340][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [aggs-matrix-stats]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [ingest-common]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-expression]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-groovy]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-mustache]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [lang-painless]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [percolator]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [reindex]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [transport-netty3]
[2017-05-24T11:33:02,341][INFO ][o.e.p.PluginsService     ] [node_2] loaded module [transport-netty4]
[2017-05-24T11:33:02,342][INFO ][o.e.p.PluginsService     ] [node_2] no plugins loaded
[2017-05-24T11:33:04,369][INFO ][o.e.d.DiscoveryModule    ] [node_2] using discovery type [zen]
[2017-05-24T11:33:04,994][INFO ][o.e.n.Node               ] [node_2] initialized
[2017-05-24T11:33:04,994][INFO ][o.e.n.Node               ] [node_2] starting ...
[2017-05-24T11:33:05,153][INFO ][o.e.t.TransportService   ] [node_2] publish_address {127.0.0.1:9301}, bound_addresses {[::1]:9301}, {127.0.0.1:9301}
[2017-05-24T11:33:08,387][INFO ][o.e.c.s.ClusterService   ] [node_2] detected_master {node_1}{xs6GUEkCQ9GE9xhUfQPk-A}{e_0Cx2dlRxiFF1ILVbF-fw}{127.0.0.1}{127.0.0.1:9300}, added {{node_1}{xs6GUEkCQ9GE9xhUfQPk-A}{e_0Cx2dlRxiFF1ILVbF-fw}{127.0.0.1}{127.0.0.1:9300},}, reason: zen-disco-receive(from master [master {node_1}{xs6GUEkCQ9GE9xhUfQPk-A}{e_0Cx2dlRxiFF1ILVbF-fw}{127.0.0.1}{127.0.0.1:9300} committed version [3]])
[2017-05-24T11:33:08,393][INFO ][o.e.c.s.ClusterSettings  ] [node_2] updating [cluster.routing.allocation.enable] from [ALL] to [none]
[2017-05-24T11:33:08,432][INFO ][o.e.h.n.Netty4HttpServerTransport] [node_2] publish_address {127.0.0.1:9201}, bound_addresses {[::1]:9201}, {127.0.0.1:9201}
[2017-05-24T11:33:08,434][INFO ][o.e.n.Node               ] [node_2] started
[2017-05-24T11:33:35,140][INFO ][o.e.d.z.ZenDiscovery     ] [node_2] master_left [{node_1}{xs6GUEkCQ9GE9xhUfQPk-A}{e_0Cx2dlRxiFF1ILVbF-fw}{127.0.0.1}{127.0.0.1:9300}], reason [transport disconnected]
[2017-05-24T11:33:35,142][WARN ][o.e.d.z.ZenDiscovery     ] [node_2] master left (reason = transport disconnected), current nodes: nodes: 
   {node_2}{H9ROPBHaT5GvR_DKj42W6w}{Y9tcoUChTpWJbwnpVYXlyQ}{127.0.0.1}{127.0.0.1:9301}, local
   {node_1}{xs6GUEkCQ9GE9xhUfQPk-A}{e_0Cx2dlRxiFF1ILVbF-fw}{127.0.0.1}{127.0.0.1:9300}, master

No. Don't do that. You only have 4 GB. You can't run 4 nodes with 3 GB of heap each.
That is not going to work, and you would just be simulating problems that you are not going to have in production on real servers.

Just run one node.

it would not be lost

Oh, BTW, data is never lost. It is somewhere on your hard disk, waiting for you to start the nodes again.

If I increase the server RAM to 64 GB, then can I do that?

Because I want to do this setup anyhow, so it will be good for the future.

Do you mean to add the node again via the command line, and then change the node id?

Maybe. But why would you upgrade to 64 GB instead of running 4 physical servers with 4 GB or 8 GB each?

it will be good for the future

If you mean that it will be good for production usage, the answer is still "no, don't do that".

If you have one server with 64 GB or less of RAM, then run one Elasticsearch instance with half of the memory allocated to the heap.
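For example, with the 5.x packaging the heap is set in config/jvm.options. This is only a sketch: the 31g value below assumes a 64 GB machine and stays just under 32 GB so the JVM keeps using compressed object pointers; adjust it to roughly half of your own RAM.

    # config/jvm.options (sketch; roughly half of the machine's RAM,
    # kept below ~32 GB to preserve compressed object pointers)
    -Xms31g
    -Xmx31g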

Do you mean to add the node again via the command line, and then change the node id?

It means: start the node again via the same command line you used previously.
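As an illustration only (the node name, ports, and path below are placeholders, not your actual settings), restarting node_2 with the same options, and in particular the same path.data, brings it back with the node id it already had, since in 5.x the node id is persisted in the data directory:

    # sketch with placeholder values; reuse whatever settings you started the node with before
    ./bin/elasticsearch -Enode.name=node_2 \
      -Epath.data=/path/to/data/node_2 \
      -Ehttp.port=9201 \
      -Etransport.tcp.port=9301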

Is it possible to recover the node that left via the command line, without changing its id? Because I am storing that id in a DB for queries and other per-customer logic.

If I use 64 GB of RAM, then how many nodes can I create on one server?

I need it on one server because I want to create a dynamic process that creates a node per customer in a single click and saves development time.

Yes. All data is on disk in the directory that you set with path.data.
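For reference, that directory is configured in elasticsearch.yml; the path below is just an example (the default depends on how Elasticsearch was installed):

    # config/elasticsearch.yml (example value)
    path.data: /var/lib/elasticsearch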

In production? One node.

Maybe just start one server with one Elasticsearch node as a service. Let it run.
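If it was installed from the RPM or DEB package (your log shows an el7 kernel, so systemd is a reasonable assumption), that would be something like:

    # assuming the RPM/DEB package on a systemd-based distribution
    sudo systemctl enable elasticsearch.service
    sudo systemctl start elasticsearch.service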

Then, as Elasticsearch is multi-tenant, just create and drop indices as you need them.
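So instead of one node per customer, you could keep one index per customer on that single node. A rough sketch with the REST API (the index name customer_42 is made up):

    # create an index for a new customer (the name is only an example)
    curl -XPUT 'http://localhost:9200/customer_42?pretty'
    # drop it when the customer leaves
    curl -XDELETE 'http://localhost:9200/customer_42?pretty'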

By the way, maybe you would like to have us run that for you? If so, have a look at https://cloud.elastic.co/, where you can have a managed instance of Elasticsearch (+ Kibana, + X-Pack).

I know that, but it will change the node id if I use the command.

I know about this service, but I want to get it done on my own server, as research.

Not sure what you mean, or why you want this.

Hello sir,

Thanks for the replies so far.

One more question about the list of nodes that left the master:

Don't you think that is a ping timeout issue or some other TCP issue?

I don't know.
