Elasticsearch service suddenly stops a few minutes after starting

Hi,
I built an Elasticsearch cluster: 1 master, 1 Logstash, 1 Elasticsearch data node. I found this error when I start the Elasticsearch service on the Logstash server.

Actually I have another problem too: my index resets every 2 days (sometimes 1), so I lose the past days' data in Kibana Discover. I have already searched everywhere and still can't solve it. I hope you can help me with this :'

Anyway, here's the error log for Elasticsearch on the Logstash server:

s/0/indices/XSJ3HP2BQqC55Qa9lzPrAQ/_state, /var/lib/elasticsearch/nodes/0/indices/fowk7hO4TfW80Mdoy9auzA/_state, /var/lib/elasticsearch/nodes/0/indices/DPgsIHlgR-OYmIqcI-C6lw/_state, /var/lib/elasticsearch/nodes/0/indices/rIw08c6qSPmjbic9us9z_g/_state, /var/lib/elasticsearch/nodes/0/indices/pdkXTF0xRWq2DjwjiNYgIw/_state]. Use 'elasticsearch-node repurpose' tool to clean up
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.8.1.jar:7.8.1]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.8.1.jar:7.8.1]
Caused by: java.lang.IllegalStateException: Node is started with node.data=false and node.master=false, but has index metadata: [/var/lib/elasticsearch/nodes/0/indices/SxJ56RwDQVO19AVlGadfMA/_state, /var/lib/elasticsearch/nodes/0/indices/z6eQJlmNSEKfk3RUhV0h6Q/_state, /var/lib/elasticsearch/nodes/0/indices/ibsNQm64SlKVztrnJczvtg/_state, /var/lib/elasticsearch/nodes/0/indices/MtDX-cinQ-qA1Km86nLGYw/_state, /var/lib/elasticsearch/nodes/0/indices/N693jJcMSwaxXg7Tzn0VEw/_state, /var/lib/elasticsearch/nodes/0/indices/cyvU29w4S8WpZC4NbAX9lw/_state, /var/lib/elasticsearch/nodes/0/indices/qFtM2--cQGudVHFSPxVjig/_state, /var/lib/elasticsearch/nodes/0/indices/TnCuQlCvSyGlFDgicvZQVQ/_state, /var/lib/elasticsearch/nodes/0/indices/GHsxoJg8QAGBjOonmXnTxA/_state, /var/lib/elasticsearch/nodes/0/indices/DPITBErkQP6Xq34ideK44A/_state, /var/lib/elasticsearch/nodes/0/indices/bHbxDimmTA6K4qXHKeDMtQ/_state, /var/lib/elasticsearch/nodes/0/indices/NlmtFAVfTDiyUnZEORGkmw/_state, /var/lib/elasticsearch/nodes/0/indices/_yn_nMquSYq9RWB36hd--A/_state, /var/lib/elasticsearch/nodes/0/indices/-eX4z-qsQYu8dgR8w4kfjw/_state, /var/lib/elasticsearch/nodes/0/indices/hdh5H96lSlymlwifI10hhw/_state, /var/lib/elasticsearch/nodes/0/indices/ZttrJQsFRZSzUq7U-7i_Xg/_state, /var/lib/elasticsearch/nodes/0/indices/ekcTij0HTu-7FOr6Au526w/_state, /var/lib/elasticsearch/nodes/0/indices/i53Wxz86TCux6sYwaWMeHg/_state, /var/lib/elasticsearch/nodes/0/indices/X5f8E8SDTlGbrb70O6zEKg/_state, /var/lib/elasticsearch/nodes/0/indices/XSJ3HP2BQqC55Qa9lzPrAQ/_state, /var/lib/elasticsearch/nodes/0/indices/fowk7hO4TfW80Mdoy9auzA/_state, /var/lib/elasticsearch/nodes/0/indices/DPgsIHlgR-OYmIqcI-C6lw/_state, /var/lib/elasticsearch/nodes/0/indices/rIw08c6qSPmjbic9us9z_g/_state, /var/lib/elasticsearch/nodes/0/indices/pdkXTF0xRWq2DjwjiNYgIw/_state]. Use 'elasticsearch-node repurpose' tool to clean up
        at org.elasticsearch.env.NodeEnvironment.ensureNoIndexMetadata(NodeEnvironment.java:1097) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:323) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.node.Node.<init>(Node.java:335) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.node.Node.<init>(Node.java:266) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) ~[elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.8.1.jar:7.8.1]
        ... 6 more
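The exception itself points at the fix: that node is configured as coordinating-only (`node.master: false`, `node.data: false`), but its data path still holds index metadata from an earlier run when it had the data role. A sketch of the cleanup the error suggests, assuming the default package install path and systemd service name (run it only with the node stopped, and note it permanently deletes the leftover index data on that node, so make sure those shards exist elsewhere in the cluster first):

```shell
# Stop the node first; elasticsearch-node must not run against a live node.
sudo systemctl stop elasticsearch

# Remove on-disk index/shard data that no longer matches the node's roles.
# This DELETES the listed index metadata on this node — verify the data is
# held by other nodes (or snapshot it) before confirming the prompt.
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-node repurpose

sudo systemctl start elasticsearch
```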

We aren't all guys :slight_smile:

Can you please post more of the Elasticsearch log, and also your elasticsearch.yml file.

What do you mean by reset? What is in your Elasticsearch logs?
Is your cluster exposed to the internet by chance?

Sorry, we're used to saying "guys" for everyone here XD

Here's the log from the master:

[2020-12-01T12:12:36,418][WARN ][o.e.c.r.a.AllocationService] [elastic-master] [filebeat-7.7.1-2020.12.01][0] marking unavailable shards as stale: [qq4wtXlcSu-P1ua_QM00Rg]
[2020-12-01T12:13:04,712][INFO ][o.e.c.r.DelayedAllocationService] [elastic-master] scheduling reroute for delayed shards in [212.2micros] (7 delayed shards)
[2020-12-01T12:13:50,156][ERROR][o.e.x.i.IndexLifecycleRunner] [elastic-master] policy [filebeat] for index [filebeat-7.8.1-2020.12.01] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat] does not point to index [filebeat-7.8.1-2020.12.01]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104) [x-pack-core-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211) [x-pack-core-7.8.1.jar:7.8.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-12-01T12:23:50,157][INFO ][o.e.x.i.IndexLifecycleRunner] [elastic-master] policy [filebeat] for index [filebeat-7.8.1-2020.12.01] on an error step due to a transitive error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [13]
[2020-12-01T12:33:50,156][ERROR][o.e.x.i.IndexLifecycleRunner] [elastic-master] policy [filebeat] for index [filebeat-7.8.1-2020.12.01] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat] does not point to index [filebeat-7.8.1-2020.12.01]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104) [x-pack-core-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267) [x-pack-ilm-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211) [x-pack-core-7.8.1.jar:7.8.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-12-01T12:43:50,156][INFO ][o.e.x.i.IndexLifecycleRunner] [elastic-master] policy [filebeat] for index [filebeat-7.8.1-2020.12.01] on an error step due to a transitive error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [14]
[2020-12-01T12:43:58,036][WARN ][o.e.t.TcpTransport       ] [elastic-master] invalid internal transport message format, got (3,0,0,2f), [Netty4TcpChannel{localAddress=/192.168.27.41:9300, remoteAddress=/94.102.50.103:64000}], closing connection

Here's the log from the data node:

1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:109) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.runWithPrimaryShardReference(TransportReplicationAction.java:374) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.lambda$doRun$0(TransportReplicationAction.java:297) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$wrapPrimaryOperationPermitListener$24(IndexShard.java:2802) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.ActionListener$3.onResponse(ActionListener.java:113) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:285) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:237) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationPermit(IndexShard.java:2776) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryOperationPermit(TransportReplicationAction.java:836) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:293) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.handlePrimaryRequest(TransportReplicationAction.java:256) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) [x-pack-security-7.8.1.jar:7.8.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) [x-pack-security-7.8.1.jar:7.8.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:801) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:695) [elasticsearch-7.8.1.jar:7.8.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.8.1.jar:7.8.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-12-01T12:12:04,761][INFO ][o.e.c.s.ClusterApplierService] [elastic-beta] removed {{elastic-alpha}{U4P7Z35hR26j9Qac0cJ9pw}{UN2tdbCPQYe-9E5HPWTEhw}{192.168.27.42}{192.168.27.42:9300}{dlt}{ml.machine_memory=8200876032, ml.max_open_jobs=20, xpack.installed=true, data=warm, transform.node=true}}, term: 17, version: 18046, reason: ApplyCommitRequest{term=17, version=18046, sourceNode={elastic-master}{tiPY7f94T5CzZASR0QD6vA}{QdhWIPdMTH23oUy_WMS4tw}{192.168.27.41}{192.168.27.41:9300}{lm}{ml.machine_memory=8200876032, ml.max_open_jobs=20, xpack.installed=true, data=warm, transform.node=false}}

My master node's elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elastic-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: elastic-master
node.master: true
node.voting_only: false
node.data: false
node.ingest: false
cluster.remote.connect: false
node.attr.data: "warm"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.27.41
network.publish_host: 192.168.27.41
#
transport.host: 192.168.27.41
#
# Set a custom port for HTTP:
#
http.port: 9200
#
transport.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Here's the config from the data node:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elastic-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: elastic-alpha
node.master: false
node.voting_only: false
node.data: true
node.ingest: false
cluster.remote.connect: false
node.attr.data: "warm"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.27.42
#network.publish_host: 192.168.27.41
#
transport.host: 192.168.27.42
#
# Set a custom port for HTTP:
#
http.port: 9200
#
transport.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
#discovery.zen.minimum_master_nodes: 2
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Here's the config from the last node:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elastic-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: elastic-beta
node.master: false
node.voting_only: false
node.data: true
node.ingest: true
cluster.remote.connect: false
node.attr.data: "warm"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.27.43
#network.publish_host: 192.168.27.41
#
transport.host: 192.168.27.43
#
# Set a custom port for HTTP:
#
http.port: 9200
#
transport.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["192.168.27.41", "192.168.27.42", "192.168.27.43"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
#discovery.zen.minimum_master_nodes: 2
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

By "reset" I mean that every 1 or 2 days (it varies), the indices holding my log data are suddenly deleted, and I don't know why. So the log data from past days can no longer be seen in Discover. When I open Index Management it shows the ILM error. I already tried to fix it, but it still errors.
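On the ILM error in the master log: the policy's `check-rollover-ready` step requires that `index.lifecycle.rollover_alias` (`filebeat`) is an alias pointing at the managed index, but `filebeat-7.8.1-2020.12.01` appears to have been created directly rather than via rollover, so the alias doesn't point at it. A sketch of one way to satisfy the check, using the index, alias, and host taken from the logs and config above (add credentials with `-u` if security requires them):

```shell
# Point the 'filebeat' rollover alias at the existing index and mark it
# as the write index, so check-rollover-ready can evaluate successfully.
curl -X POST "http://192.168.27.41:9200/_aliases" \
  -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "add": { "index": "filebeat-7.8.1-2020.12.01",
               "alias": "filebeat",
               "is_write_index": true } }
  ]
}'

# Then tell ILM to retry the step that is stuck in the ERROR state:
curl -X POST "http://192.168.27.41:9200/filebeat-7.8.1-2020.12.01/_ilm/retry"
```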

Haiii

I'm a little confused here, as none of the configs you posted set both node.data: false and node.master: false.


If you are just starting out with Elasticsearch I would recommend setting up a single node, which by default has all roles. There is no need for dedicated node roles in a very small cluster, IMHO. Your configuration also seems strange: I believe you have 2 Elasticsearch nodes, but you list three hosts in the config, and you list nodes that are not master-eligible in cluster.initial_master_nodes. If you want high availability later on, add two more nodes with all roles.
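For reference, a minimal single-node elasticsearch.yml along those lines might look like this (a sketch; the node name is made up, and the IP/paths are taken from the master config above — adjust for your host):

```yaml
cluster.name: elastic-cluster
node.name: elastic-single        # hypothetical name; all roles by default
network.host: 192.168.27.41
# Single-node discovery: the node elects itself, no seed hosts and no
# cluster.initial_master_nodes needed.
discovery.type: single-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
```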

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.