Index getting deleted from both clusters when deleted from one

Hi,

In my current system I am using a single Logstash configuration to write to two different Elasticsearch clusters, one on version 7.5.1 and another on version 7.17.7, using the following configuration file:

input {
    file {
        start_position => "beginning"
        path => "C:/Softwares/tools/spring.log"
        sincedb_path => "NUL"
    }
}



filter {
    json {
        source => "message"
    }
}



output {
    elasticsearch {
        hosts => ["http://localhost:9201","http://localhost:9203"]
        index => "application_logs_1"
    }
    stdout {}
}

OR

input {
    file {
        start_position => "beginning"
        path => "C:/Softwares/tools/spring.log"
        sincedb_path => "NUL"
    }
}



filter {
    json {
        source => "message"
    }
}



output {
    elasticsearch {
        hosts => ["http://localhost:9201"]
        index => "application_logs_1"
    }
    elasticsearch {
        hosts => ["http://localhost:9203"]
        index => "application_logs_2"
    }	
    stdout {}
}

When I delete one of the indices using a Postman request to delete an index, that index is deleted from both clusters.
Why is it getting deleted from both Elasticsearch clusters?
And is there any way we can restrict this?

This is not possible; clusters are independent of each other.

Do you really have two independent clusters or do you have two nodes that are in the same cluster?

Can you provide more context? Please share the elasticsearch.yml from both of your servers.

Another thing is this configuration:

    elasticsearch {
        hosts => ["http://localhost:9201","http://localhost:9203"]
        index => "application_logs_1"
    }

This configuration assumes that the Elasticsearch instances on ports 9201 and 9203 are nodes of the same cluster, not different clusters; the output load-balances requests across all listed hosts.

So this complete setup is on my local machine; it is just that I am running them on different ports.

-----> for the old cluster I have version 7.5.1, running on port 9201

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9201
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

-----> for the new cluster I have version 7.17.7, running on port 9203

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9203
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

What should my configuration be if I wish to configure these two as separate, independent clusters?
Should it look like this?


output {
    elasticsearch {
        hosts => ["http://localhost:9201"]
        index => "application_logs_1"
    }
    elasticsearch {
        hosts => ["http://localhost:9203"]
        index => "application_logs_2"
    }	
    stdout {}
}

First you need to confirm that you indeed have two different clusters. You said that a request to delete an index in one cluster deletes it in the other, but this is not possible with Elasticsearch, so something is wrong.

How did you install Elasticsearch on your server, and how are you running it? Do you have Kibana installed?

Make the following requests to your clusters and share the results.

GET http://localhost:9201/_cluster/health?pretty
GET http://localhost:9201/_cat/health?v

And

GET http://localhost:9203/_cluster/health?pretty
GET http://localhost:9203/_cat/health?v
GET http://localhost:9201/_cluster/health?pretty

{
    "cluster_name": "elasticsearch",
    "status": "yellow",
    "timed_out": false,
    "number_of_nodes": 2,
    "number_of_data_nodes": 2,
    "active_primary_shards": 1,
    "active_shards": 1,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 1,
    "delayed_unassigned_shards": 0,
    "number_of_pending_tasks": 0,
    "number_of_in_flight_fetch": 0,
    "task_max_waiting_in_queue_millis": 0,
    "active_shards_percent_as_number": 50.0
}

GET http://localhost:9201/_cat/health?v

epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1668687350 12:15:50  elasticsearch yellow          2         2      1   1    0    0        1             0                  -                 50.0%


and


GET http://localhost:9203/_cluster/health?pretty

{
    "cluster_name": "elasticsearch",
    "status": "yellow",
    "timed_out": false,
    "number_of_nodes": 2,
    "number_of_data_nodes": 2,
    "active_primary_shards": 1,
    "active_shards": 1,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 1,
    "delayed_unassigned_shards": 0,
    "number_of_pending_tasks": 0,
    "number_of_in_flight_fetch": 0,
    "task_max_waiting_in_queue_millis": 0,
    "active_shards_percent_as_number": 50.0
}


GET http://localhost:9203/_cat/health?v

epoch      timestamp cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1668687447 12:17:27  elasticsearch yellow          2         2      1   1    0    0        1             0                  -                 50.0%

See, you do not have two independent clusters; you have just one cluster composed of your two nodes.

The cluster name is the same, and both requests show that you have 2 nodes.
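You can also list the nodes directly to confirm this (assuming security is not enabled, as in the configs you shared):

GET http://localhost:9201/_cat/nodes?v

Since both nodes belong to the same cluster, this should return two rows, one per node.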

You will need to recreate both of your clusters and set a different cluster.name, node.name, and path.data for each of them in your elasticsearch.yml.
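As an illustrative sketch (the cluster names, node names, and data paths below are hypothetical; adjust them to your environment), the elasticsearch.yml of the 7.5.1 node could contain:

cluster.name: logs-cluster-old
node.name: node-old
path.data: C:/Softwares/tools/es-7.5.1/data
http.port: 9201
discovery.type: single-node

and the 7.17.7 node:

cluster.name: logs-cluster-new
node.name: node-new
path.data: C:/Softwares/tools/es-7.17.7/data
http.port: 9203
discovery.type: single-node

Setting discovery.type: single-node makes each node form its own single-node cluster instead of discovering the other node on localhost.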
