Single Node Cluster

In my local Elasticsearch setup, if I set
discovery.zen.ping.multicast.enabled: false
Elasticsearch does not start.

After checking https://github.com/elastic/elasticsearch/issues/22909, I removed the setting.

Now, how do I make a single-node cluster?
Below are my elasticsearch.yml contents.

cluster.name: my_cluster
node.name: node-1
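For reference, a minimal sketch of a single-node elasticsearch.yml on 5.x looks like the following; the network.host line is an illustrative assumption (it is already the default), and no discovery settings are needed at all for one node:

cluster.name: my_cluster
node.name: node-1
# bind HTTP and transport to loopback only (already the 5.x default)
network.host: 127.0.0.1
# no discovery.zen.* settings are needed for a single node;
# multicast discovery no longer exists in 5.x, which is why that setting prevents startup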

If I try to access any index without the cluster.name appended, it works.
However, if I append it, I get an "index not found" error.

What configuration changes need to be made in order to access the index with the cluster.name appended? Any information is helpful.
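To illustrate what I am seeing (the index name hello_cluster is the one used in the URLs further below):

# works: plain index name
curl -XGET 'http://localhost:9200/hello_cluster/_search?pretty'

# fails with an index not found error: cluster name prefixed
curl -XGET 'http://localhost:9200/my_cluster:hello_cluster/_search?pretty'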

My use case is to check how my application fares with cluster queries.

My shard count is the default of 5.
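As a side note, on a single node an index can also be created with zero replicas up front so that it never goes yellow; a sketch for a fresh index (the index name here is just an example):

curl -XPUT 'http://localhost:9200/hello_cluster' -d '
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  }
}'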

I am using ELK version 5.4.0.

Below is my cluster health report:

curl -XGET http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "my_cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
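The yellow status here simply means the replica copies of the five primary shards have no second node to go to; this can be confirmed with the cat shards API, which lists them as UNASSIGNED:

curl -XGET 'http://localhost:9200/_cat/shards?v'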

I need a single-node cluster up and running for local testing purposes.

After checking https://stackoverflow.com/questions/19967472/elasticsearch-unassigned-shards-how-to-fix, I turned the cluster status from yellow to green by running the following:

curl -XPUT 'localhost:9200/_settings' -d '
{
    "index" : {
        "number_of_replicas" : 0
    }
}'
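That call only updates indices that already exist; to have new indices default to zero replicas as well, an index template along these lines should work on 5.x (the template name and the catch-all pattern are my own choices, not anything required):

curl -XPUT 'localhost:9200/_template/zero_replicas' -d '
{
  "template": "*",
  "settings": {
    "number_of_replicas": 0
  }
}'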

Below is my latest cluster state:

curl -XGET http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "my_cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

I am still unable to access the index with the cluster.name prefix:

http://127.0.0.1:9200/my_cluster:hello_cluster/ 

I am new to Elasticsearch; am I doing something incorrect here?

This is the expected behaviour. The URI you want is http://127.0.0.1:9200/<INDEX NAME>; there is no cluster name in there. Can you explain why you think this should work? Is it mentioned in some documentation somewhere, for instance?

As per the cross-cluster search documentation (https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html), cross-cluster search requires the cluster name to be prefixed to the index name.
So I created a single cluster, prefixed the cluster.name to the index name, and searched as below.

http://127.0.0.1:9200/my_cluster:hello_cluster/

The documentation you linked pertains to cross-cluster search: it is for when you have multiple clusters and you want to search across them all at once. It doesn't sound like this is what you are trying to do. If you are trying this out then, as it says at the top:

Cross-cluster search requires configuring remote clusters.
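(For completeness, in 5.x the remote clusters can be registered either dynamically through the cluster settings API or statically in elasticsearch.yml; the alias and address below are placeholders:)

search.remote.some_remote_cluster.seeds: ["10.0.0.5:9300"]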

OK, what I thought was that Elasticsearch knows the details of the cluster to which we are sending the request, so I appended the home cluster name and tried it that way (similar to remote clusters).

So for the cluster that the request is sent to, its own cluster name should not be prefixed (it acts as something like a home cluster), and names need to be added only for the remote clusters (if they are to be searched).

Or, alternatively, can we configure in Elasticsearch the information for all the clusters (including the home cluster and the remote clusters) on every cluster? Does this work? Is it a proper way to configure it? Something like below:

PUT http://127.0.0.1:9200/_cluster/settings
Accept: */*
Cache-Control: no-cache
Content-type: application/json

{
  "persistent": {
    "search.remote": {
      "my_cluster": {
        "seeds": ["127.0.0.1:9300"]
      }
    }
  }
}
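My expectation is that, if this registration is accepted, the prefixed form would then resolve through the remote-cluster machinery of the same node (assuming the default transport port 9300), e.g.:

curl -XGET 'http://localhost:9200/my_cluster:hello_cluster/_search?pretty'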

I'm confused about what you're trying to achieve. You access local indices with http://127.0.0.1:9200/<INDEX NAME>. If you have multiple clusters and have configured cross-cluster search correctly then you access remote indices by including the name of the remote cluster. But you seem to want to access local indices as if they were remote ones?

OK. I want my cluster search query URI format to be the same whether the target is the local or a remote cluster. Something like below:

http://127.0.0.1:9200/HomeCluster:<INDEX NAME>,RemoteCluster:<INDEX NAME>/_search

Hence the concern.
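For example, with the local cluster registered as above and a second cluster registered under a hypothetical alias RemoteCluster, a single query of this shape would cover both:

curl -XGET 'http://localhost:9200/my_cluster:hello_cluster,RemoteCluster:some_index/_search?pretty'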
