Curl calls for PUT and GET queries on a multi-node ES cluster

Hi Team,

I would like to know, on a multi-node Elasticsearch cluster (3 nodes), which node we should send a PUT curl call to (say, for creating an ILM policy), or run a query against to fetch results.

In a single-node cluster it is clear that we need to send all create and GET requests to that single node, but what about the case of a multi-node cluster?

Below is an example for a single-node cluster, using Ansible to create an ILM policy. What should we do in the case of a multi-node cluster?

- name: creating ILM policy 
  uri:
    method: PUT
    url: "http://{{ elasticsearch1_private_ip }}:{{ elasticsearch_port }}/_ilm/policy/testpolicy"
    body: "{{ lookup('file', file + '/policy.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200
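For reference, a minimal sketch of the multi-node variant (assuming a hypothetical `elasticsearch_private_ips` list variable holding the three node IPs; all other variables as in the task above) would send the same PUT to any one node, since the ILM policy becomes part of the cluster state and is replicated to every node:

```yaml
# Sketch only: elasticsearch_private_ips is a hypothetical list of the
# three node IPs; any element works, because the ILM policy is stored
# in the cluster state shared by all nodes.
- name: creating ILM policy
  uri:
    method: PUT
    url: "http://{{ elasticsearch_private_ips | first }}:{{ elasticsearch_port }}/_ilm/policy/testpolicy"
    body: "{{ lookup('file', file + '/policy.json') }}"
    body_format: json
    user: "{{ elasticsearch_username }}"
    password: "{{ elasticsearch_password }}"
    status_code: 200
```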

GET request -

For fetching query results, I read somewhere in the Elasticsearch documentation (I am not able to find it now) that we should not send requests directly to a particular ES node, and should instead send them to a load balancer of some kind that forwards each request to one of the ES nodes at the backend.

Create request -

Can we send the PUT curl call to any one of the ES nodes, and will it create the ILM policy for the whole cluster? i.e. will that node take care of making the other two aware of this change, so that it spreads cluster-wide?

Thanks,

You can use any node at all for this. And because all the nodes are clustered, changes apply to every node in said cluster.
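As a concrete sketch (hostname and credentials below are placeholders), the ILM PUT from the Ansible task above could go to any of the three nodes:

```
# Placeholder host and credentials; any of the three nodes accepts this.
# The policy is stored in the cluster state, so it is immediately
# visible from the other two nodes as well.
curl -u "$ES_USER:$ES_PASS" -X PUT \
  "http://es-node-1.example.com:9200/_ilm/policy/testpolicy" \
  -H 'Content-Type: application/json' \
  -d @policy.json
```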

Hi @warkolm,

Thanks for the prompt response.

What you said is correct for creating something in Elasticsearch.

Is what you said above also true for GET requests?

Thanks,

Yes.
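In other words (sketch, with placeholder host and index name), a search can be sent to any node; the node that receives it acts as the coordinating node and gathers results from the relevant shards wherever they live:

```
# Placeholder host; this node coordinates the search even if the
# index's shards are held on the other nodes.
curl -u "$ES_USER:$ES_PASS" \
  "http://es-node-2.example.com:9200/testindex/_search?q=*"
```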

@warkolm

Thank you.

Hi @warkolm,

Adding to the above: I have deployed ES as a multi-node cluster, and this is how the Logstash pipeline configuration file is set up.

input {
  beats {
    port => 5044
  }
}
filter {
if [log_type] == "access_server" and [app_id] == "pa"
  {
    grok { match => { "message" => "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:%{MINUTE}(?::?%{SECOND})\| %{USERNAME:exchangeId}\| %{DATA:trackingId}\| %{NUMBER:RoundTrip:int}%{SPACE}ms\| %{NUMBER:ProxyRoundTrip:int}%{SPACE}ms\| %{NUMBER:UserInfoRoundTrip:int}%{SPACE}ms\| %{DATA:Resource}\| %{DATA:subject}\| %{DATA:authmech}\| %{DATA:scopes}\| %{IPV4:Client}\| %{WORD:method}\| %{DATA:Request_URI}\| %{INT:response_code}\| %{DATA:failedRuleType}\| %{DATA:failedRuleName}\| %{DATA:APP_Name}\| %{DATA:Resource_Name}\| %{DATA:Path_Prefix}" } }
    mutate {
             replace => {
               "[type]" => "access_server"
             }
           }
  }
}

output {
  if [log_type] == "access_server" {
    elasticsearch {
      hosts => ['http://10.10.10.242:9200', 'http://10.10.10.243:9200', 'http://10.10.10.244:9200']
      user => elastic
      password => "${es_pwd}"
      index => "access"
      template_name => "access"
      template_overwrite => "false"
    }
  }
  elasticsearch {
    hosts => ['http://10.10.10.242:9200', 'http://10.10.10.243:9200', 'http://10.10.10.244:9200']
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM}"
    user => elastic
    password => "${es_pwd}"
  }
}

I have tested a similar config file on a single ES server (i.e. only a single server was mentioned in hosts => and data was getting indexed), but on the current multi-node cluster, after deploying ES, data is not getting indexed.

Is mentioning all three ES hosts wrong? Do I only need to mention one, as you said above?

Hi @warkolm,

Can you give an update on the last question? Thanks.

Please start a new topic for that question.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.