Adding more nodes to cluster slows down search performance

Hi,

Summary
I have 2 questions

  1. When I add more nodes to the cluster, simple search performance slows down. Can you please help me solve this problem? I shared details at the bottom.

I want to know whether Elasticsearch can handle 50K simple term-search requests per second (50K concurrent clients, one search each) against 50 MB of data (roughly 300K rows), so I am trying to test it. I also want to use it as a cache and a search engine at the same time.

I use the following command for the load test. The loadtest servers are separate machines from the Elasticsearch nodes.

loadtest -n 50000 -c 2000 -k "http://myserver:9200/myindex/_doc/_search?_source=false&request_cache=true&q=kategori_Id:12"

  2. If the server hosting my master node goes down but the others stay up in the cluster, how do clients find the other servers? For example, I have servers A, B, and C, and Elasticsearch crashes on A (the whole server is down, not just the Elasticsearch process). My clients are using A's IP, so how do they discover the other nodes and continue operating?

Details

A) Given

A1. Configuration

  1. I created a 3-node Elasticsearch cluster on one of the well-known cloud platforms. Each node has 2 CPUs and 7.5 GB of memory.

  2. I updated jvm.options with -Xms4g -Xmx4g on all nodes

  3. I updated /etc/elasticsearch/default on all nodes as follows:

    MAX_LOCKED_MEMORY=unlimited
    MAX_OPEN_FILES=131070
    MAX_MAP_COUNT=262144
    
  4. I updated elasticsearch.yml on all nodes as follows:

     index.store.preload: ["nvd", "dvd", "tim", "doc", "dim"]
     indices.requests.cache.size: 512M
     indices.queries.cache.size: 512M
     indices.fielddata.cache.size: 512M
    
     thread_pool:
         search:
             size: 600
             queue_size: 10000
             min_queue_size: 10000
             max_queue_size: 50000
             auto_queue_frame_size: 200
             target_response_time: 1s
    
     cluster.routing.allocation.disk.threshold_enabled: false
     bootstrap.memory_lock: true
    

A2. Data

  1. I created the index with the following mappings and settings

      "name" : {
      "type" : "text"
      },
      "kategori_Id" : {
      "type" : "integer"
      }
    
    
    
    "myindex" : {
      "settings" : {
        "index" : {
          "number_of_shards" : "2",   (I updated this value if i use single node, it is 1 , if 2 or 3 node, it is 2)
         "provided_name" : "myindex",
     "creation_date" : "1544329622243",
     "requests" : {
       "cache" : {
         "enable" : "true"
       }
     },
     "number_of_replicas" : "1",
     "queries" : {
       "cache" : {
         "enabled" : "true"
       }
     },
     "uuid" : "Su7m9L6yQHGKkMTOZcF4tw",
     "version" : {
       "created" : "6050299"
          }
        }
      }
    }     
    

    }

  2. I indexed 50 MB of documents with this schema

  3. From then on I don't index anything else; I am only testing search performance. During the search tests no other tasks (indexing, etc.) are running.

B) When - Then
I run the load tests with the npm loadtest package from 3 separate servers at the same time.

B1. When 1 node server active, others passive (shut down)

  • When I run URI term-search queries, I can sustain 3000 concurrent requests without problems

B2. When 2 node servers active, the other 1 node passive (shut down)

  • When I run URI term-search queries, I can sustain 2500 concurrent requests without problems

B3. When the 3-node cluster is active (worst case)

  • When I run URI term-search queries, I can sustain only 1800 concurrent requests without problems

I really need a solution to this problem; can anyone help? Even ideas on how to research this problem would be useful (I am not a native English speaker). Can you give me more specific search advice for my question?

Thanks.

Read this and specifically the "Also be patient" part.

It's fine to answer on your own thread after 2 or 3 days (not including weekends) if you don't have an answer.

The client usually has a list of all available nodes and handles failover. You can also use a load balancer if your client is not able to handle this.
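To illustrate what "a list of all available nodes" means in practice, here is a minimal sketch of that failover logic in Python. The node URLs are made up to match the A/B/C example in the question; real clients such as elasticsearch-py implement this for you (plus retries and node sniffing) when you pass them a list of hosts.

```python
import time

class NodePool:
    """Round-robin over a static node list, skipping nodes marked dead.

    A dead node becomes eligible again after `revive_after` seconds, so a
    restarted server rejoins the rotation automatically.
    """

    def __init__(self, nodes, revive_after=60.0):
        self.nodes = list(nodes)
        self.revive_after = revive_after
        self.dead = {}   # node -> timestamp when it was marked dead
        self._i = 0

    def mark_dead(self, node):
        self.dead[node] = time.monotonic()

    def next(self):
        # Try each node at most once per call.
        for _ in range(len(self.nodes)):
            node = self.nodes[self._i % len(self.nodes)]
            self._i += 1
            died_at = self.dead.get(node)
            if died_at is None or time.monotonic() - died_at > self.revive_after:
                self.dead.pop(node, None)
                return node
        raise RuntimeError("all nodes are down")

# Hypothetical cluster addresses (servers A, B, C from the question).
pool = NodePool(["http://a:9200", "http://b:9200", "http://c:9200"])
print(pool.next())              # first node in the rotation
pool.mark_dead("http://b:9200")
print(pool.next())              # b is skipped, rotation continues past it
```

A load balancer in front of the cluster achieves the same effect without client-side logic, which is the simpler option if your client can't be changed.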

This seems excessive. Where did you get this from??

I would recommend starting with a cluster with the default configuration and then gradually increasing the number of concurrent queries until you see how many the cluster can handle at the latency you require. Higher values in the config do not necessarily mean better performance. It can often be the complete opposite.

While you are testing, monitor resource usage so you can see what is limiting performance. As your data is likely to be cached, it is not unlikely that you will be limited by CPU.
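That ramp-up approach can be sketched like this (everything here is illustrative; in a real test you would replace `fake_search` with an HTTP call against your cluster and use percentile latencies rather than a single batch timing):

```python
import concurrent.futures as cf
import time

def ramp_concurrency(send_request, levels, latency_budget_s):
    """Increase concurrency step by step; stop once a batch exceeds the budget.

    send_request: callable issuing one search and returning when it completes.
    Returns the highest level that stayed within the budget (0 if none did).
    """
    best = 0
    for level in levels:
        with cf.ThreadPoolExecutor(max_workers=level) as pool:
            start = time.monotonic()
            futures = [pool.submit(send_request) for _ in range(level)]
            for f in futures:
                f.result()          # propagate any request errors
            elapsed = time.monotonic() - start
        if elapsed > latency_budget_s:
            break                   # the cluster fell behind at this level
        best = level
    return best

# Stand-in for a real search call (e.g. urllib.request against :9200).
def fake_search():
    time.sleep(0.001)

print(ramp_concurrency(fake_search, [10, 50, 100], latency_budget_s=5.0))
```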


Also your test is hitting just one of your nodes, which is then having to act as a coordinating node, routing requests to the other nodes and collating the results on the way back. This takes some effort, and is unrealistic: in a real deployment you would be sending requests to multiple nodes simultaneously, spreading out the cost of coordination.
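For example, instead of pointing every loadtest instance at one IP, you could split the same request budget evenly across all three nodes. A sketch (the node names and the -n/-c numbers are taken from the question; the helper itself is made up):

```python
def split_load(total_requests, concurrency, nodes):
    """Split a load test evenly across coordinating nodes.

    Returns one (node, requests, concurrency) tuple per node; remainders go
    to the first nodes so the totals still add up exactly.
    """
    n = len(nodes)
    plans = []
    for i, node in enumerate(nodes):
        reqs = total_requests // n + (1 if i < total_requests % n else 0)
        conc = concurrency // n + (1 if i < concurrency % n else 0)
        plans.append((node, reqs, conc))
    return plans

# One loadtest command per node instead of everything hitting one IP.
for node, reqs, conc in split_load(50000, 2000, ["a", "b", "c"]):
    print(f"loadtest -n {reqs} -c {conc} -k "
          f"'http://{node}:9200/myindex/_search?q=kategori_Id:12'")
```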


Hi, thanks for the pointer, and sorry for the hurry. I read it again but could not find a support-only service with an SLA; is there one? I want to build my own cluster on my own servers, but I would like to know if there is a support service with SLA-based pricing. If such a service exists, can you please share it? (For personal subscriptions, not enterprise.)

Hi Christian,

"Load balancer", it is exactly what i was looking for, thanks for clarification.

About configuration:

Actually, I read the Elastic documentation and applied that information on my MacBook Pro; I tested many different scenarios and that configuration gave the best results on my laptop, but I am still reading and learning more...

  • Yes, the documentation says that a high thread count just thrashes the CPU if you don't use it the right way; thanks for the good advice
  • The queue limits are high because I am creating requests with high concurrency; it works great

I will start from the beginning and test all scenarios until I find the best result, as you advised; thanks again.

Hi David, thanks for the great advice. I am really shocked that so many posts on the internet don't touch on that point, even though it is the most important one when the topic is high concurrency and scalability.

Now I feel comfortable sending requests to separate nodes.

Is there any official documentation about that point? I could not find anything about it in the official documentation; if there is, can you please share a link? My point is that an Elasticsearch cluster is for consistency, not for handling and distributing highly concurrent requests, so we need to do that in other ways, as you described. But if there is more detailed documentation about this kind of thing, I would like to know about it.

Thank you very much.

For instance there's the section on coordinating-only nodes as well as this note and this paragraph on the same page:

All nodes know about all the other nodes in the cluster and can forward client requests to the appropriate node.

But let me turn your question around: where are you seeing any indication that you shouldn't send requests to all nodes in parallel?

I don't understand this point. Elasticsearch is certainly designed for handling lots of concurrent requests.

Yes, see Subscriptions | Elastic Stack Products & Support | Elastic. The Gold and Platinum levels come with support.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.