A request's life in Elasticsearch

Hey folks,

I know these are very basic questions, but I want to clear up some concepts of mine related to how ES handles a request. So here it goes:

1.) INDEX/UPDATE REQUEST:
When I use a Java client (or a client in any other language, for that matter), I put all the data nodes (and client nodes, if any) on which my data is allocated into the configuration. Say right now I have three data nodes. When I send multiple requests (a single request multiple times, or a bulk request), the client distributes these requests in a round-robin fashion to all the configured nodes. Each node decides which shard (and therefore which node) the document should reside on and forwards the request to that node. That node indexes the document into the shard it holds, and once it has written to the shard, it sends the request on to the node holding the replica. I'm not sure whose responsibility it is to send the request to the node containing the replica and to do the replication.
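For reference, this is roughly how I configure the client and send a bulk request today (a minimal sketch using the 1.x-style Java TransportClient API; cluster name, host names, and index/type names are just placeholders):

```java
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class BulkIndexExample {
    public static void main(String[] args) {
        // Cluster name must match the cluster; sniffing lets the client discover the other nodes.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster")        // placeholder
                .put("client.transport.sniff", true)
                .build();

        TransportClient client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("data-node-1", 9300))
                .addTransportAddress(new InetSocketTransportAddress("data-node-2", 9300))
                .addTransportAddress(new InetSocketTransportAddress("data-node-3", 9300));

        // Collect several index operations into a single bulk request.
        BulkRequestBuilder bulk = client.prepareBulk();
        bulk.add(client.prepareIndex("myindex", "mytype", "1")
                .setSource("{\"user\":\"piyush\",\"msg\":\"hello\"}"));
        bulk.add(client.prepareIndex("myindex", "mytype", "2")
                .setSource("{\"user\":\"piyush\",\"msg\":\"world\"}"));

        BulkResponse response = bulk.execute().actionGet();  // synchronous wait
        System.out.println("bulk took " + response.getTookInMillis()
                + " ms, failures: " + response.hasFailures());

        client.close();
    }
}
```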

2.) SEARCH REQUEST:
Again, with the client sending requests to all the configured nodes, the multiple search requests are sent to different data nodes in a round-robin fashion. Each node parses the query it receives and acts as the coordinating node: it distributes the request to all shards, gathers and combines the results, and sends the data back to the client.
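And this is roughly how I search (a minimal sketch, reusing the `client` from the indexing example above; index and field names are placeholders):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilders;

// Whichever node the client sends this to acts as the coordinating node for the search.
SearchResponse response = client.prepareSearch("myindex")
        .setQuery(QueryBuilders.matchQuery("msg", "hello"))
        .setSize(10)                    // only the top 10 hits are returned
        .execute().actionGet();

System.out.println("total hits: " + response.getHits().getTotalHits());
```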

Am I right about these concepts?

Other questions:

1.) If I have a cluster with, say, 10 data nodes, do I have to mention all the nodes in the configuration of my client? That would mean that whenever I add a node to the cluster, I have to change the configuration of my client. What if I use a load balancer on top of my ES nodes and let it manage the request distribution to all the nodes of the cluster?

2.) Does it mean that the more nodes (client or data) there are, the better the performance of inserts/updates and reads will be?
3.) Does my Java client talk to individual nodes, and if yes, is that a synchronous or an asynchronous operation?

Thanks. These are basic questions, but it's always better to be clear on the basics. :slight_smile:

Piyush

Indexing requests use a hash of the doc ID to determine the shard and forward the request to the primary shard, and from there to a quorum of replica shards on other nodes, before answering. By default, it is the responsibility of the node holding the primary shard to pass the request on to the replicas.
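The shard selection boils down to something like the following (a simplified sketch only; ES uses its own hash function internally, not `String.hashCode()`):

```java
// Simplified illustration of how a document is mapped to a primary shard:
// hash the routing value (the doc ID by default) and take it modulo the
// number of primary shards. This is also why the number of primary shards
// cannot change after index creation.
static int shardFor(String routing, int numberOfPrimaryShards) {
    int hash = routing.hashCode();                      // stand-in for ES's real hash
    return Math.floorMod(hash, numberOfPrimaryShards);  // always non-negative
}
```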

Search requests have different modes. The default mode, for a client node that can access the cluster state, is to dispatch the request only to the nodes holding shards of the indices involved in the search (aka "broadcast shard mode"). In a second phase, after the scoring has been computed, only those nodes are asked to deliver the document hits the query actually requires (e.g. the docs for the first ten top hits). This is known as "query then fetch".

The filtering of the relevant shards by matching hashes of doc IDs is known as shard routing. It can be overridden by custom shard routing.
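With the Java client, a custom routing value can be set per request (a small sketch; the routing value "user42" and the index/type names are just placeholders):

```java
// Index with a custom routing value so that all docs sharing it land on the same shard.
client.prepareIndex("myindex", "mytype", "7")
        .setRouting("user42")
        .setSource("{\"user\":\"user42\",\"msg\":\"routed\"}")
        .execute().actionGet();

// A search with the same routing value only touches the shard(s) selected by it.
SearchResponse routed = client.prepareSearch("myindex")
        .setRouting("user42")
        .setQuery(QueryBuilders.matchQuery("user", "user42"))
        .execute().actionGet();
```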

The round-robin fashion you mentioned is only relevant if you have replicas enabled.

In upcoming ES versions, re-parsing the query will no longer be necessary. Queries will be parsed once and passed as such to the other nodes.

  1. No, one host is enough. In practice, you should connect the client to a small subset of hosts so you have failover. I'm not sure why you would use a load balancer on top of ES. ES is very good at internal load balancing (except when you mess with shard allocation).

  2. The more nodes, the better ES scales, yes. Updates have bad performance in general and are very different from inserts.

  3. The Java client can talk to all the nodes you want in the configuration; from there on, ES takes control and forwards the requests. All ES operations are asynchronous. Of course, you can also choose to execute them synchronously, as in the sketch below.
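A minimal sketch of both styles with the Java client (reusing the `client` from above; the listener signatures shown are the 1.x/2.x-era ones, where `onFailure` takes a `Throwable`):

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.index.IndexResponse;

// Asynchronous: the call returns immediately, the listener fires on response or failure.
client.prepareIndex("myindex", "mytype", "3")
        .setSource("{\"user\":\"piyush\",\"msg\":\"async\"}")
        .execute(new ActionListener<IndexResponse>() {
            @Override
            public void onResponse(IndexResponse response) {
                System.out.println("indexed version " + response.getVersion());
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        });

// Synchronous: block on the returned future until the response arrives.
IndexResponse sync = client.prepareIndex("myindex", "mytype", "4")
        .setSource("{\"user\":\"piyush\",\"msg\":\"sync\"}")
        .execute().actionGet();
```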
