Performance tuning in Elasticsearch for application-level joins

  • I have 2 tables (indices) that I have to join on 3 common keys, so I'm doing application-level joins (see the sketch after this list)

  • But it is taking a long time to execute, and the dataset is fairly large, around 3-4 million records

  • Can you tell me how I can improve the performance of this?

  • The reason I'm using Elasticsearch in the first place is faster search and faster match_phrase query results
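A minimal sketch of what such an application-level join typically looks like with the Python client (assuming the 8.x `elasticsearch` package; the index names `entitlements` and `transactions` and the key fields `org_id`, `role`, `region` are hypothetical stand-ins for the real schema):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Step 1: fetch the entitlement rows for one user from the first index.
resp = es.search(
    index="entitlements",
    query={"term": {"user_id": "u123"}},
    size=1000,
)

# Step 2: collect the 3-part join keys from the hits.
keys = {
    (h["_source"]["org_id"], h["_source"]["role"], h["_source"]["region"])
    for h in resp["hits"]["hits"]
}

# Step 3: query the second index once per key. This client-side
# fan-out is typically what dominates the runtime at 3-4 million records.
for org_id, role, region in keys:
    txns = es.search(
        index="transactions",
        query={
            "bool": {
                "filter": [
                    {"term": {"org_id": org_id}},
                    {"term": {"role": role}},
                    {"term": {"region": region}},
                ]
            }
        },
    )
    # ... merge txns["hits"]["hits"] into the application's result set
```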

A common way to improve performance is to denormalise the data model in order to avoid any kind of joins. How this is done will naturally depend on the nature and size of the data as well as how frequently the different types of data are added, deleted and/or updated.
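To make that concrete, here is a minimal sketch of what denormalisation could look like here (all field names are assumptions, not the poster's actual schema): instead of joining on the 3 common keys at query time, each transaction document carries a copy of the information it would otherwise be joined with, so a single filtered query answers the question.

```python
# Normalised: two documents in two indices, joined at query time
# on (org_id, role, region). Field names are hypothetical.
entitlement_doc = {"user_id": "u123", "org_id": "o9", "role": "manager", "region": "EU"}
transaction_doc = {"txn_id": "t42", "org_id": "o9", "role": "manager", "region": "EU", "amount": 100}

# Denormalised: the transaction carries the users entitled to see it,
# copied in from the entitlements data at index time. A single term query
# ({"term": {"visible_to": "u123"}}) now replaces the whole join.
denormalised_doc = {
    "txn_id": "t42",
    "org_id": "o9",
    "role": "manager",
    "region": "EU",
    "amount": 100,
    "visible_to": ["u123", "u456"],
}
```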

The data size is quite large for that. With denormalisation, if there is a change in the data in my normal table, I have to make the same change in the denormalised table too, right?

What are the sizes of the two indices in terms of documents and GB? What is the type of data in the two indices? What is the average size of the documents in the two indices? How often are documents in the two indices updated?

One table has around 0.67 GB of data and the other around 0.24 GB. Changes will happen fairly often. One table contains the users and their entitlements (like manager, representative); the other contains the transactions that a user can see based on those entitlements.

If you want to optimise query performance, you often have to instead do more work when updating data. As those tables/indices are very small, I would recommend denormalising the data if search performance is important.
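The "more work when updating" part might look like the following sketch: when an entitlement changes, the change is propagated into the denormalised transactions index with `_update_by_query` (the `visible_to` field follows the earlier sketch and is an assumption, as is the 8.x Python client):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# User u123 has lost an entitlement: remove them from every
# transaction document that currently lists them as a viewer.
es.update_by_query(
    index="transactions",
    query={"term": {"visible_to": "u123"}},
    script={
        "source": "ctx._source.visible_to.removeIf(u -> u == params.user)",
        "params": {"user": "u123"},
    },
    conflicts="proceed",  # tolerate concurrent writes; re-run if needed
)
```

Since the indices are well under 1 GB combined, paying this extra write-time cost on each entitlement change is usually a good trade for join-free reads, even if changes are fairly frequent.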

Hi, can we connect? I guess then I will be able to explain more exactly what I'm trying to achieve.

Please explain it here so the community can benefit.
