I am trying to use Elasticsearch to store objects that all share a similar set of fields, and many objects have the same values for some of those fields. The data is location-specific.
For example, many objects share the same value for a location field (city). Every object is ultimately associated with a location, a neighbourhood, and other localised features. I want to understand how to design my indices properly with respect to search query speed while also keeping storage usage in check.
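To make the data shape concrete, here is a minimal sketch of what my mapping and documents look like, written with the official Python client (8.x style); the index name `listings` and the field names are just illustrative:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical mapping: exact-match location fields as keywords
# (good for filters and aggregations), free text as a text field.
es.indices.create(
    index="listings",
    mappings={
        "properties": {
            "city": {"type": "keyword"},
            "neighbourhood": {"type": "keyword"},
            "description": {"type": "text"},
        }
    },
)

# Index one example document.
es.index(
    index="listings",
    document={
        "city": "Mumbai",
        "neighbourhood": "Bandra",
        "description": "2 BHK apartment near the station",
    },
)
```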
Here is my current thinking, and I might be completely off track:
- Create a single index that stores all objects, and filter on the relevant fields (city, neighbourhood, etc.) at query time (see the query sketch after this list).
- Create one index per location and store each location's objects in its own index. This should speed up my queries but would multiply the number of indices, and I am not sure how memory usage is affected by having many small indices.
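To make the first option concrete, this is roughly the kind of filtered query I have in mind (again a sketch with the Python client; the index and field names are assumptions from above):

```python
# Option 1: one shared index, narrowed down by city/neighbourhood
# at query time. Term filters on keyword fields are cacheable and
# do not contribute to scoring, so they are cheap to execute.
resp = es.search(
    index="listings",
    query={
        "bool": {
            "filter": [
                {"term": {"city": "Mumbai"}},
                {"term": {"neighbourhood": "Bandra"}},
            ]
        }
    },
)
print(resp["hits"]["total"])
```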
To give some more perspective: I currently have around 50-60 locations, but that number might grow to 500 or more.
Currently I am leaning towards the second strategy, since it should speed up my search queries as well as the aggregation stats I want to compute, but I am not sure how efficiently it would scale.
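For reference, these are the kinds of aggregation stats I mean; a sketch under the single-index layout assumed above:

```python
# Count objects per neighbourhood within one city. Under the
# single-index layout this is a filter plus a terms aggregation.
resp = es.search(
    index="listings",
    size=0,  # only the aggregation buckets are needed, not the hits
    query={"bool": {"filter": [{"term": {"city": "Mumbai"}}]}},
    aggs={
        "by_neighbourhood": {
            "terms": {"field": "neighbourhood", "size": 50}
        }
    },
)
for bucket in resp["aggregations"]["by_neighbourhood"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```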