I have an existing index with a very large number of types (more than 10,000).
I am trying to put a mapping for one of the types.
It works, but it is very slow (more than 10 seconds).
Is that a problem?
How can I fix it?
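For reference, the request itself is just an ordinary put mapping on a single type. A minimal sketch with the Python client (the index, type, and field names here are placeholders, not my real ones):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Placeholder names; the real index already contains 10,000+ types.
# The doc_type argument is from the older (pre-7.x) Python client.
es.indices.put_mapping(
    index="my_index",
    doc_type="my_type_00042",
    body={
        "properties": {
            "new_field": {"type": "string", "index": "not_analyzed"}
        }
    },
)
```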
That sounds like an abuse of types to be honest. You should not have that many. Each change to a mapping for a type will require the cluster state to be updated. With that many types your mappings are going to be very large, which will cause a lot of data to be distributed across the cluster for every change. This is generally worse in Elasticsearch 1.x as delta cluster state updates were not available until Elasticsearch 2.x, but with that number of types I would not be surprised if you had issues even in more recent versions.
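You can get a feel for how heavy each of these updates is by looking at how large the mapping for that index has become. A rough sketch with the Python client (the index name is a placeholder):

```python
import json
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Fetch the full mapping for the index and see how many types it contains
# and how large the serialized mapping is. Every put-mapping call has to
# publish an updated cluster state that includes all of this.
mappings = es.indices.get_mapping(index="my_index")
type_mappings = mappings["my_index"]["mappings"]

print("number of types:", len(type_mappings))
print("mapping size in bytes:", len(json.dumps(type_mappings).encode("utf-8")))
```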
As far as I know, the Elasticsearch-to-MySQL analogy is:
index -> database
type -> table
Is that wrong?
Why does updating the mapping of one type require changes for every type?
Elasticsearch is not a relational system, so there is no natural direct mapping to relational database concepts. You need to model data in Elasticsearch based on how you want to be able to query it. This talk and this chapter from the Definitive Guide might be useful.
Which version of Elasticsearch are you using?
ES 2.x
Then I think the only way is to change the way you model your data and avoid the excessive use of types.
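One common way to do that, assuming your types share a broadly similar structure, is to collapse them into a single type and keep the old type name as an ordinary field you filter on. A rough sketch (all names are placeholders, not specific to your data):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# One index, one mapping type, with the former "type" kept as a filterable
# field. Adding a new logical type then no longer changes the mapping at all.
es.indices.create(
    index="my_unified_index",
    body={
        "mappings": {
            "doc": {
                "properties": {
                    "entity_type": {"type": "string", "index": "not_analyzed"},
                    "title": {"type": "string"},
                }
            }
        }
    },
)

# Index a document and query it by its logical type.
es.index(index="my_unified_index", doc_type="doc",
         body={"entity_type": "order", "title": "example"})
res = es.search(index="my_unified_index", doc_type="doc",
                body={"query": {"term": {"entity_type": "order"}}})
```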
So now,
should I migrate each type to its own index?
That would leave us with a lot of indices (10k+). Is that OK?
Having a very large number of small indices/shards can also be very inefficient, as it uses up a lot of resources and also results in very large mappings, especially on a small cluster. Can you tell us a bit more about the size and nature of your data? What is it you are looking to achieve?
This blog post, https://www.elastic.co/blog/found-multi-tenancy, recommends using multiple indices for multi-tenancy.
We have roughly:
500 GB of data
6 nodes with 32 GB of memory each
6 nodes with 8 CPU cores each
10k+ indices (one index per business user)
It may work as long as you have a large enough cluster and make sure that you only have 1 shard and 1 replica per index. In addition to the one-index-per-user example, the blog post also points out the issues with having a very large number of indices and shards and why this will not scale, as each shard carries a certain amount of overhead. With this model, 10,000 users will give you 10,000 indices and 20,000 shards, which is quite a lot but might work on a reasonably sized cluster. I would however be concerned with this model if that number is expected to grow considerably, as having lots of very small shards is very inefficient. You may want to read this blog post as well.
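For example, you could use an index template so that every per-user index is created with a single primary shard and one replica. A sketch with the Python client (the template name and the "user_*" naming pattern are just placeholders for whatever scheme you use):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Index template (2.x syntax) so every per-user index created later gets
# exactly 1 primary shard and 1 replica.
es.indices.put_template(
    name="per_user_indices",
    body={
        "template": "user_*",
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 1,
        },
    },
)

# Creating a tenant's index now only needs the name; the settings come
# from the template.
es.indices.create(index="user_00042")
```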
Thanks,
I will test this model (one index per user) on my real data,
and I will report back on this topic if I run into problems.