Elasticsearch: when I index, there seems to be a limit at which it stops indexing and increments docs.deleted

Hi,
I have an application that uses an Elasticsearch index template, and I index into daily indices named template+(yyyy-MM-dd).
So far, so good, I think, but each index seems to have a limit: once I reach it, I can't index any more into that index. I tested and I can index into another index, but less and less so.
Why do I have what looks like a per-index limit?
How can I solve it?
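For context, the indexing code is roughly like this; a minimal sketch using Python and the plain REST API (the host, field names, and document layout are assumptions; the log-dia- prefix is the one in the index list below):

```python
import datetime
import uuid

import requests

ES = "http://localhost:9200"  # assumed local node, no auth

# Daily index name: fixed prefix + (yyyy-MM-dd), as described above.
index = "log-dia-" + datetime.date.today().strftime("%Y-%m-%d")

# Each document is indexed under an explicitly supplied ID (a version 4 UUID).
doc_id = str(uuid.uuid4())
doc = {"@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
       "message": "example log line"}  # field names are illustrative

resp = requests.put(f"{ES}/{index}/_doc/{doc_id}", json=doc)
resp.raise_for_status()
print(resp.json()["result"])  # "created" on first write, "updated" on a rewrite
```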

health status index              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana            nP2yTlkuQxiBTFieX-K3wQ   1   0          2            1       14kb           14kb
green  open   log-dia-2021-11-14 DWz_kLNFRIeIun0a1PHcOQ   1   0     412420       151431    299.2mb        299.2mb
green  open   log-dia-2021-11-10 8E1ivgQ9Q2q3YOBldP-TnQ   1   0    1542586            0    766.7mb        766.7mb
green  open   log-dia-2021-11-12 AbVLbNdoSjSQ9THeUpP2hA   1   0     803606            0    420.3mb        420.3mb
green  open   log-dia-2021-11-16 lWh6uoW5QCOazvSsbnLQpQ   1   0     382653        11094    224.5mb        224.5mb
green  open   log-dia-2021-11-15 U2u4ZOCqTw6J9Hcn4H12VA   1   0     220177            0    120.2mb        120.2mb
green  open   log-dia-2021-11-9  ZcIC703RTuaZeTSXj0ilBg   1   0     981907            0    477.2mb        477.2mb
green  open   log-dia-2021-11-11 seXI_0D0TF2-IyLBJLzXYA   1   0    1119950        27668      586mb          586mb
green  open   log-dia-2021-11-08 PMAdP4ZpSqyg0gkgnOLg5Q   1   0    1489173       181958    798.7mb        798.7mb
green  open   log-dia-2021-11-13 pJuLYs8XTlKF8i7aPXM3CA   1   0     630403        72748    378.3mb        378.3mb

Best regards.
Julio

How are you indexing into Elasticsearch? Are you by any chance specifying the document ID before indexing? The deleted documents can indicate updates, which can happen if multiple documents with the same ID are indexed.
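You can see that effect directly: writing two documents under the same ID leaves one live document and one deleted (superseded) version behind, and the latter is what docs.deleted counts. A minimal sketch against a throwaway index (test-deletes is a hypothetical name):

```python
import requests

ES = "http://localhost:9200"  # assumed local node

# Index the same ID twice: the second write is an update, not a second document.
print(requests.put(f"{ES}/test-deletes/_doc/1", json={"v": 1}).json()["result"])  # "created"
print(requests.put(f"{ES}/test-deletes/_doc/1", json={"v": 2}).json()["result"])  # "updated"

# Refresh so the statistics are current, then read the doc counters.
requests.post(f"{ES}/test-deletes/_refresh")
docs = (requests.get(f"{ES}/test-deletes/_stats/docs")
        .json()["indices"]["test-deletes"]["primaries"]["docs"])
print(docs)  # {'count': 1, 'deleted': 1} until a merge purges the old version
```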

Elasticsearch has a limit of around 2 billion documents per shard (not per index), but at that point indexing fails with an error rather than producing deletes.
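To rule the hard limit out in a case like this, you can list the per-shard document counts with the cat API (the cap is Lucene's, roughly 2.1 billion documents per shard):

```python
import requests

ES = "http://localhost:9200"  # assumed local node

# Doc count per shard; compare against Lucene's per-shard cap (~2.1 billion).
r = requests.get(f"{ES}/_cat/shards",
                 params={"v": "true", "h": "index,shard,prirep,docs"})
print(r.text)
```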

Yeah, I specify the ID, but each ID is a version 4 UUID, so the probability that documents are being updated through ID collisions is practically zero. I think that is not the problem.
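(For reference, generating IDs that way looks roughly like this; a minimal illustration, not the actual application code:)

```python
import uuid

# One fresh version 4 UUID per document: 122 random bits, so the chance of
# two documents accidentally getting the same ID is negligible in practice.
doc_id = str(uuid.uuid4())
print(doc_id)  # e.g. "3f2b8a6e-..." (random each run)
```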

If there are issues in the cluster, this may force the sending party to resend bulk requests, which in turn can result in updates that show up as deletes.
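One way to check for this is the per-item result field in the _bulk response: a retried item comes back as "updated" instead of "created". A sketch, assuming the newline-delimited bulk format over the REST API and a throwaway index name:

```python
import json

import requests

ES = "http://localhost:9200"  # assumed local node

# Two index actions with explicit IDs, newline-delimited (the _bulk format).
actions = [
    {"index": {"_index": "test-bulk", "_id": "a"}}, {"v": 1},
    {"index": {"_index": "test-bulk", "_id": "b"}}, {"v": 1},
]
body = "\n".join(json.dumps(a) for a in actions) + "\n"

r = requests.post(f"{ES}/_bulk", data=body,
                  headers={"Content-Type": "application/x-ndjson"})
for item in r.json()["items"]:
    info = item["index"]
    # "created" on first delivery; "updated" if this ID was already indexed,
    # e.g. because the client retried the whole bulk request after a timeout.
    print(info["_id"], info["result"])
```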

And how can I detect whether there is a problem in the cluster?
Thanks
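A couple of standard starting points are the cluster health API and the thread-pool rejection counters (rejected writes are a common reason a client ends up retrying bulk requests). A minimal sketch, assuming a local node:

```python
import requests

ES = "http://localhost:9200"  # assumed local node

# Overall cluster state: green / yellow / red, plus pending cluster tasks.
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], "pending tasks:", health["number_of_pending_tasks"])

# Rejected tasks per thread pool; rising rejections on the write pool mean
# the cluster is pushing back on indexing and clients may be retrying bulks.
pools = requests.get(f"{ES}/_cat/thread_pool",
                     params={"v": "true", "h": "node_name,name,rejected"})
print(pools.text)
```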

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.