Measure document size, weight and performance of an automatic mapping, then improve it manually

Is it possible to know the weight of a document in bytes, and its impact on the index in terms of indexing cost?

The goal is to optimize: to know how to configure the mapping of each field appropriately.

In my case I have a log in JSON format, one line per log entry.
Since I don't know in advance all the fields that may appear in this log, I have Logstash parse each JSON line into Elasticsearch fields, and I leave Elasticsearch in dynamic mapping mode for a trial period so that it builds up the set of all fields seen during that time.
It therefore creates a mapping for me on the fly.
Then I would like to know whether what it generated is sensible, and whether I can improve it: perhaps some fields should not be searchable, or should be mapped only as keyword, etc.
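As an illustration of the kind of manual tightening described above, here is a sketch of an explicit mapping for a new index. The index name `my-logs-optimized` and the field names are hypothetical; the idea is that a field nobody searches can be stored without being indexed, and identifier-like fields can be mapped as `keyword` only:

```
PUT /my-logs-optimized
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "message":    { "type": "text", "index": false },
      "client_ip":  { "type": "keyword" },
      "user_agent": { "type": "keyword", "doc_values": false }
    }
  }
}
```

Here `"index": false` keeps `message` in `_source` but makes it unsearchable, and `"doc_values": false` on `user_agent` saves disk if the field is never aggregated or sorted on.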

Sorry for my bad English.

Hello Fabrice,

I think you can get the size of the `_source` by using the mapper-size plugin.
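A minimal sketch, assuming the plugin has been installed on every node with `bin/elasticsearch-plugin install mapper-size` and using a hypothetical index name `my-logs`. Enabling `_size` in the mapping makes Elasticsearch store the byte size of each document's `_source` as a queryable field:

```
PUT /my-logs
{
  "mappings": {
    "_size": { "enabled": true }
  }
}

GET /my-logs/_search
{
  "fields": ["_size"],
  "sort": [{ "_size": "desc" }]
}
```

The search above would return the heaviest documents first, which helps spot which log lines weigh the most.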

And maybe the index stats API: Index stats API | Elasticsearch Guide [8.7] | Elastic
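For example (index name `my-logs` is hypothetical), restricting the stats to the store and docs sections gives a rough average size per document:

```
GET /my-logs/_stats/store,docs
```

Dividing `store.size_in_bytes` by `docs.count` in the response gives an approximate on-disk weight per document, including index structures, not just the raw `_source`.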

Also, the analyze index disk usage API gives a picture of the disk usage of each field.
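A sketch of a call to that API, again with a hypothetical index name. The `run_expensive_tasks=true` flag is required because the analysis is costly:

```
POST /my-logs/_disk_usage?run_expensive_tasks=true
```

The response breaks down, per field, how much disk is consumed by the inverted index, stored fields, doc values, and so on, which points directly at the fields worth remapping.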

Moreover, the field usage stats API can help you understand which parts of your mapping are unused.
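A sketch of that call, with the same hypothetical index name:

```
GET /my-logs/_field_usage_stats
```

The response reports, per shard and per field, how often each field has actually been used by queries and aggregations since the shard started; fields with no usage are candidates for `"index": false` or removal from the mapping.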

I will test as soon as possible, thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.