But if the array has many elements, splitting produces many separate documents, so storing each element as its own log is not suitable for my use case. A single incoming log would then generate many hits, which could skew the results in Kibana.
Instead, I would like to process the same log and store all of those key-value pairs in one document.
Since the key names repeat across the array elements, I am thinking of iterating over the array and appending each element's key name to the field names when storing its values.
If that is possible, it would work well for my use case.
If there is another approach I could try, please suggest it.
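To make the idea concrete, here is a rough sketch in plain Python of the flattening I have in mind (the field names apiStats and name are just placeholders, not the actual field names in my logs):

```python
# Sketch only: flatten an array of per-key counters into one document,
# prefixing each counter with the element's key name.
# "apiStats" and "name" are assumed field names for illustration.

def flatten_event(event: dict) -> dict:
    """Merge every array element into the top-level event, prefixing
    failedCount/successCount with the element's key name."""
    flat = {k: v for k, v in event.items() if k != "apiStats"}
    for item in event.get("apiStats", []):
        prefix = item["name"]
        flat[f"{prefix}_failedCount"] = item["failedCount"]
        flat[f"{prefix}_successCount"] = item["successCount"]
    return flat


if __name__ == "__main__":
    sample = {
        "timestamp": "2024-01-01T00:00:00Z",
        "apiStats": [
            {"name": "login", "failedCount": 2, "successCount": 98},
            {"name": "search", "failedCount": 5, "successCount": 95},
        ],
    }
    print(flatten_event(sample))
    # {'timestamp': '2024-01-01T00:00:00Z',
    #  'login_failedCount': 2, 'login_successCount': 98,
    #  'search_failedCount': 5, 'search_successCount': 95}
```

So one incoming log would become a single document with fields like login_failedCount, login_successCount, search_failedCount, and so on.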
This could lead to a large number of fields in your index, which may impact performance.
Also, how are you planning to use this data? For example, if you append the key name to the failedCount and successCount fields, you won't be able to plot a chart comparing which API has more failures or successes, because every API ends up with differently named fields.
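As a rough illustration of the difference (the field and value names below are made up):

```python
# Shape A: key name baked into the field name -- hard to aggregate across APIs,
# and every new API adds two more fields to the index mapping.
flattened_doc = {
    "login_failedCount": 2,
    "login_successCount": 98,
    "search_failedCount": 5,
    "search_successCount": 95,
}

# Shape B: one document per API element, with the key kept as a value --
# the failedCount/successCount fields stay the same for every API.
per_api_docs = [
    {"api": "login", "failedCount": 2, "successCount": 98},
    {"api": "search", "failedCount": 5, "successCount": 95},
]
```

With shape B, a terms aggregation on the api field lets you compare failure counts across APIs in a single visualization; with shape A, each new API introduces new field names.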