Question about the data model: I am using Elasticsearch as a pure "JSON store" (no search, just index/update/get) with big documents (each ~10 MB of JSON) that contain a big nested array:
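To give an idea of the shape, a stripped-down version of one document looks roughly like this (the index name, id, and field names here are invented for illustration; the real "cars" array has far more entries, adding up to ~10 MB):

```
# hypothetical index, id, and fields, just to show the structure
PUT car-pages/_doc/page-1
{
  "page_id": "page-1",
  "cars": [
    { "make": "Toyota", "model": "Corolla", "year": 2018, "price": 14500 },
    { "make": "Honda",  "model": "Civic",   "year": 2020, "price": 18900 }
  ]
}
```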
It's convenient because I need all the cars of one document to render the web page, so a simple GET does the trick. But I can see that indexing takes a lot of memory (observed via VisualVM), and I see long GC pauses from time to time (in the Elasticsearch log file).
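The page rendering just fetches the whole document by id, something like this (same hypothetical index and id as above):

```
# retrieve the full document in one round trip
GET car-pages/_doc/page-1
```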
So I am wondering: is Elasticsearch a good fit for such big documents, or not?