100k documents does not sound like a lot of data. I would recommend using a larger data set and making sure that you index into a single shard, as compression efficiency generally improves with shard size. Also force merge down to a single segment once indexing is done to get a fair comparison.
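The steps above can be sketched with the REST API. This is a minimal sketch assuming a local cluster at `localhost:9200` and an index named `compression-test` (both placeholders); the `index.codec: best_compression` setting is optional and shown only as an example of a codec you might be comparing.

```shell
# Create the index with a single primary shard and no replicas;
# optionally switch from the default LZ4 codec to DEFLATE.
curl -X PUT "localhost:9200/compression-test" \
  -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index.codec": "best_compression"
  }
}'

# ... index the full data set here ...

# Force merge down to a single segment so per-segment overhead
# does not skew the size comparison.
curl -X POST "localhost:9200/compression-test/_forcemerge?max_num_segments=1"

# Check the resulting on-disk size.
curl -X GET "localhost:9200/_cat/indices/compression-test?v&h=index,store.size"
```

Comparing `store.size` across runs with different mappings or codecs, after the force merge, gives a much more stable number than comparing freshly indexed multi-segment shards.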
This old blog post shows how we did a similar comparison for a much older version of Elasticsearch.
It is also worth noting that the setting changes you described will affect how you can query, and potentially reprocess, your data, so make sure any side effects of the mapping changes are acceptable for your use case.