As I understand it, the mapping definition doesn't distinguish between arrays and single values. In other words, if I have a mapping of `"myfield": { "type": "integer" }`, then `myfield` could contain 1, [1], or [1,2,3]. Is my understanding correct?
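To illustrate what I mean (the index name and documents here are just my own example, not anything from the docs):

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "myfield": { "type": "integer" }
    }
  }
}

PUT my-index/_doc/1
{ "myfield": 1 }

PUT my-index/_doc/2
{ "myfield": [1, 2, 3] }
```

As far as I can tell, both documents are accepted against the same mapping.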
Is it possible to be explicit? In other words, to get a mapping error if I try to put [1,2,3] where I expect a single value?
If this is not possible, can I ask for the reasoning behind the decision to treat single values and arrays this way? (It may help me better shape the design of a feature I'm currently working on.)
I don't know the reasoning of the original author, but when I justify this to myself I think of it as a quirk of how Elasticsearch abstracts over Lucene, which does the indexing and storage. In Lucene, documents don't have nested structure - they only have fields that have values, and fields can have as many values as you'd like. Given that Lucene works that way, Elasticsearch would be adding an unnecessary constraint if it forced you to decide single- or multi-valued-ness up front. And Elasticsearch isn't typically in the business of adding constraints unless it has to.
Ah, the relationship to Lucene makes some sense - but it still seems strange that, if I can specify a type and have ES enforce it, it won't enforce single vs. multi value.
It's certainly not a deal breaker - but it's a quirk I'll have to handle client side.
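For anyone else landing here, a minimal sketch of what that client-side check might look like. This assumes documents arrive as Python dicts before indexing; the field set and helper name are hypothetical, not part of any Elasticsearch client.

```python
# Fields we expect to hold exactly one value, even though the ES mapping
# would happily accept an array for them.
SINGLE_VALUED_FIELDS = {"myfield"}


def validate_single_valued(doc: dict) -> None:
    """Raise ValueError if an expected single-valued field contains a list."""
    for field in SINGLE_VALUED_FIELDS:
        if isinstance(doc.get(field), list):
            raise ValueError(f"{field!r} must be a single value, got a list")


validate_single_valued({"myfield": 1})        # accepted
# validate_single_valued({"myfield": [1, 2]}) # would raise ValueError
```

Run before each index request; it's a workaround, not a substitute for mapping-level enforcement.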