We have an index with the following structure (see the sample document below).
Our index business_objects contains a link array; each link entry has a no plus one or more doc_type objects (doc_type1, doc_type2, ...). The objects within an entry are tied together via the no, e.g. 5793328-03-01.
The problem is that we have many doc_types (doc_type1, doc_type2, ..., doc_type100).
Each doc_type has some common fields such as date, id and mime_type, but also some doc_type-specific fields such as business_number, ...
So every new doc_type (e.g. doc_type101) adds more fields to the index mapping.
We are well over 1000 fields now.
How could we redesign this index so that we stay below the recommended limit of 1000 fields?
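For context: the 1000-field ceiling is the default value of the index.mapping.total_fields.limit index setting. We know it can be raised as a stopgap, roughly like this (index name as in our setup):

PUT business_objects/_settings
{
  "index.mapping.total_fields.limit": 2000
}

but we would rather fix the design than keep raising the limit.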
We were thinking of having only one generic doc_type and relying on its type field to determine the actual type of each document.
This generic doc_type would contain the union of all fields in (doc_type1, doc_type2, ..., doc_type100); a sketch of this variant follows the sample document below.
The problem is that, with this design, we can no longer use source filtering and field-level security filtering like this:
"except": [
"link.doc_type_1.*”,
Are there any other solutions for this type of index design?
"_index" : "business_objects",
"_type" : "_doc",
"_id" : "8579338-3",
"_score" : 1.0,
"_source" : {
"business_object_no" : "8579338-3",
"company" : {
"id" : "148",
"call_sign" : “NAME”
},
"link" : [
{
"no" : "5793328-03-01",
"doc_type_1" : [
{
"date" : "20220301000000",
"filename" : "test1.pdf",
"official_number" : "2022000000013",
"mime_type" : "application/pdf",
"id" : "5009467F-0000-C91C-B281-BBBBZA",
"type" : "doc_type_1",
"business_number" : "22000766"
}
],
"doc_type_2" : [
{
"date" : "20220301000000",
"filename" : "test2.pdf",
"official_number" : "RE02022000000013",
"mime_type" : "text/xml",
"id" : "800D467F-0000-C314-88D3-AAAAZE",
"type" : "doc_type_2"
}
]
}
}
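For comparison, the generic variant we are considering would look roughly like this (the field name docs is just an illustration, and each object would carry the union of all doc_type fields):

"link" : [
  {
    "no" : "5793328-03-01",
    "docs" : [
      {
        "type" : "doc_type_1",
        "date" : "20220301000000",
        "filename" : "test1.pdf",
        "official_number" : "2022000000013",
        "mime_type" : "application/pdf",
        "id" : "5009467F-0000-C91C-B281-BBBBZA",
        "business_number" : "22000766"
      },
      {
        "type" : "doc_type_2",
        "date" : "20220301000000",
        "filename" : "test2.pdf",
        "official_number" : "RE02022000000013",
        "mime_type" : "text/xml",
        "id" : "800D467F-0000-C314-88D3-AAAAZE"
      }
    ]
  }
]

The mapping then only grows with the union of fields across all types, but the type information moves from the field path into a value, which is exactly why the path-based filtering above stops working.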