Performance penalty of many multi-fields with chains of normalizers

Hi,
I wanted to tidy up one index where several fields were created in the application just for specific searches. The content of those fields is normalized (some text removed, some replaced), so the application did the normalization for all the search fields in advance, and ES only stored and analyzed the text using a predefined ngram analyzer.

I wanted to use multi-fields to tidy this up: one field now has 7 subfields, each tailored to a specific task (sorting, searching, searching by regex, case-insensitive searches, etc.).

I've been able to transform the text using a mixture of normalizers and char_filters. In some cases I've ended up with an array of 4 char_filters, all using regex (not too complicated).
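To give an idea of the setup, here is a simplified sketch of the analysis settings. The char_filter names, the regexes, and the ngram analyzer are placeholders, not my real configuration, but the structure is the same: a custom normalizer chaining several pattern_replace char_filters, plus a lowercase filter.

```
PUT my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "strip_prefix": {
          "type": "pattern_replace",
          "pattern": "^(foo|bar)\\s+",
          "replacement": ""
        },
        "drop_punctuation": {
          "type": "pattern_replace",
          "pattern": "[.,;:]",
          "replacement": ""
        },
        "dash_to_space": {
          "type": "pattern_replace",
          "pattern": "-",
          "replacement": " "
        },
        "collapse_whitespace": {
          "type": "pattern_replace",
          "pattern": "\\s+",
          "replacement": " "
        }
      },
      "normalizer": {
        "clean_lowercase": {
          "type": "custom",
          "char_filter": ["strip_prefix", "drop_punctuation", "dash_to_space", "collapse_whitespace"],
          "filter": ["lowercase"]
        }
      },
      "tokenizer": {
        "my_ngram": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3
        }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```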

I have 4 fields with multi-fields; each has 6-7 subfields, and most of them are normalized by a chain of 4 regex char_filters in the normalizer.
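The mapping looks roughly like this, showing one of the fields and a few of its keyword subfields (again, the field and subfield names are simplified placeholders; the real mapping has 4 such fields with 6-7 subfields each):

```
PUT my_index/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "analyzer": "my_ngram_analyzer",
      "fields": {
        "sort": {
          "type": "keyword",
          "normalizer": "clean_lowercase"
        },
        "regex": {
          "type": "keyword",
          "normalizer": "clean_lowercase"
        },
        "ci": {
          "type": "keyword",
          "normalizer": "clean_lowercase"
        }
      }
    }
  }
}
```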

The obvious downside is that I can't normalize the text once and reuse it for all the subfields; ES runs the normalization for each subfield separately. Still, I wasn't expecting a huge penalty.

When this was deployed, the load on ES increased a lot, the write queue filled up quickly, and indexing time increased.

Is this an overuse of multi-fields? Any advice?

I suppose the best option is to hand control over the normalization back to the application.

Thanks a lot for any pointers!