Is it possible to ignore documents that exceed index.mapping.nested_objects.limit

Hi,

We have a transform that writes into a destination index that includes a nested field.
The transform failed because of the index.mapping.nested_objects.limit setting.
Is there a way to skip processing/writing/updating docs that exceed this limit, to prevent this failure from happening again?

Failed to index documents into destination index due to permanent error: [BulkIndexingException[Bulk index experienced [1] failures and at least 1 irrecoverable [TransformException[Destination index mappings are incompatible with the transform configuration.]; nested: MapperParsingException[The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.];; org.elasticsearch.index.mapper.MapperParsingException: The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]. ]; nested: TransformException[Destination index mappings are incompatible with the transform configuration.]; nested: MapperParsingException[The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.];; TransformException[Destination index mappings are incompatible with the transform configuration.]; nested: MapperParsingException[The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.];; org.elasticsearch.index.mapper.MapperParsingException: The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]
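(The error message itself points to one workaround: raising the limit on the destination index. A sketch, where the index name is a placeholder for your actual destination index:)

```
PUT /my-dest-index/_settings
{
  "index.mapping.nested_objects.limit": 20000
}
```

Note that raising the limit only postpones the problem for documents that keep growing, and very large nested arrays have a memory cost at index and query time.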

thanks

We advise letting the transform write into a dedicated destination index. It seems to me that your problem originates from writing into a shared index. Does the transform write those nested docs?

You might be able to drop the problematic documents using an ingest pipeline attached to the transform's output, e.g. with a drop processor, if you know which documents are problematic.
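A sketch of that approach, assuming the nested field is called `events` and the pipeline name is a placeholder. The drop processor's condition here approximates the nested-object count by checking the size of the top-level array:

```
PUT _ingest/pipeline/drop-large-nested
{
  "processors": [
    {
      "drop": {
        "if": "ctx.events != null && ctx.events.size() > 10000"
      }
    }
  ]
}
```

Then reference the pipeline in the transform's dest section so every document the transform emits passes through it:

```
"dest": {
  "index": "my-dest-index",
  "pipeline": "drop-large-nested"
}
```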

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.