Field data type is 'ip' but fielddata is still loaded during query

Hello everyone,
I have data stored in Elasticsearch. My documents have multiple fields, including source and destination fields with datatype "ip". Please find below the relevant part of the output of

GET index-name/_mapping/

Sorry, I can't share the full mapping because the field list is very lengthy.

{
  "source": {
    "type": "ip"
  },
  "destination": {
    "type": "ip"
  }
}

Now, in my Elasticsearch cluster, I am frequently hitting a circuit breaker exception:

Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [indices:data/write/bulk[s]] would be [32045611070/29.8gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32045606600/29.8gb], new bytes reserved: [4470/4.3kb], usages [fielddata=15790595258/14.7gb, request=36864/36kb, inflight_requests=488696/477.2kb, model_inference=0/0b, eql_sequence=0/0b]
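Per-node breaker usage can also be checked with the node stats API:

GET /_nodes/stats/breaker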

The usages in the error show fielddata at around 14.7gb, but I haven't set fielddata to true anywhere in my mapping.
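I understand the fielddata cache can be dropped temporarily with the clear cache API (shown below for all indices), but that wouldn't explain why it is being built in the first place:

POST /_cache/clear?fielddata=true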

When I run the command below,

GET /_cat/fielddata?v=true

I see that the destination field alone accounts for 5.5gb.
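The same endpoint also accepts a fields filter, which makes it easier to watch just the suspect fields:

GET /_cat/fielddata?v=true&fields=source,destination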

Can someone please help me understand what the issue could be here?
If any more information is required, please tell me.

Are you sure that a) there are no other indices with a field called destination, and b) the mapping type for all fields called destination is "ip", across all indices in the cluster?
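You can verify both in a single request with the field mapping API, which returns the mapping of that one field for every index that contains it:

GET /*/_mapping/field/destination

Any index where the type comes back as something other than "ip" (for example a text field with fielddata enabled) would be worth a closer look.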

That said, the error:

Data too large, data for [indices:data/write/bulk[s]] would be [32045611070/29.8gb],

What were you or your team trying to do at the time that specific error was generated?

Hello Sir,
a) I have multiple indices matching the spark--* pattern, and both the source and destination fields are present in all of them.
b) Yes, I have checked the mapping in each of them.

In our cluster, data ingestion is continuous, and at the time of the error an aggregation query on the destination field was being executed.
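For illustration, the aggregation was roughly of this shape (the terms aggregation and the size value here are just an example, not our exact query):

GET spark--*/_search
{
  "size": 0,
  "aggs": {
    "by_destination": {
      "terms": {
        "field": "destination",
        "size": 100
      }
    }
  }
}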