Deduplication in beats while sending logs to Logstash or Elasticsearch

Hi,

I am sure this point has been considered already, but since I could not find any reference to it, I am asking here.
When various Beats send log data to Logstash/Elasticsearch, the values of most fields from a Beat running on a given server are the same across all the log entries it sends. For example, here is a small part of a JSON document produced by Filebeat:

"agent": {
  "hostname": "ip-172-31-15-75",
  "id": "e216e8d1-ab64-4fe2-9337-2f3517814da9",
  "ephemeral_id": "7a9604ba-2192-496d-bd07-3ac1fc3cfb67",
  "type": "filebeat",
  "version": "7.4.0"
},
"user_agent": {
  "original": "Go-http-client/1.1",
  "name": "Other",
  "device": {
    "name": "Other"
  }
},
"fileset": {
  "name": "access"
},
"url": {
  "original": "/server-status/?auto"
},

As anyone can observe, these fields and their values are the same across the log entries continuously sent by Filebeat. Is there any current feature, or planned development, to deduplicate such repeated data before it is sent? This would save a lot of bandwidth as well as storage space; the target (Logstash/Elasticsearch) could expand the data back out when displaying it.
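For context, the closest thing I have found is the drop_fields processor, which removes fields outright rather than deduplicating them, so it only helps for fields you never need downstream. A sketch of what I mean, in filebeat.yml (the field paths are just the ones from my sample document above):

```yaml
# filebeat.yml -- drop_fields removes the listed fields from every event
# before it is shipped. This is NOT deduplication: the fields that remain
# are still repeated verbatim in every log entry.
processors:
  - drop_fields:
      fields: ["agent.ephemeral_id", "user_agent.original"]
```

What I am asking about is different: keeping such fields, but transmitting and storing their repeated values only once.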

Thanks,
nindate