How to send JSON logs to Elasticsearch using Filebeat without extra fields

Hi there,

I am trying to send JSON logs to Elasticsearch using Filebeat. My log file looks like this:

{"timestamp":1581386084780,"message":"User 'Test' connected","eventId":107,"metadata":{"userID":"Test","serviceID":"instance-6"}}

I am using Filebeat to read this log and send it to Elasticsearch. Below is the Filebeat config:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:/Users/Ashish/Downloads/logs/test/audit.log
  json.keys_under_root: true
  json.add_error_key: true

setup.template.name: "auditbeatasdsadsa-%{[beat.version]}"
setup.template.pattern: "auditbeatdsadsa-%{[beat.version]}-*"
setup.ilm.overwrite: true
setup.ilm.enabled: auto
setup.ilm.rollover_alias: "auditbeatasdsad-%{[beat.version]}"
setup.ilm.pattern: "{now/M{yyyy.MM}}-000008"

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  template.name: filebeat
  template.path: filebeat.template.json
```

I am able to process logs with this configuration, but when I view this data in Kibana there are so many extra fields that are automatically generated by Filebeat. Is there any way we can control these extra fields?
```json
{
  "_index": "filebeat-7.8.0-2020.07.05-000001",
  "_type": "_doc",
  "_id": "oR0CIHMB6CBCVUUG-yDL",
  "_version": 1,
  "_score": 1,
  "_source": {
    "@timestamp": "2020-07-05T17:25:31.415Z",
    "timestamp": 1581411268592,
    "message": "Performing search for http traffic information over a 120h interval",
    "metadata": {
      "serviceID": "instance-4",
      "userID": "monitor"
    },
    "host": {
      "name": "LAPTOP-I0BND0BP"
    },
    "agent": {
      "ephemeral_id": "22074664-b74e-4b70-b6dc-8d68d6212953",
      "id": "91f3c264-5bbf-4aed-b63d-d092ca6f3f4b",
      "name": "LAPTOP-I0BND0BP",
      "type": "filebeat",
      "version": "7.8.0",
      "hostname": "LAPTOP-I0BND0BP"
    },
    "log": {
      "offset": 8190,
      "file": {
        "path": "C:\\Users\\Ashish\\Downloads\\logs\\test\\audit.log"
      }
    },
    "eventId": 9,
    "input": {
      "type": "log"
    },
    "ecs": {
      "version": "1.5.0"
    }
  },
  "fields": {
    "cef.extensions.flexDate1": ,
    "netflow.flow_end_microseconds": ,
    "netflow.system_init_time_milliseconds": ,
    "netflow.flow_end_nanoseconds": ,
    "misp.observed_data.last_observed": ,
    "netflow.max_flow_end_microseconds": ,
    "file.mtime": ,
    "aws.cloudtrail.user_identity.session_context.creation_date": ,
    "netflow.min_flow_start_seconds": ,
    "misp.intrusion_set.first_seen": ,
    "file.created": ,
    "misp.threat_indicator.valid_from": ,
    "process.parent.start": ,
    "azure.auditlogs.properties.activity_datetime": ,
    "crowdstrike.event.ProcessStartTime": ,
    "zeek.ocsp.update.this": ,
    "crowdstrike.event.IncidentStartTime": ,
    "netflow.observation_time_microseconds": ,
    "event.start": ,
    "cef.extensions.agentReceiptTime": ,
    "cef.extensions.oldFileModificationTime": ,
    "checkpoint.subs_exp": ,
    "event.end": ,
    "netflow.max_flow_end_milliseconds": ,
    "netflow.min_flow_start_nanoseconds": ,
    "zeek.smb_files.times.changed": ,
    "crowdstrike.event.StartTimestamp": ,
    "netflow.flow_start_nanoseconds": ,
    "netflow.flow_start_seconds": ,
    "crowdstrike.event.ProcessEndTime": ,
    "zeek.x509.certificate.valid.until": ,
    "misp.observed_data.first_observed": ,
    "netflow.exporter.timestamp": ,
    "netflow.monitoring_interval_start_milli_seconds": ,
    "cef.extensions.oldFileCreateTime": ,
    "event.ingested": ,
    "@timestamp": [
      "2020-07-05T17:25:31.415Z"
    ],
    "zeek.ocsp.update.next": ,
    "crowdstrike.event.UTCTimestamp": ,
    "tls.server.not_before": ,
    "cef.extensions.startTime": ,
    "netflow.min_flow_start_milliseconds": ,
    "azure.signinlogs.properties.created_at": ,
    "cef.extensions.endTime": ,
    "suricata.eve.tls.notbefore": ,
    "zeek.kerberos.valid.from": ,
    "cef.extensions.fileCreateTime": ,
    "misp.threat_indicator.valid_until": ,
    "crowdstrike.event.EndTimestamp": ,
    "misp.campaign.last_seen": ,
    "cef.extensions.deviceReceiptTime": ,
    "netflow.observation_time_seconds": ,
    "crowdstrike.metadata.eventCreationTime": ,
    "cef.extensions.fileModificationTime": ,
    "tls.client.not_before": ,
    "zeek.smb_files.times.created": ,
    "zeek.smtp.date": ,
    "netflow.collection_time_milliseconds": ,
    "zeek.pe.compile_time": ,
    "netflow.max_flow_end_seconds": ,
    "tls.client.not_after": ,
    "netflow.flow_start_milliseconds": ,
    "event.created": ,
    "package.installed": ,
    "zeek.kerberos.valid.until": ,
    "suricata.eve.flow.end": ,
    "netflow.observation_time_milliseconds": ,
    "netflow.flow_start_microseconds": ,
    "tls.server.not_after": ,
    "netflow.flow_end_seconds": ,
    "process.start": ,
    "suricata.eve.tls.notafter": ,
    "zeek.snmp.up_since": ,
    "azure.enqueued_time": ,
    "netflow.max_flow_end_nanoseconds": ,
    "misp.intrusion_set.last_seen": ,
    "netflow.min_flow_start_microseconds": ,
    "netflow.observation_time_nanoseconds": ,
    "cef.extensions.managerReceiptTime": ,
    "file.accessed": ,
    "netflow.flow_end_milliseconds": ,
    "misp.campaign.first_seen": ,
    "netflow.min_export_seconds": ,
    "suricata.eve.flow.start": ,
    "suricata.eve.timestamp": [
      "2020-07-05T17:25:31.415Z"
    ],
    "cef.extensions.deviceCustomDate1": ,
    "cef.extensions.deviceCustomDate2": ,
    "netflow.monitoring_interval_end_milli_seconds": ,
    "file.ctime": ,
    "crowdstrike.event.IncidentEndTime": ,
    "zeek.smb_files.times.accessed": ,
    "zeek.ocsp.revoke.time": ,
    "zeek.x509.certificate.valid.from": ,
    "netflow.max_export_seconds": ,
    "zeek.smb_files.times.modified": ,
    "kafka.block_timestamp": ,
    "misp.report.published":
  }
}
```
How can I remove these extra fields in the `fields` tag, so that I can have only the required fields?

Could you please format your configuration using the </> button?

    filebeat.inputs:
    - type: log
      paths:
       -  C:/Users/Ashish/Downloads/Basefarm_audit_logs/Basefarm_audit_logs/OAG/audit.log
      json.keys_under_root: true
      json.message_key: event
      json.add_error_key: true
       
    output.elasticsearch:
      hosts: ["http://localhost:9200"]
    filebeat.inputs:

    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.

    - type: log

      # Change to true to enable this input configuration.
      paths:
        - C:/Users/Ashish/Downloads/Basefarm_audit_logs/Basefarm_audit_logs/OAG/audit.log

      json.keys_under_root: true
      json.message_key: event
      json.add_error_key: true

    output.elasticsearch:
      hosts: ["http://localhost:9200"]

You should use the drop_fields processor to remove unwanted fields.

Please search for drop_fields in https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html
You will probably want to add something like the snippet below:

    #================================ Processors =====================================

    # Configure processors to enhance or manipulate events generated by the beat.

    processors:
      - drop_fields:
          fields: ["host.name", "ecs.version", "agent.version", "agent.type", "agent.id", "agent.ephemeral_id", "agent.hostname", "input.type"]
    #  - add_host_metadata: ~
    #  - add_cloud_metadata: ~
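
If you would rather whitelist the fields you want to keep instead of enumerating everything to drop, Filebeat also provides an `include_fields` processor. A minimal sketch, assuming the field names from the log format in the original post (`@timestamp` is always kept regardless of the list):

```yaml
processors:
  - include_fields:
      # Keep only the fields from the original JSON log line;
      # everything else Filebeat adds is dropped from the event.
      fields: ["timestamp", "message", "eventId", "metadata"]
```

Note that the empty entries under the `fields` tag in your Kibana output are mapped date fields from the default Filebeat index template with no values in the document, so they are a display artifact rather than data stored in each event.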

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.