Filebeat: Fortinet Module not processing data as it should

I think I have a problem similar to the one in this post, but I can't solve it with the solution given there.

Here is my filebeat.yml:

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 60s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: xxx

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["xxx"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "https://xxx"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xxx"]

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "filebeat_writer"
  password: "xxx"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
#  hosts: ["xxx"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]

  # Certificate for SSL client authentication
#  ssl.certificate: "/etc/logstash/certs/xxx.crt"

  # Client Certificate Key
#  ssl.key: "/etc/logstash/certs/xxx.key"

# ================================= Processors =================================

And here is my fortinet.yml:

- module: fortinet
  firewall:
    enabled: true

    # Set which input to use between tcp, udp (default) or file.
    #var.input: udp

    # The interface to listen to syslog traffic. Defaults to
    # localhost. Set to 0.0.0.0 to bind to all available interfaces.
    var.syslog_host: xxx

    # The port to listen for syslog traffic. Defaults to 9004.
    var.syslog_port: xxx

    # Set internal interfaces. used to override parsed network.direction
    # based on a tagged interface. Both internal and external interfaces must be
    # set to leverage this functionality.
    #var.internal_interfaces: [ "LAN" ]

    # Set external interfaces. used to override parsed network.direction
    # based on a tagged interface. Both internal and external interfaces must be
    # set to leverage this functionality.
    #var.external_interfaces: [ "WAN" ]

This is the problem:

What do I have to do for Filebeat to parse the fields correctly?
I currently receive syslog from 2 FortiGate firewalls.
I also tried to use Logstash, but when I added the second firewall I had trouble parsing the fields. So now I'm trying to use the Fortinet module.

Did you run `filebeat setup` to set up the ingest pipeline? Can you post the logs from Filebeat?


Thank you for your quick reply!

Yes, I did, using the setup user.
Here are the logs:

2021-05-04T13:20:40.323-0300	INFO	instance/beat.go:660	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-05-04T13:20:40.324-0300	INFO	instance/beat.go:668	Beat ID: d6830ea7-51e6-4131-9b6e-c09d1f7dfb4f
2021-05-04T13:20:40.328-0300	INFO	[beat]	instance/beat.go:996	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "d6830ea7-51e6-4131-9b6e-c09d1f7dfb4f"}}}
2021-05-04T13:20:40.328-0300	INFO	[beat]	instance/beat.go:1005	Build info	{"system_info": {"build": {"commit": "08e20483a651ea5ad60115f68ff0e53e6360573a", "libbeat": "7.12.0", "time": "2021-03-18T06:16:51.000Z", "version": "7.12.0"}}}
2021-05-04T13:20:40.328-0300	INFO	[beat]	instance/beat.go:1008	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.15.8"}}}
2021-05-04T13:20:40.329-0300	INFO	[beat]	instance/beat.go:1012	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-04-08T11:50:15-03:00","containerized":false,"name":"xxx","ip":["127.0.0.1/8","::1/128","xxx/24","xxx/64"],"kernel_version":"3.10.0-1160.21.1.el7.x86_64","mac":["00:15:5d:64:4a:90"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":9,"patch":2009,"codename":"Core"},"timezone":"-03","timezone_offset_sec":-10800,"id":"82b3bf2b70734010a21dd9xxx7ecada4b"}}}
2021-05-04T13:20:40.330-0300	INFO	[beat]	instance/beat.go:1041	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/home/yyy/elastic-agent/elastic-agent-7.12.1-linux-x86_64", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 48948, "ppid": 16876, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-05-04T13:20:39.490-0300"}}}
2021-05-04T13:20:40.330-0300	INFO	instance/beat.go:304	Setup Beat: filebeat; Version: 7.12.0
2021-05-04T13:20:40.330-0300	INFO	[index-management]	idxmgmt/std.go:184	Set output.elasticsearch.index to 'filebeat-7.12.0' as ILM is enabled.
2021-05-04T13:20:40.330-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:40.331-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.331-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:40.331-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.331-0300	INFO	[publisher]	pipeline/module.go:113	Beat name: filebeat-logserver
2021-05-04T13:20:40.359-0300	INFO	beater/filebeat.go:117	Enabled modules/filesets: fortinet (clientendpoint, firewall, fortimail, fortimanager)
2021-05-04T13:20:40.387-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:40.387-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.387-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:40.387-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.491-0300	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.12.0
2021-05-04T13:20:40.493-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:40.493-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.493-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:40.493-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:40.506-0300	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.12.0
2021-05-04T13:20:40.674-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-pipeline"}
2021-05-04T13:20:40.796-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-event"}
2021-05-04T13:20:40.916-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-utm"}
2021-05-04T13:20:41.143-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-traffic"}
2021-05-04T13:20:41.144-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:41.144-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:41.144-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:41.144-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:41.167-0300	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.12.0
2021-05-04T13:20:41.280-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-log-pipeline"}
2021-05-04T13:20:41.420-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-log-pipeline-plaintext"}
2021-05-04T13:20:41.516-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-log-pipeline-json"}
2021-05-04T13:20:41.657-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-slowlog-pipeline"}
2021-05-04T13:20:41.779-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-slowlog-pipeline-plaintext"}
2021-05-04T13:20:41.891-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-logstash-slowlog-pipeline-json"}
2021-05-04T13:20:41.892-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:41.892-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:41.892-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:41.892-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:41.915-0300	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.12.0
2021-05-04T13:20:42.087-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-system-auth-pipeline"}
2021-05-04T13:20:42.205-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-system-syslog-pipeline"}
2021-05-04T13:20:42.208-0300	WARN	[cfgwarn]	tlscommon/config.go:101	DEPRECATED: Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are present is going to be removed. Please update your certificates if needed. Will be removed in version: 8.0.0
2021-05-04T13:20:42.208-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:42.208-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxx
2021-05-04T13:20:42.208-0300	INFO	eslegclient/connection.go:99	elasticsearch url: https://xxxxxx
2021-05-04T13:20:42.223-0300	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.12.0
2021-05-04T13:20:42.359-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-threatintel-abusemalware-pipeline"}
2021-05-04T13:20:42.490-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-threatintel-abuseurl-pipeline"}
2021-05-04T13:20:42.599-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-threatintel-anomali-pipeline"}
2021-05-04T13:20:42.752-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-threatintel-misp-pipeline"}
2021-05-04T13:20:42.866-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-threatintel-otx-pipeline"}
2021-05-04T13:20:42.866-0300	INFO	cfgfile/reload.go:262	Loading of config files completed.
2021-05-04T13:20:42.866-0300	INFO	[load]	cfgfile/list.go:129	Stopping 4 runners ...
2021-05-04T13:20:43.327-0300	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:101	add_cloud_metadata: hosting provider type not detected.
2021-05-04T13:20:47.925-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-clientendpoint-pipeline"}
2021-05-04T13:20:48.098-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-pipeline"}
2021-05-04T13:20:48.227-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-event"}
2021-05-04T13:20:48.364-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-utm"}
2021-05-04T13:20:48.493-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-firewall-traffic"}
2021-05-04T13:20:49.205-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-fortimail-pipeline"}
2021-05-04T13:20:49.782-0300	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.12.0-fortinet-fortimanager-pipeline"}

So that's weird. It's definitely using the pipeline, as that's where the syslog5424_sd value is set, but it's not running the next processor to split the key-value pairs.


Can this error be caused by the 'csv on' option in the firewall's syslog settings?
I will try that solution again. I think I forgot something on my last try...

Ohh yeah, I didn't see the commas. The processor is set up for space-delimited input.

- kv:
    field: syslog5424_sd
    field_split: " (?=[a-z\\_\\-]+=)"
    value_split: "="
    prefix: "fortinet.tmp."
    ignore_missing: true
    ignore_failure: false
    trim_value: "\""
<188>date=2020-04-23 time=12:17:48 devname="testswitch1" devid="somerouterid" logid="0316013056" type="utm" subtype="webfilter" eventtype="ftgd_blk" level="warning" vd="root" eventtime=1587230269052907555 tz="-0500" policyid=100602 sessionid=1234 user="elasticuser" group="elasticgroup" authserver="elasticauth" srcip=192.168.2.1 srcport=61930 srcintf="port1" srcintfrole="lan" dstip=8.8.8.8 dstport=443 dstintf="wan1" dstintfrole="wan" proto=6 service="HTTPS" hostname="elastic.co" profile="elasticruleset" action="blocked" reqtype="direct" url="/config/" sentbyte=1152 rcvdbyte=1130 direction="outgoing" msg="URL belongs to a denied category in policy" method="domain" cat=76 catdesc="Internet Telephony"
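For what it's worth, if the firewall had to stay on comma-delimited output, a custom copy of that pipeline could in principle move the lookahead onto the comma instead. This is an untested sketch (the stock module pipeline only ships with the space-delimited split shown above):

```yaml
- kv:
    field: syslog5424_sd
    # Assumption: split on a comma only when it is followed by a new key,
    # so commas inside quoted values are left alone.
    field_split: ",(?=[a-z\\_\\-]+=)"
    value_split: "="
    prefix: "fortinet.tmp."
    ignore_missing: true
    ignore_failure: false
    trim_value: "\""
```

That said, switching the firewall back to the default (space-delimited) format is the cleaner fix, since the module's pipelines are maintained against that format.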

Sorry for my ignorance, I just started with the ELK stack.
Should I use the kv filter you posted in the Logstash .conf, or in the Filebeat fortinet.yml module?
And should I use the same prefix?

It's built into the module, so as long as your logs from Fortinet are in the format shown above, the module and the pre-made ingest pipelines loaded into ES will do all the work.


Oh, right. So I must disable csv on, correct?

I don't know the inner workings of Fortinet well enough to say which setting provides the right format, but it sounds like a good start. It looks like it should be set to default: Fortinet module | Filebeat Reference [7.12] | Elastic
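Something along these lines on the FortiGate CLI should switch it back (a sketch; the exact commands may vary by FortiOS version, so treat this as a pointer rather than gospel):

```text
config log syslogd setting
    set format default
end
```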


Thanks again for your willingness to help!
I do not have write access on the firewall; I will ask my colleague to enable the default log configuration and test again.

np, let me know if that works.

It's still not working properly :confused:
The best result I have is working directly with Logstash and the following conf (which I took from this topic):

input {
	udp {
		port => 55514
		type => "forti_log"
		tags => ["FortiGateFW"]
	}
}

filter {
	#The Fortigate syslog contains a type field as well, we'll need to rename that field in order for this to work
	if [type] == "forti_log" {

		grok {
			match => ["message", "%{SYSLOG5424PRI:syslog_index}%{GREEDYDATA:message}"]
			overwrite => [ "message" ]
			tag_on_failure => [ "forti_grok_failure" ]
		}

    	kv {
			source => "message"
			value_split => "="
			#Expects you have csv enable set on your Fortigate. If not I think you'll have to change it to " " but I didn't test that.
			field_split => ","
		}

		mutate {
			#I want to use the timestamp inside the logs instead of Logstash's timestamp so we'll first create a new field containing the date and time fields from the syslog before we convert that to the @timestamp field
			add_field => { "temp_time" => "%{date} %{time}" }
			#The syslog contains a type field which messes with the Logstash type field so we have to rename it.
			rename => { "type" => "ftg_type" }
			rename => { "subtype" => "ftg_subtype" }
			add_field => { "type" => "forti_log" }
			convert => { "rcvdbyte" => "integer" }
			convert => { "sentbyte" => "integer" }
		}

		date {
			match => [ "temp_time", "yyyy-MM-dd HH:mm:ss" ]
			timezone => "UTC"
			target => "@timestamp"
		}

		mutate {
			#add/remove fields as you see fit.
			remove_field => ["syslog_index","syslog5424_pri","path","temp_time","service","date","time","sentpkt","rcvdpkt","log_id","message","poluuid"]
		}
	}
}

output {
	stdout { codec => rubydebug }
	if [type] == "forti_log" {
		elasticsearch {
			hosts => ["https://xxx"]
			http_compression => "true"
			index => "forti-%{+YYYY.MM.dd}"
			user => "logstash_writer"
			password => "xxx"
			cacert => "/etc/logstash/certs/ca.crt"
		}
	}
}

But when a field has a comma in the middle, it gets split incorrectly, as you can see in the image below:
[screenshot: Logstash output showing a value broken across fields at the comma]

That way my dashboards don't have 100% accurate information. But it is the best I have achieved so far...

Can you tell me the difference between using the Filebeat + Fortinet module and using the Elastic Agent with the Fortinet integration?

I tried to use the Elastic Agent, but it gave some errors and I ended up giving up. From what I read, it's mostly a way to centralize Beats management, so I don't think it would make much difference compared to plain Filebeat.

If I find some way to solve this problem (fields being split on commas where they shouldn't be), I will continue to parse directly with Logstash...

I'm looking at the kv filter options and I think the problem is solved when I use `trim_value => ",\""` to remove `,` and `"` from the values.
I'll also try `remove_char_value => ",\""` to see the difference in the indexed fields.
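Something else I'm going to try: the kv filter also has a `field_split_pattern` option that takes a regex, so the split can match only commas that start a new key instead of every comma. A sketch of a drop-in replacement for the kv block above (untested against my logs):

```text
kv {
    source => "message"
    value_split => "="
    # Assumption: split only on a comma that is followed by a new key=,
    # so commas inside quoted values are preserved.
    field_split_pattern => ",(?=[a-z_\-]+=)"
    # Strip surrounding double quotes from values.
    trim_value => "\""
}
```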

The Agent and Integrations are the future of Beats and the modules. The end result of the data should be mostly the same, but the Agent and Integrations are in beta, so they may not always be 100% there at the moment.