Firewall logs to different Datastream by type

I am using the integration “Fortinet FortiGate Firewall Logs”.

So far it is working correctly, but I have been asked to ingest the UTM type logs into a different data stream, and I have no idea how to do this using the integration.

I would appreciate any suggestion you can give me.

This is the configuration I currently have

Hi @juancamiloll What type of logs are you ingesting today? Event, traffic, login?
The integration should figure out the different types.

If the UTM logs are coming in on a different listen address/port etc., just create another integration and put in the correct listen address and port.

Go to the policy LogstashPolicy, add a new integration, and put in the correct listener address and port.

I would name the integrations with good names like

fortinet_fortigate_event
fortinet_fortigate_utm

Hope that helps...


@stephenb

Hello,

Thanks for replying. The logs I receive are traffic, UTM, and event.

They all come in on TCP port 51403; I have no way to send a certain type of log to a different port.

In this case I am not using a Logstash .conf; I am ingesting with the help of the integration, so I have no idea how to split them.

I am hoping that some advanced configuration of the integration will allow me to do this.

The Integration can figure that all out... they can all come on the same port

When the data comes in... the initial ingest pipeline will figure it all out... and parse all the data correctly ...

If you are saying that your "customers" want the data in a different data stream, I would ask why... as that could affect the capabilities.

So a key question is WHY they want a different data stream and what they are trying to accomplish; this integration is set up this way on purpose. What they are trying to accomplish can affect how you implement it (and yes, there will be a little work needed).
Do they want to separate for access control, different ILM policies, etc.? ... or just because it "seems" like a good idea because data streams are new to them...

And yes, with some work you can change the namespace (which is really NOT the intention), but that would put the different types in different but related data streams.

There should be fields that will provide easy filtering and sorting etc.

So find out the why / what they are trying to accomplish. You will probably need to create a custom ingest pipeline to do some sorting / routing to a different namespace based on some fields... not trivial... but not too hard.

In the meantime I would probably read about Elastic Agent and data streams


It is not possible at the moment. I have the same requirement and opened an issue about it a couple of months ago:


Well now that I know @leandrojmp has requested it... it is certainly legit! :slight_smile:

Did you have some code to share / solve / route?

Thank you both for your help.

So the only option would be using a Logstash .conf and, according to the documentation, using a "reroute" processor?

No, it is not that simple; it depends on a couple of things, and the reroute processor may not work, as it is blocked in the majority of the integrations, as mentioned in the linked GitHub issue.

How are you sending data? Is your Fortigate sending data directly to the Elastic Agent?

@leandrojmp

Did you test reroute with just changing the namespace? I think that would work...

I don't think that's the best solution but I think that could work..

Yes, I understand it's really not the intention of namespace.

And are we sure that reroute will not work with the dataset name?

Seems like perhaps we have not tested for this integration

I do use reroute like you said on some of the more generic integrations...

I don't have any way to test because I don't have a harness for this.

You have the right people in that issue looking at it

As far as I know the reroute processor is blocked in the integrations because of how the permissions for the API Key used by the Elastic Agent work; this is mentioned in the linked GitHub issue.

If you go into a Fleet Policy and in the Actions Menu select the option to View the Policy, you will have something like this for each integration:

output_permissions:
  97aad6d8-abc9-497c-8bf2-cb368540a96c:
    _elastic_agent_checks:
      cluster:
        - monitor
    67f570f1-e169-4731-a4bd-51a1e1817beb:
      indices:
        - names:
            - logs-system.application-servers
          privileges:
            - auto_configure
            - create_doc
        - names:
            - logs-system.security-servers
          privileges:
            - auto_configure
            - create_doc
        - names:
            - logs-system.system-servers
          privileges:
            - auto_configure
            - create_doc

In this case this is a System integration and the namespace is called servers. If this is the only integration in the policy, these are the permissions that the agent will have. Everything is tied to the namespace, so a reroute processor changing the namespace would fail because the API Key being used does not have permission to write into the new target.

This applies to all integrations with the exception of some with wildcard permissions, like AWS Custom Logs or Custom Kafka Logs (and some others). Those integrations have permissions on logs-*-*, so they can use reroute to change the dataset and namespace; that's how we can have logs from AWS EKS being parsed by the Kubernetes integration, for example.
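For reference, on an agent whose API Key does have those wildcard permissions, the reroute processor can also take the target dataset and namespace directly as options. A minimal sketch (the pipeline name, dataset, and namespace below are made-up examples, not the Fortinet integration's real pipeline):

PUT _ingest/pipeline/logs-my_custom_source@custom
{
  "processors": [
    {
      "reroute": {
        "dataset": "my_source.utm",
        "namespace": "production"
      }
    }
  ]
}

Documents passing through this pipeline would be written to logs-my_source.utm-production instead of the original data stream, provided the agent's API Key is allowed to write there.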

So, in reality, it is not that it is impossible for the user to create a custom data stream with a new dataset name for the Fortinet integration; it just depends on which other integrations are present in the policy. This is more of an unofficial workaround.

@leandrojmp Agree... And the right people are looking at it

BUT I think you CAN change the namespace (which again is not really the right solution) otherwise you could not do this ....


I have not kept up with all of what is blocked / how etc... The last deep dive I did, last year, was based on API keys; sounds like that has / may have changed...

Those system integration / indices I think are a bit different....

Well @juancamiloll looks like you are stuck for a while :frowning:

I would put your comments in the Issue @leandrojmp linked above

You can change the namespace in the configuration of the integration, but you cannot use a reroute processor in a custom ingest pipeline to change it.

The permissions would be created after the integration is saved/created.

If you create the integration and set the namespace as prod, you cannot have a reroute processor in the custom pipeline changing the namespace to production, for example; you would need to edit the integration and change prod to production, so a new API Key is created with the right permissions.

The main issue here is that it is pretty common to have different requirements regarding data retention for firewall logs; you may be required to keep the system logs, which are the logs generated by the firewall device itself, for a longer time than the traffic logs.

Currently, with a policy containing just the Fortinet integration, this is not possible because everything is stored in the same data stream and there is no option to split it into different data streams.

It can be done, but it requires a bit of work and some workarounds, none of which are documented or official.

I'm planning to work on this next week, I can share my workaround on the Github issue until the integration is updated to have multiple data sets.

I just did ... right this minute :slight_smile:

Elastic 8.17.4
Fortinet FortiGate Firewall Logs 1.31.0

This is my ingest pipeline

GET _ingest/pipeline/logs-fortinet_fortigate.log@custom
{
  "logs-fortinet_fortigate.log@custom": {
    "processors": [
      {
        "set": {
          "field": "data_stream.namespace",
          "value": "newnamespace"
        }
      },
      {
        "reroute": {}
      }
    ]
  }
}

Before / After

Before Pipeline

{
  "_index": ".ds-logs-fortinet_fortigate.log-default-2025.04.16-000001",
  "_id": "VTBpQJYBRkty1fMz3h7y",
  "_version": 1,
  "_source": {
    "agent": {
      "name": "stephenb-logginig-test",
      "id": "438a8475-1a44-4006-81e0-ea28f9b1e8a1",
      "type": "filebeat",
      "ephemeral_id": "27a11684-2ce9-4429-8d38-47dd85ed38c5",
      "version": "8.17.4"
    },
    "log": {
      "file": {
        "path": "/home/azureuser/fortigate/fortigage.log"
      },
....
    "related": {
      "ip": [
        "10.1.100.66",
        "89.160.20.128",
        "172.16.200.11"
      ]
    },
    "data_stream": {
      "namespace": "default",
      "type": "logs",
      "dataset": "fortinet_fortigate.log"
    },

After

{
  "_index": ".ds-logs-fortinet_fortigate.log-newnamespace-2025.04.16-000001",
  "_id": "NRxsQJYBq5b1-O15mZd_",
  "_version": 1,
  "_source": {
    "agent": {
      "name": "stephenb-logginig-test",
      "id": "438a8475-1a44-4006-81e0-ea28f9b1e8a1",
      "ephemeral_id": "27a11684-2ce9-4429-8d38-47dd85ed38c5",
      "type": "filebeat",
      "version": "8.17.4"
    },
    "log": {
      "file": {
        "path": "/home/azureuser/fortigate/fortigage1.log"
      },
      "offset": 24446,
    .....
      ]
    },
    "data_stream": {
      "namespace": "newnamespace",
      "type": "logs",
      "dataset": "fortinet_fortigate.log"
    },

:slight_smile:

Now the routing would need to be more specific...

Like I said I think this is actually API Key based

So now, if there are some fields we can key on, we can route to a namespace based on them...
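For example, something like this in the @custom pipeline could key on the FortiGate log type and only reroute the UTM events. The field name fortinet.firewall.type and the target namespace are assumptions on my part; check the parsed documents to see where the FortiGate type= value actually lands before using a condition like this:

PUT _ingest/pipeline/logs-fortinet_fortigate.log@custom
{
  "processors": [
    {
      "set": {
        "if": "ctx.fortinet?.firewall?.type == 'utm'",
        "field": "data_stream.namespace",
        "value": "utm"
      }
    },
    {
      "reroute": {
        "if": "ctx.fortinet?.firewall?.type == 'utm'"
      }
    }
  ]
}

Everything that does not match the condition stays in the default data stream, and the namespace change will still only succeed if the agent's API Key has permissions for the target, as discussed above.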


And now I made it write to a new data_stream.dataset :smiley:

PUT _ingest/pipeline/logs-fortinet_fortigate.log@custom
{
  "processors": [
    {
      "set": {
        "field": "data_stream.namespace",
        "value": "newnamespace"
      }
    },
    {
      "set": {
        "field": "data_stream.dataset",
        "value": "fortinet_fortigate.log_custom"
      }
    },
    {
      "reroute": {}
    }
  ]
}

I have not done anything really special...

Weird, I haven't tried it because it is a production system where I cannot keep doing those kinds of tests, but it was mentioned in the GitHub issue that the reroute processor would not work.

Does this policy have just the Fortinet integration?

No, there is an Azure Event Hub integration, System, and FortiGate... all really vanilla...

But I AM using one of the catch-all inputs, the Azure Custom Logs (Event Hub) one... maybe that is why I am getting the leniency.

If I get a chance, let me run it with the Fortinet integration only... I will report back... that may be why...

Yeah, that's probably it. The Custom Azure Logs integration seems to have permissions on logs-*-*, so the API Key used by this agent would end up with that permission as well.

This is the workaround I mentioned: add an integration with broader permissions to the same policy.

I will do a quick check; it will be good to know for the future.
Where do you see the permissions defined?

If you go to the policy page and click View Policy in the Actions menu, it will list the policy, the integrations, and the permissions for each integration.

The documentation for this integration already mentions that you can set any dataset and namespace you want.