[BIG-IP ASM] Could not index event to Elasticsearch

Hi There,

I'm trying to ship some F5 BIG-IP ASM logs into ELK.
I configured a Logstash syslog conf file and tried it, but I get an error during the indexing step:

Jan 21 17:12:17 cnclelk12 logstash[162998]: [2025-01-21T17:12:17,975][WARN ][logstash.outputs.elasticsearch][main][81f157ba1a63372723ee89696130be77c8aa18230b0ffb591ee81b2ea5714a41] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logs-waf-dcb", :routing=>nil}, {"response_code"=>"200", "violation_rating"=>"0", "source_host"=>"10.251.146.115", "event"=>{"original"=>"<134>Jan 21 17:12:17 BIG-IP.localdomain ASM:unit_hostname=\"BIG-IP.localdomain\",management_ip_address=\"10.10.10.10\",management_ip_address_2=\"N/A\",http_class_name=\"/Common/WAF_Policy\",web_application_name=\"/Common/WAF_Policy\",policy_name=\"/Common/WAF_Policy\",policy_apply_date=\"2024-12-11 18:21:03\",violations=\"N/A\",support_id=\"5397052090223779808\",request_status=\"passed\",response_code=\"200\",ip_client=\"10.251.146.115\",route_domain=\"0\",method=\"POST\",protocol=\"HTTPS\",query_string=\"~RG_WEBGUI=X&sap-statistics=true\",x_forwarded_for_header_value=\"10.251.146.115\",sig_ids=\"N/A\",sig_names=\"N/A\",date_time=\"2025-01-21 17:12:16\",severity=\"Informational\",attack_type=\"N/A\",geo_location=\"N/A\",ip_address_intelligence=\"N/A\",username=\"N/A\",session_id=\"1b33f1f10f8d8a55\",src_port=\"53381\",dest_port=\"443\",dest_ip=\"10.73.1.1\",sub_violations=\"N/A\",virus_name=\"N/A\",violation_rating=\"0\",websocket_direction=\"N/A\",websocket_message_type=\"N/A\",device_id=\"N/A\",staged_sig_ids=\"N/A\",staged_sig_names=\"N/A\",threat_campaign_names=\"N/A\",staged_threat_campaign_names=\"N/A\",blocking_exception_reason=\"N/A\",captcha_result=\"not_received\",microservice=\"N/A\",tap_event_id=\"N/A\",tap_vid=\"N/A\",vs_name=\"/Common/VIP_recette-service.my.domain\",sig_cves=\"N/A\",staged_sig_cves=\"N/A\",uri=\"/sap(cz1TSUQlM2FBTk9OJTNhY25zcm0tc2lyX1JFMl8wNCUzYXNSbks1eDZrMzhlY0x6UXN5LWxDWS1OSFowSjRuQ3o4YWVwemRPUEstQVRU)/bc/gui/sap/its/webgui/batch/json\",fragment=\"N/A\",request=\"POST 
/sap(cz1TSUQlM2FBTk9OJTNhY25zcm0tc2lyX1JFMl8wNCUzYXNSbks1eDZrMzhlY0x6UXN5LWxDWS1OSFowSjRuQ3o4YWVwemRPUEstQVRU)/bc/gui/sap/its/webgui/batch/json?~RG_WEBGUI=X&sap-statistics=true HTTP/1.1\\r\\nHost: recette-service.my.domain\\r\\nConnection: keep-alive\\r\\nContent-Length: 54\\r\\nsec-ch-ua-platform: %22Windows%22\\r\\nsap-cancel-on-close: true\\r\\nmoin: null\\r\\nsec-ch-ua: %22Not A(Brand%22;v=%228%22, %22Chromium%22;v=%22132%22, %22Google Chrome%22;v=%22132%22\\r\\nsec-ch-ua-mobile: ?0\\r\\nSAP-Perf-FESRec-opt: PA30,M0:37::btn[5]_Press,,cr_132,1310,49578,,,131,X,,,,,,2,2,,20250121161214938,PA30\\r\\nSAP-Perf-FESRec: F00ED0181E474F44B498DC135D336AFC,55DDE5490A8BA728C133146221000016,0,408,539,1,M0:37::btn[5]_Press_3,284,408,win_10,SAP_ITS\\r\\nSAP-PASSPORT: 2A54482A0300E60000504133305F5341504D503530412020202020202020202020202020202020202000005341505F4532455F54415F55736572202020202020202020202020202020202073637265656E61726561322E42345F50726573735F342020202020202020202020202020202020200005504133305F5341504D503530412020202020202020202020202020202020202035354444453534393041384241373238433133333134363232313030303031372020200007F00ED0181E474F44B498DC135D336AFC0000000000000000000000000000000000000000000000E22A54482A\\r\\nAccept: multipart/mixed\\r\\nContent-Type: application/json;charset=UTF-8\\r\\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36\\r\\nOrigin: https://recette-service.my.domain\\r\\nSec-Fetch-Site: same-origin\\r\\nSec-Fetch-Mode: cors\\r\\nSec-Fetch-Dest: empty\\r\\nReferer: https://recette-service.my.domain/sap(ZT1YbUNrWUZpcG1YZkx4ZVpVRzgyc1RRLS1lSnZ0bDhDNFIxTGN5N2lKNDJkeGJBLS0=)/bc/gui/sap/its/webgui\\r\\nAccept-Encoding: gzip, deflate, br, zstd\\r\\nAccept-Language: fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7\\r\\nCookie: __sirh_redirection_state=checked; __prd_saml_idp=aHR0cHM6Ly9qYW51cy5jbnJzLmZyL2lkcA%3D%3D; __prd_redirection_state=checked; ToolbarOkCodeVisible=0; 
__prd_saml_sp=aHR0cHM6Ly9vcGVyYS5kc2kuY25ycy5mci9zaW1wbGVzYW1sL21vZHVsZS5waHAvc2FtbC9zcC9tZXRhZGF0YS5waHAvb2hyaXM%3D; _ga_Z4JTCJLMT2=GS1.1.1736345222.3.1.1736345426.0.0.0; __sirh_saml_idp=aHR0cHM6Ly9qYW51cy5jbnJzLmZyL2lkcA%3D%3D+aHR0cHM6Ly9qYW51cy1yZWMuY25ycy5mci9pZHAvc2hpYmJvbGV0aA%3D%3D; __sirh_saml_sp=aHR0cHM6Ly9yZWNldHRlLXNpcmguZHNpLmNucnMuZnIvc3A%3D; SLB=2256619786.20480.0000; TS01717120028=0147da922eaf5fd2e9e39a45163fffaff590e859fc3dcc65b944518bc19d282853909ed2173bb747357ab3d798cfb0920ae09bab3b; __sirh_redirect_user_idp=https%3A%2F%2Fidp-rec.my.domain%2Fidp%2Fsidp; _shibsession_726563657474652d7369726868747470733a2f2f726563657474652d736972682e6473692e636e72732e66722f7370=_112baa2e738b08338e4459928fc888e5; PortalAlias=portal; saplb_*=(J2EE202790320)202790350; JSESSIONID=-lwa-tdc_pbMwf3NZ4708XpZ-6GJlAHOVRYM_SAPykD3UMWKFxEgKzcWg78k8ANg; JSESSIONMARKID=NElucAGV6qLYRQdC959tK0uhYaUpT7iaMiVs5VFgw; SAPWP_active=1; TS018914b4=01fb69fb221474f9bf8545bc3796e0056daf5e862a98b575a5527f891e4b9d7c11a5c0a4f503b9317cec98c802c4d95969c18f5d7c978d70f281d7f2de342268b5e452b5db821f91facda789e11ab8c80f98708974359a0c72964da84ab8d96a65f9603221a05bfbdcc3053f99ecda2923b498fed6cd7aa456186a7e6b608375ff7eab3e051c4eb64221d9c3f85e6ea9201be97c9d; saplbRE2=cnsrm-sir_RE2_04; sap-usercontext=sap-language=FR&sap-client=500; sap-ext-sid-backup=dmVyc2lvbj0xLjAsdGltZXN0YW1wPTE3Mzc0NzU4MDAsUkUyIURJQUcqPWNuc3JtLXNpcl9SRTJfMDQ=; SAP_SESSIONID_RE2_500=k2gZdCsof9aSyt9u4JKNFF0LDLXYEhHvuOgAUFaLf9w%3d; TS01717120=01fb69fb22a675c5d8992b6c89254d02cc04d5a07198b575a5527f891e4b9d7c11a5c0a4f5ef38f4c506af34d8cab0fd068a4cd95b4ef573470dd7baddf45d9765343eade249a33a12bcbfef593eb4602c8dcea38eb95d1531a1a64e911b81561682a24f4026a9db8629cb3f8c02ab6774f3cfe7ef715678fc02862513e43a0c19dbebd9c68c109de76b1ac6b8670eda1454e2edc87c6cbe91fe01aabcf2b0aa8aa731d2f41b9af4db7376137e718643a82ca59e1a40827ceb48cbf25a2446ebac77b630deffaad3a44062eead11a055041db02662\\r\\nX-Forwarded-For: 
10.251.146.115\\r\\n\\r\\n[{%22post%22:%22action/11/wnd[2]/shell%22},{%22get%22:%22state/ur%22}]\",response=\"Only illegal requests are logged\"\r\n"}, "method"=>"POST", "sig_ids"=>["N/A"], "blocking_exception_reason"=>"N/A", "source_geo"=>{}, "sig_names"=>["N/A"], "staged_threat_campaign_names"=>["N/A"], "support_id"=>"5397052090223779808", "tags"=>["_geoip_lookup_failure"], "service"=>{"type"=>"system"}, "@version"=>"1", "protocol"=>"HTTPS", "uri"=>"/sap(cz1TSUQlM2FBTk9OJTNhY25zcm0tc2lyX1JFMl8wNCUzYXNSbks1eDZrMzhlY0x6UXN5LWxDWS1OSFowSjRuQ3o4YWVwemRPUEstQVRU)/bc/gui/sap/its/webgui/batch/json", "violations"=>["N/A"], "attack_type"=>["N/A"], "request"=>"POST /sap(cz1TSUQlM2FBTk9OJTNhY25zcm0tc2lyX1JFMl8wNCUzYXNSbks1eDZrMzhlY0x6UXN5LWxDWS1OSFowSjRuQ3o4YWVwemRPUEstQVRU)/bc/gui/sap/its/webgui/batch/json?~RG_WEBGUI=X&sap-statistics=true HTTP/1.1\\r\\nHost: recette-service.my.domain\\r\\nConnection: keep-alive\\r\\nContent-Length: 54\\r\\nsec-ch-ua-platform: %22Windows%22\\r\\nsap-cancel-on-close: true\\r\\nmoin: null\\r\\nsec-ch-ua: %22Not A(Brand%22;v=%228%22, %22Chromium%22;v=%22132%22, %22Google Chrome%22;v=%22132%22\\r\\nsec-ch-ua-mobile: ?0\\r\\nSAP-Perf-FESRec-opt: PA30,M0:37::btn[5]_Press,,cr_132,1310,49578,,,131,X,,,,,,2,2,,20250121161214938,PA30\\r\\nSAP-Perf-FESRec: F00ED0181E474F44B498DC135D336AFC,55DDE5490A8BA728C133146221000016,0,408,539,1,M0:37::btn[5]_Press_3,284,408,win_10,SAP_ITS\\r\\nSAP-PASSPORT: 2A54482A0300E60000504133305F5341504D503530412020202020202020202020202020202020202000005341505F4532455F54415F55736572202020202020202020202020202020202073637265656E61726561322E42345F50726573735F342020202020202020202020202020202020200005504133305F5341504D503530412020202020202020202020202020202020202035354444453534393041384241373238433133333134363232313030303031372020200007F00ED0181E474F44B498DC135D336AFC0000000000000000000000000000000000000000000000E22A54482A\\r\\nAccept: multipart/mixed\\r\\nContent-Type: application/json;charset=UTF-8\\r\\nUser-Agent: 
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36\\r\\nOrigin: https://recette-service.my.domain\\r\\nSec-Fetch-Site: same-origin\\r\\nSec-Fetch

Here is my syslog.conf file:

input {
  syslog {
    port => 5140
    codec => plain {
    }
  }
}
filter {
  grok {
    match => {
      "message" => [
        ",attack_type=\"%{DATA:attack_type}\"",
        ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",bot_anomalies=\"%{DATA:bot_anomalies}\"",
        ",bot_category=\"%{DATA:bot_category}\"",
        ",bot_signature_name=\"%{DATA:bot_signature_name}\"",
        ",client_application=\"%{DATA:client_application}\"",
        ",client_application_version=\"%{DATA:client_application_version}\"",
        ",client_class=\"%{DATA:client_class}\"",
        ",date_time=\"%{DATA:date_time}\"",
        ",dest_port=\"%{DATA:dest_port}\"",
        ",enforced_bot_anomalies=\"%{DATA:enforced_bot_anomalies}\"",
        ",grpc_method=\"%{DATA:grpc_method}\"",
        ",grpc_service=\"%{DATA:grpc_service}\"",
        ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"",
        ",method=\"%{DATA:method}\"",
        ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"",
        ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"",
        ",request_status=\"%{DATA:request_status}\"",
        ",request=\"%{DATA:request}\"",
        ",request_body_base64=\"%{DATA:request_body_base64}\"",
        ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"",
        ",sig_cves=\"%{DATA:sig_cves}\"",
        ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"",
        ",sig_set_names=\"%{DATA:sig_set_names}\"",
        ",src_port=\"%{DATA:src_port}\"",
        ",staged_sig_cves=\"%{DATA:staged_sig_cves}\"",
        ",staged_sig_ids=\"%{DATA:staged_sig_ids}\"",
        ",staged_sig_names=\"%{DATA:staged_sig_names}\"",
        ",staged_threat_campaign_names=\"%{DATA:staged_threat_campaign_names}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"",
        ",support_id=\"%{DATA:support_id}\"",
        ",threat_campaign_names=\"%{DATA:threat_campaign_names}\"",
        ",unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"",
        ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details_xml}\"",
        ",violation_rating=\"%{DATA:violation_rating}\"",
        ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\""
      ]
    }
    break_on_match => false
  }
  if [violation_details_xml] != "N/A" {
    xml {
      source => "violation_details_xml"
      target => "violation_details"
    }
  }
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_cves" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_set_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "sub_violations" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "violations" => "," }
    remove_field => [
      "[violation_details][violation_masks]",
      "violation_details_xml",
      "message"
    ]
  }
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}"}}
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}"}}
  }
  geoip {
    source => "source_host"
    target => "source_geo"
  }
  ruby {
      code => "
          require 'base64'

          data = event.get('[violation_details]')

          # returns true when the value survives a decode/strict-encode round
          # trip, i.e. it looks like a valid base64 string
          def check64(value)
            value.is_a?(String) && Base64.strict_encode64(Base64.decode64(value)) == value
          end

          # recursively walk the hash/array tree produced by the xml filter
          # and copy any base64-looking leaf, decoded, under the target field
          def iterate(key, i, event)
            if i.is_a?(Hash)
              i.each do |k, v|
                if v.is_a?(Hash) || v.is_a?(Array)
                  iterate(key + '[' + k + ']', v, event)
                end
              end
            elsif i.is_a?(Array)
              i.each do |v|
                iterate(key, v, event)
              end
            elsif check64(i)
              event.set(key, Base64.decode64(i))
            end
          end

          iterate('[violation_details_b64decoded]', data, event)
      "
    }
}
output {
  elasticsearch {
    hosts => ["http://server_ip:9200"]
    user => "elastic"
    password => "some_password"
    index => "logs-waf-dcb"
  }
}

Could you provide some help?

What version are you on?

Was that the entire message? Often there is a reason displayed at the end, after the data.

Try adding this...

output {
  elasticsearch {
    hosts => ["http://server_ip:9200"]
    user => "elastic"
    password => "some_password"
    index => "logs-waf-dcb"
    action => "create" <<< You are probably writing to a data stream, which only supports the create action type
  }
}
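If you are curious, you can reproduce the underlying error in Kibana Dev Tools: a data stream rejects a write with an explicit id and the default index op type, and only accepts create. The index name below is taken from your config, and the error text should be roughly "only write ops with an op_type of create are allowed in data streams".

```json
PUT logs-waf-dcb/_doc/1
{
  "message": "test"
}
```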

Thank you for this answer.
It is better now: I can see my logs in the Observability Explorer.

However, I have no index displayed in Kibana, only a data stream.
Any idea about this ?

A data stream is a collection of indices... It is a construct to support time-based indices. It is what you want for this kind of data.

When you say you have no indexes, what are you trying to do?

Go to Kibana Dev Tools and run the following, and you should see the indices

GET _cat/indices/*logs*/?v
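You can also inspect the data stream itself. The backing index names are generated by Elasticsearch, typically something like .ds-logs-waf-dcb-2025.01.21-000001:

```json
GET _data_stream/logs-waf-dcb
```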

Exactly What Version are you on?
Exactly what are you trying to do?

If you are using version 8.X of the stack, everything that starts with logs- in the index name will match a built-in template and will be created as a data stream.
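You can look at the built-in template responsible for this in Dev Tools; its index_patterns includes logs-*-*, which is what logs-waf-dcb matches:

```json
GET _index_template/logs
```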

Since you are not using an Elastic Agent integration, but Logstash, I suggest that you change the name of the index to anything else, avoid using logs-* for anything that is custom and not collected by Elastic Agent.

This will help you avoid a lot of problems and complications when trying to customize anything.

For example, change your index configuration to something like this:

index => "bigip-waf-dcb"

Hmmmmm funny @leandrojmp, you and I usually agree ... on this we see it slightly differently ... in the end either is valid

I see this as a good thing :slight_smile: , but then again, I am very datastream-centric.
With logs-* you will get a lot of out-of-the-box benefits:

  • Data stream framework
  • Good ECS template
  • @custom framework to adjust mappings, ILM and ingest pipelines

If a user is 7.x index-centric, completely managing templates, ILM, etc. ... that is fine

so, @Plauda, You have a choice for your approach... either is equally valid... but Data Streams are more of the Future .... (sorry one last point :0 )


Wow, great information there.
I found today that managed indexes can be painful, and I like when things are easy :slight_smile:

Exactly What Version are you on?
Exactly what are you trying to do?

I'm using version 8.17.
I have to store my logs in Elastic in order to use them for reporting.

For now, my output config is:

output {
  elasticsearch {
    hosts => ["http://elasticserver:9200"]
    user => "some_user"
    password => "some_password"
    index => "logs-waf-dcb"
    action => "create"
  }
}

The action => "create" line made my logs appear in Kibana.

When I go to Kibana > Index Management > Data Streams, I can see a data stream named logs-waf-dcb.
When clicking on the name I got this:


So I have a default index, logs.
I don't like seeing the Effective data retention disabled.

When looking at the ILM retention policy, I can see many things that I don't like:

  • This is a managed policy that I should not change
  • I only have a hot phase and a 30 days retention

How could I avoid this and get a fully custom index? Is it really necessary?

Since you are not using an Elastic Agent integration, but Logstash, I suggest that you change the name of the index to anything else, avoid using logs-* for anything that is custom and not collected by Elastic Agent.

It could be fine for now, but please note we will have to add agents in the future, so is it a good choice in this context?

so, @Plauda, You have a choice for your approach... either is equally valid... but Data Streams are more of the Future .... (sorry one last point :0 )

By now I have my first dashboard created, so I will probably prefer to keep my data untouched, partly depending on @leandrojmp's answer.

The most important thing at this point is being able to keep my data for a long time, i.e. 1 month hot, then 3 months cold, then 8 months frozen.

See here how to customize ILM for a data stream...

Basically, clone the existing policy, edit it to your liking, and give it a name... like logs-custom
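As a sketch, a cloned policy matching the 1 month hot / 3 months cold / 8 months frozen requirement could look something like this. The ages and rollover sizes are illustrative, and the frozen phase needs a searchable snapshot repository (my-repository here is a placeholder you would have to set up first):

```json
PUT _ilm/policy/logs-custom
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_primary_shard_size": "50gb" }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {}
      },
      "frozen": {
        "min_age": "120d",
        "actions": {
          "searchable_snapshot": { "snapshot_repository": "my-repository" }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": { "delete": {} }
      }
    }
  }
}
```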

Then add a custom component template. You can do it through the UI, or this is the whole request in Kibana Dev Tools:

PUT _component_template/logs@custom
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "logs-custom"
        }
      }
    }
  }
}
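Template changes only apply to backing indices created afterwards, so you can verify the result by simulating the index name (this call creates nothing) and then force a rollover so a new backing index picks the policy up:

```json
POST _index_template/_simulate_index/logs-waf-dcb

POST logs-waf-dcb/_rollover
```

In the simulate response, the resolved settings should show index.lifecycle.name set to logs-custom.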

If you are going to use agents... you must use data streams

If you want to use indices for some other data, fine...

BUT you will then have a mixed approach, and you will still need to define rollover, ILM, etc. for your non-datastream data

Thank you for this information.
I will have a look at the "Customize built-in ILM policies" documentation and let you know.

Yes, we will have a mixed target approach:

  • syslog for several network appliances
  • Beats agents for servers

I'm running a production PoC for now in order to address an urgent need. I only use one Logstash + Elasticsearch node and one Kibana node.

Our target solution will use a cluster, something like:

  • 2 logstash nodes
  • 3 elasticsearch nodes
  • 1 kibana node (+ fleet)

I followed the link and created:

  • a custom ILM policy named logs-custom
  • a new component template named logs@custom

Now my data stream is mapped to the new ILM policy.

So I suppose the ILM will now apply.
Is there anything else to do regarding hot/warm node definitions?

Two more things:

  • I can see a Yellow Health Status due to an index Warning State


    What could cause this health status, and how can I correct it?

  • I saw that the component template offers mapping abilities.
    Is it supposed to offer standardization capabilities across different sorts of logs, or something else?
    Could you help me understand the purpose of this mapping?

Thanks again for your time.

The main issue is that using data streams starting with logs-* for custom data that is not collected by Elastic Agent increases the number of things you need to do to make sure that any custom change will not break other things.

For example, you need to create a template that will match only your custom log and set it with a higher priority than the default template. Also, depending on the name you choose, there is a possibility that a current or future integration could also match this template, and someone with access to your cluster may install that integration; you would then have issues because it is using the wrong template. This is not common, but it can happen.

I prefer to have the naming schemes completely different to avoid any risks of a custom change impacting things that are being collected with Elastic Agent.

Also, when I started using Elastic Agent, customizations were way more complicated; there were no @custom templates, for example.

Keep in mind that changes in logs@custom will be applied to all logs-* data streams, so if you start using an Elastic Agent integration, it will also get the changes in logs@custom.

How many nodes do you have? If you have just one node, this is expected, as a replica cannot be allocated and the index will stay in a yellow state. You would need to edit the settings/template to remove the replicas.
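For a single-node setup, a sketch of the fix (the settings update targets the data stream, which applies it to its existing backing indices; adding the same setting to the logs@custom template covers future ones):

```json
PUT logs-waf-dcb/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```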

Mappings are a core concept of Elasticsearch: they define the fields and which data types they will use, and this can impact your searches, performance, and disk usage.

An index template is used to apply the mappings, settings, and aliases to an index when it is created, without the user needing to do that manually. Component templates are reusable building blocks that help you avoid writing the same mappings multiple times and let you customize some mappings/settings per group of indices.
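As a purely illustrative sketch (field names taken from the ASM log earlier in this thread; whether these exact types suit you is your call), a component template carrying mappings could look like:

```json
PUT _component_template/waf-mappings
{
  "template": {
    "mappings": {
      "properties": {
        "ip_client": { "type": "ip" },
        "src_port": { "type": "integer" },
        "dest_port": { "type": "integer" },
        "response_code": { "type": "short" },
        "support_id": { "type": "keyword" }
      }
    }
  }
}
```

It only takes effect once an index template references it in composed_of and a new index is created from that template.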

I recommend that you check this documentation about mappings and this one about index templates.

As far as I know, all our data will have the same retention policy, so it will be fine.
If we needed to change something about this, we would have to use an index named differently from logs-xxx. It could be something interesting to try with other data, thank you for this information.

We only have one for now.
So I understand this health state is normal. We should get a green one with more nodes.

Mapping and index templates will be a further step.

Thank you very much for your help @leandrojmp and @stephenb