nginx Lua log ingestion with Fluent Bit

I have nginx applications with Lua scripts, so the nginx log output contains Lua log statements. I also have plain nginx and .NET deployments.
Fluent Bit ships the Serilog and nginx logs, but not the Lua output. Here is my Fluent Bit configuration:

  fluent-bit.conf: |
    [SERVICE]
        Flush         5
        Log_Level     info
        Parsers_File  parsers.conf
        Daemon        off

    @INCLUDE input-kubernetes.conf
    @INCLUDE output-elasticsearch.conf
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Exclude_Path      /var/log/containers/*_kube-system_*
        Tag               <container_name>-<namespace_name>-aks
        Tag_Regex         (?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
        Parser            cri
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     10MB
        Skip_Long_Lines   On
        Refresh_Interval  5

    [FILTER]
        Name parser
        Match *proxy
        Key_Name log
        Parser nginx_lua
        Reserve_Data True

    [FILTER]
        Name parser
        Match *nginx*
        Key_Name log
        Parser nginx_access

    [FILTER]
        Name grep
        Match *
        Exclude tag .*proxy.*

    [FILTER]
        Name parser
        Match *
        Key_Name log
        Parser serilog

    [FILTER]
        Name grep
        Match *dotnet*
        Regex $level .

    [FILTER]
        Name lua
        Match *
        call append_tag
        code function append_tag(tag, timestamp, record) new_record = record new_record["tag"] = tag return 1, timestamp, new_record end
  output-elasticsearch.conf: |
    [OUTPUT]
        Name es
        Match *
        tls On
        tls.verify Off
        Retry_Limit False
        Replace_Dots On
        Suppress_Type_Name On
        Host ${FLUENT_ELASTICSEARCH_HOST}
        Port ${FLUENT_ELASTICSEARCH_PORT}
        HTTP_User ${FLUENT_ELASTICSEARCH_USER}
        HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD}
        Logstash_Format On
        Logstash_Prefix_Key $tag
        Logstash_DateFormat %Y.%m
        Trace_Error On
  parsers.conf: |
    [PARSER]
        Name serilog
        Format json
        Time_Key @timestamp
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep On

    [PARSER]
        Name cri
        Format regex
        Regex ^(?<time>[^ ]+) (stdout|stderr) ([^ ]*) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

    [PARSER]
        Name nginx_access
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^\"]*) HTTP\/\S+" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name nginx_lua
        Format regex
        Regex  ^(?<time>\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2}) \[\] (?<worker>\d+#\d+): \*(?<request_id>\d+) \[lua\] (?<conf_file>[^\:]+):(?<line_number>\d+)\):(?<extra_info>[^\:]+): (?<log_message>[^,]+), client: (?<client_ip>[^\ ]+), server: (?<server>[^\ ]*), request: "(?<method>\S+) (?<path>[^\"]*) HTTP\/(?<http_version>[^\"]+)", host: "(?<host>[^\"]*)"
        Time_Key time
        Time_Format %Y/%m/%d %H:%M:%S

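One way to narrow this down is to test the `nginx_lua` parser regex offline against a real Lua log line from your containers. A minimal sketch in Python (the sample line below is hypothetical, shaped only to satisfy the regex as written; Fluent Bit uses Oniguruma, whose `(?<name>...)` named groups must be spelled `(?P<name>...)` in Python's `re`):

```python
import re

# The nginx_lua regex from parsers.conf, copied verbatim (Oniguruma syntax).
ONIG = (r'^(?<time>\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2}) \[\] '
        r'(?<worker>\d+#\d+): \*(?<request_id>\d+) \[lua\] '
        r'(?<conf_file>[^\:]+):(?<line_number>\d+)\):(?<extra_info>[^\:]+): '
        r'(?<log_message>[^,]+), client: (?<client_ip>[^\ ]+), '
        r'server: (?<server>[^\ ]*), '
        r'request: "(?<method>\S+) (?<path>[^\"]*) HTTP\/(?<http_version>[^\"]+)", '
        r'host: "(?<host>[^\"]*)"')

# Python spells named groups (?P<name>...), so translate the group syntax.
pattern = re.compile(ONIG.replace('(?<', '(?P<'))

# Hypothetical line constructed to match the regex above -- replace it with
# a real line from your logs. Note the regex expects a literal empty "[]"
# after the timestamp, whereas nginx error logs normally carry a level
# such as "[error]" in that position.
line = ('2024/01/15 10:30:00 [] 7#7: *123 [lua] '
        'init_by_lua(nginx.conf:42):10: something happened, '
        'client: 10.0.0.1, server: example.com, '
        'request: "GET /api HTTP/1.1", host: "example.com"')

m = pattern.match(line)
print(m.groupdict() if m else 'NO MATCH')
```

If a genuine line from `/var/log/containers/*.log` prints `NO MATCH` here, the parser, not the pipeline, is the place to look.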
Hello and welcome,

It is not clear whether your issue involves any Elastic tool.

Fluent Bit is not made by Elastic, so it is not supported here; you will need to check a Fluent Bit forum.

Do you have any errors or issues with an Elastic tool?