Filebeat is running on two different systems with two different modules, but no documents appear in the Filebeat index

We have Filebeat with the panw module enabled collecting data from Palo Alto firewalls. On a separate system, we have Filebeat running with the system module enabled. The data is being received by Elasticsearch, but no documents appear. I am using the panw beat as an example here, but the same issue is occurring on both systems. We have confirmed with a tcpdump that the data is arriving from the firewall at the beat. I enabled DEBUG logging in Filebeat and confirmed events were published:

    Feb 06 12:13:01 systemname filebeat[2425]: 2020-02-06T12:13:01.701-0500        DEBUG        [processors]        processing/processors.go:186        Publish event: {
    (snip)
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [elasticsearch]        elasticsearch/client.go:348        PublishEvents: 2 events have been published to elasticsearch in 
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [publisher]        memqueue/ackloop.go:160        ackloop: receive ack [20: 0, 2]
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [publisher]        memqueue/eventloop.go:535        broker ACK events: count=2, start-seq=44, end-seq=45
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [publisher]        memqueue/ackloop.go:128        ackloop: return ack to broker loop:2
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [publisher]        memqueue/ackloop.go:131        ackloop:  done send ack
    Feb 06 12:13:02 systemname filebeat[2425]: 2020-02-06T12:13:02.423-0500        DEBUG        [acker]        beater/acker.go:69        stateless ack        {"count": 2}
    Feb 06 12:13:08 systemname filebeat[2425]: 2020-02-06T12:13:08.499-0500        DEBUG        [input]        input/input.go:152        Run input
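
For reference, here is roughly what the relevant configuration looks like on the panw host; the syslog port number below is a placeholder, not our exact value, and the output host matches the one shown in the capture further down:

# modules.d/panw.yml
- module: panw
  panos:
    enabled: true
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9004

# filebeat.yml (relevant parts only)
logging.level: debug
output.elasticsearch:
  hosts: ["elasticSearchNodeHere:9200"]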

We ran a packet capture on the Filebeat node and confirmed the Elasticsearch cluster was receiving the message and responding with an HTTP 200. From the packet capture's HTTP stream:

POST /_bulk HTTP/1.1
Host: elasticSearchNodeHere:9200
User-Agent: Go-http-client/1.1
Content-Length: 2862
Accept: application/json
Content-Type: application/json; charset=UTF-8
Accept-Encoding: gzip

{"create":{"_index":"filebeat-7.5.2","pipeline":"filebeat-7.5.2-panw-panos-pipeline"}}
{"@timestamp":"2020-02-06T15:12:37.000Z","event":{"timezone":"-05:00","created":"2020/02/06 10:12:37","severity":6,"module":"panw","dataset":"panw.panos"},"fileset":{"name":"panos"},"input":{"type":"syslog"},"ecs":{"version":"1.1.0"},"agent":{"ephemeral_id":"8c77978b-b0b1-4df9-ac7f-ec835acaf2b6","hostname":"systemname","id":"d1a2322e-a921-4dff-8b15-7c527ed20097","version":"7.5.2","type":"filebeat"},"hostname":"dataHere","log":{"source":{"address":"someaddress:someport"}},"tags":["pan-os"],"host":{"name":"systemname","hostname":"systemname","architecture":"x86_64","os":{"platform":"rhel","version":"7.7 (Maipo)","family":"redhat","name":"Red Hat Enterprise Linux Server","kernel":"3.10.0-1062.12.1.el7.x86_64","codename":"Maipo"},"id":"78dc6a03fa634f6b883b97250d1a90fe","containerized":false},"_temp_":{"message_type":"SYSTEM","message_subtype":"general","generated_time":"2020/02/06 10:12:37"},"syslog":{"facility":1,"facility_label":"user-level","priority":14,"severity_label":"Informational"},"service":{"type":"panw"},"message":"1,2020/02/06 10:12:37,002201001018,SYSTEM,general,0,2020/02/06 10:12:37,,general,,0,0,general,informational,\"FqdnRefresh job enqueued. Enqueue time=2020/02/06 10:12:37. JobId=204605.  . Type: Full\",2950462,0x0,0,0,0,0,,ifw-pa5050-fo","observer":{"serial_number":"002201001018"}}
{"create":{"_index":"filebeat-7.5.2","pipeline":"filebeat-7.5.2-panw-panos-pipeline"}}
{"@timestamp":"2020-02-06T15:12:37.000Z","syslog":{"priority":14,"severity_label":"Informational","facility":1,"facility_label":"user-level"},"event":{"created":"2020/02/06 10:12:37","severity":6,"dataset":"panw.panos","module":"panw","timezone":"-05:00"},"fileset":{"name":"panos"},"tags":["pan-os"],"agent":{"id":"d1a2322e-a921-4dff-8b15-7c527ed20097","version":"7.5.2","type":"filebeat","ephemeral_id":"8c77978b-b0b1-4df9-ac7f-ec835acaf2b6","hostname":"systemname"},"ecs":{"version":"1.1.0"},"host":{"name":"systemname","hostname":"systemname","architecture":"x86_64","os":{"codename":"Maipo","platform":"rhel","version":"7.7 (Maipo)","family":"redhat","name":"Red Hat Enterprise Linux Server","kernel":"3.10.0-1062.12.1.el7.x86_64"},"id":"78dc6a03fa634f6b883b97250d1a90fe","containerized":false},"message":"1,2020/02/06 10:12:37,002201001018,SYSTEM,general,0,2020/02/06 10:12:37,,general,,0,0,general,informational,\"FqdnRefresh job started processing. Dequeue time=2020/02/06 10:12:37. Job Id=204605.   \",2950463,0x0,0,0,0,0,,ifw-pa5050-fo","hostname":"systemname","log":{"source":{"address":"someaddress:someport"}},"service":{"type":"panw"},"input":{"type":"syslog"},"observer":{"serial_number":"002201001018"},"_temp_":{"generated_time":"2020/02/06 10:12:37","message_type":"SYSTEM","message_subtype":"general"}}

Elasticsearch's response:

HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 188
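
(For completeness, the capture itself was taken on the Filebeat node with something along these lines; the interface name is a placeholder:)

tcpdump -i eth0 -w filebeat-to-es.pcap host elasticSearchNodeHere and port 9200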

I noticed it was using the "filebeat-7.5.2-panw-panos-pipeline" ingest pipeline, which seems to process the events without issue (as far as I can tell). I checked to make sure the pipeline was intact:

"filebeat-7.5.2-panw-panos-pipeline" : {
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "field" : [
            "_temp_"
          ]
        }
      }
    ],
(snip)
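
(That pipeline definition was pulled straight from Elasticsearch, roughly like this:)

GET _ingest/pipeline/filebeat-7.5.2-panw-panos-pipeline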

In Elasticsearch, I can see there is an index:

# curl -X GET localhost:9200/_cat/indices
green open filebeat-7.5.2-2020.02.06-000001 foEkBgCNTe-AhdmXKWhC-w 1 1    0  0   566b    283b
green open .kibana_task_manager_1           at0InUYVTzaJiqRpXoATng 1 1    2  1 32.5kb  16.2kb
green open .apm-agent-configuration         GM60UyQvSGuaVmYuibxCtQ 1 1    0  0   566b    283b
green open .kibana_1                        bV0kNdxbRmuKTpGvkwBe5w 1 1 1059 47  1.2mb 598.3kb

But looking in the index, there are no documents:

GET /filebeat-7.5.2-2020.02.06-000001/_search
{
    "query": {
        "match_all": {}
    }
}

and this returned:
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  }
}
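
(A plain count query is a quicker way to spot-check the same thing; since the match_all above returns a total of 0, it likewise reports 0 documents:)

GET /filebeat-7.5.2-2020.02.06-000001/_count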

I can't see the data in Kibana or via a search in Elasticsearch. The document count stays at 0, even though data is clearly coming in. I checked that Filebeat, Kibana, and Elasticsearch are all on the same version; they are. I shut everything down, removed all data inside Elasticsearch's data directory, started Elasticsearch, Kibana, and Filebeat, confirmed Filebeat loaded the pipeline, then re-added the dashboards in Kibana, but the same problem persists. At first I thought it was something to do with the pipeline, but this is happening with every Filebeat module I have enabled on every system. I am at a loss. Can you provide assistance? Thanks.
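
The reset sequence was roughly the following (unit names and the data path are the defaults for our RPM installs; this is approximate):

# stop everything
systemctl stop filebeat kibana elasticsearch

# wipe the Elasticsearch data directory (default RPM path; adjust to your environment)
rm -rf /var/lib/elasticsearch/*

# bring the stack back up
systemctl start elasticsearch
systemctl start kibana

# reload the ingest pipelines and Kibana dashboards, then start Filebeat
filebeat setup --pipelines --modules panw,system
filebeat setup --dashboards
systemctl start filebeat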

Apologies, here is what Elasticsearch returned; note that the result of the create action is "noop" rather than "created":

HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-encoding: gzip
content-length: 183

{"took":0,"ingest_took":1,"errors":false,"items":[{"create":{"_index":"filebeat-7.5.2","_type":"_doc","_id":"auto-generated","_version":-4,"result":"noop","_shards":{"total":0,"successful":0,"failed":0},"status":200}}]}

Also, I have posted this question in the Elasticsearch section as well. They are looking at the pipeline, and I originally thought this was related to the pipeline, but the system module uses its own pipeline and it has the same issue. I'm wondering if this is related to the index itself, or something at that level? My Elasticsearch post is at:
Elastic thread for this issue
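
(For what it's worth, per-pipeline ingest statistics, including failure counts, are visible via the node stats API, which might help narrow down whether the pipelines are dropping anything:)

GET _nodes/stats/ingest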
