Packetbeat processing packets but not sending them to Elasticsearch

I am using the ELK stack in an attempt to send a PCAP file down the pipeline to Kibana. I am running Packetbeat with ./packetbeat -e -c packetbeat.yml -I test.pcap -d -t "publish". The PCAP file is 306 packets long, and I get these logs:

2016/06/20 15:10:58.972346 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/06/20 15:10:58.972540 logstash.go:106: INFO Max Retries set to: 3
2016/06/20 15:10:58.974195 outputs.go:126: INFO Activated logstash as output plugin.
2016/06/20 15:10:58.974243 publish.go:288: INFO Publisher name: NGPs-MacBook-Pro.local
2016/06/20 15:10:58.974311 async.go:78: INFO Flush Interval set to: 1s
2016/06/20 15:10:58.974317 async.go:84: INFO Max Bulk Size set to: 2048
2016/06/20 15:10:58.974347 beat.go:147: INFO Init Beat: packetbeat; Version: 1.2.3
2016/06/20 15:10:58.975186 beat.go:173: INFO packetbeat sucessfully setup. Start running.
2016/06/20 15:11:23.404683 sniffer.go:359: INFO Input finish. Processed 306 packets. Have a nice day!
2016/06/20 15:11:23.404723 beat.go:183: INFO Cleaning up packetbeat before shutting down.

It seems like Packetbeat is processing the packets, but those 306 packets never show up when I look in Kibana. I can see packets coming in when I run ./packetbeat -e -c packetbeat.yml -d "publish", so I am not sure where the issue is. Any help would be greatly appreciated.

306 is the total number of packets received from the sniffer (i.e. the total number of packets in the PCAP or collected from the wire). That number does not correspond to the number of events Packetbeat sends to Elasticsearch.
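If you want to verify how many events actually reached Elasticsearch, you can query the event count directly. A minimal check, assuming Elasticsearch is reachable on localhost:9200 and the default packetbeat-* index pattern is in use:

curl 'http://localhost:9200/packetbeat-*/_count?pretty'

If the count stays at zero, the events are getting lost between the output plugin and Elasticsearch rather than in the sniffer itself.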

The command you are running is slightly wrong: "publish" is the argument to -d, so the two need to sit next to each other. Use ./packetbeat -e -c packetbeat.yml -I test.pcap -t -d "publish" instead. This should show you which events are being published to Elasticsearch.

What's in the PCAP file? Is it data from one of the protocols supported by Packetbeat?
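It is also worth checking which protocols and ports are enabled in packetbeat.yml, because Packetbeat only decodes traffic on the ports listed for each protocol module. A rough sketch of the relevant section in the 1.x config format (the port lists below are only illustrative, adjust them to match your capture):

protocols:
  http:
    ports: [80, 8080, 8000, 5000, 8002]
  dns:
    ports: [53]
    include_authorities: true

If your HTTP traffic is on a non-standard port, for example, it will be sniffed but never parsed or published.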

Looking at the PCAP file in Wireshark, I see DNS, TCP, STP, LDAP, HTTP, TLSv1.2, ARP, and NTP traffic. When processing, I believe the sniffer breaks on the first packet, possibly with this log:
http_parser.go:157: DBG Couldn't understand HTTP request: njT??/? ... followed by a long string of question marks and odd characters. If the sniffer cannot parse a packet, will it stop the transfer? Or will it just not send that packet along the pipeline?

It's likely that the PCAP file doesn't contain the complete HTTP request, e.g. if the capture was started in the middle of the conversation. In that case Packetbeat drops the current data from that TCP stream and retries parsing on the next segment; it does not stop processing the rest of the file.
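If you want to watch what the parser does with those segments, you can enable more debug selectors on the same run. A hedged example, assuming the comma-separated selector list and the "http" selector (inferred from the http_parser.go messages above) apply to your version:

./packetbeat -e -c packetbeat.yml -I test.pcap -t -d "publish,http"

Using -d "*" instead turns on all debug selectors if you are unsure of the names.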
