I am currently deploying Packetbeat to capture flow data for both UDP and TCP flows. Packetbeat instances are running on both the sending and the receiving side of the generated flows.
This results in an entry every 10 seconds showing cumulative packet and byte counters for each flow, plus one additional entry after the 30-second timeout has passed, in which the final field is set to true.
Unfortunately, the start_time and end_time fields of all of these entries hold exactly the same value, so every flow ends up with a duration of 0.
Is there an error in my configuration, or am I simply misinterpreting Packetbeat's output in this context?
The start_time is set from the first packet seen on the connection, and end_time from the last packet seen. After a duration of packetbeat.flows.timeout without any packets, the flow times out and a final event is generated.
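For reference, the flow settings involved can be sketched in packetbeat.yml like this. The values shown are illustrative defaults, not taken from the poster's actual configuration:

```yaml
packetbeat.flows:
  # Enable flow tracking for all transports (UDP, TCP, ...).
  enabled: true
  # A flow with no packets for this long times out and
  # emits a last event with "final: true".
  timeout: 30s
  # Interval at which periodic (non-final) flow events
  # with cumulative counters are reported.
  period: 10s
```

With these settings, a flow that sees traffic for 25 seconds should produce two periodic events and one final event, with end_time minus start_time approximating the 25 seconds of activity.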
How many packets do you have in your flows?
Have you checked packets logs for any errors?
OK, so that means I interpreted the output fields correctly and just have an error somewhere in my setup.
I have checked the Packetbeat log, and it looks approximately like this:
...
2016/11/04 08:09:00.088158 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_bytes=1183 libbeat.es.publish.write_bytes=26006 libbeat.publisher.messages_in_worker_queues=3 libbeat.es.published_and_acked_events=42 libbeat.publisher.published_events=42 libbeat.es.call_count.PublishEvents=3
2016/11/04 08:09:30.088118 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_bytes=1175 libbeat.es.published_and_acked_events=38 libbeat.es.publish.write_bytes=23850 libbeat.publisher.published_events=38 libbeat.publisher.messages_in_worker_queues=3 libbeat.es.call_count.PublishEvents=3
2016/11/04 08:10:00.088197 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=3 libbeat.es.publish.read_bytes=1207 libbeat.es.publish.write_bytes=28098 libbeat.es.published_and_acked_events=45 libbeat.publisher.messages_in_worker_queues=3 libbeat.publisher.published_events=45
2016/11/04 08:10:30.088278 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=3 libbeat.publisher.messages_in_worker_queues=3 libbeat.es.published_and_acked_events=52 libbeat.es.publish.write_bytes=31996 libbeat.es.publish.read_bytes=1215 libbeat.publisher.published_events=52
...
The flows I am currently trying to store are completely artificial: I am using plain Python to generate UDP packets on a local host that is also running Packetbeat. The packet count ranged between 10 and 10000 packets per flow, and the number of packets shown in Kibana is correct.
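The generator described above can be sketched roughly as follows. This is a minimal illustration, not the poster's actual script; host, port, payload size, and inter-packet delay are assumptions:

```python
import socket
import time

def send_udp_flow(host="127.0.0.1", port=9999, count=100,
                  payload=b"x" * 64, delay=0.0):
    """Send `count` UDP datagrams to host:port and return total bytes sent.

    A non-zero `delay` spreads packets over time, so the flow's
    end_time should end up later than its start_time in Packetbeat.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    total = 0
    try:
        for _ in range(count):
            # sendto returns the number of bytes sent for this datagram.
            total += sock.sendto(payload, (host, port))
            if delay:
                time.sleep(delay)
    finally:
        sock.close()
    return total
```

A flow generated this way with delay=0 completes in well under a second, so even with correct timestamps the reported duration would be very small; a delay of, say, 0.1 seconds between packets makes a zero duration clearly wrong.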
I should probably mention that the traffic generator as well as Packetbeat are running in Docker containers with host networking enabled.
Edit: I attached a screenshot of the result of a 100k-packet flow.
It's nice that you're using a packet generator. Maybe you can try capturing a simple trace with tcpdump and replaying the dump in Packetbeat via -I <pcap file>. If you can share the dump, I can also have a look at whether the timestamps make sense.
Thanks, I will have a look. The ticket you're referencing is about replaying a tcpdump using time.Now(): if the replay is sped up, delays are wrong and the timestamps in events do not match the timestamps found in the pcap.