Packetbeat flows - start_time vs last_time

(Stefan Geißler) #1

Hi everyone,

I am currently in the process of deploying Packetbeat to capture flow data for both UDP and TCP flows. Packetbeat instances are running on both the sending side and the receiving side of the generated flows.

I have configured Packetbeat as follows.

packetbeat.interfaces.device: any
packetbeat.interfaces.type: af_packet

packetbeat.flows:
  enabled: true
  timeout: 30s
  period: 10s

output.elasticsearch:
  hosts: ["http://${ES_IP}:9200"]
  username: "elastic"
  password: "changeme"

This results in an event every 10 seconds with cumulative packet and byte counters for each flow, plus one additional event after the 30-second timeout has passed, in which the final field is set to true.
Unfortunately, the start_time and last_time fields of all of these events hold the exact same value. Hence, all flows have a duration of 0.

Is there an error in my configuration, or am I simply misinterpreting the output of Packetbeat in this context?

Thank you very much in advance,

(Steffen Siering) #2

The start_time is set to the time of the first packet seen on the connection, and last_time to the time of the last packet seen. After a duration of packetbeat.flows.timeout without any packets received, a flow times out and a final event is generated.

How many packets do you have in your flows?
Have you checked the Packetbeat logs for any errors?

(Stefan Geißler) #3

OK, so that means I interpreted the output fields correctly and just have an error somewhere in my setup.

I have checked the Packetbeat log and it looks approximately like this.

2016/11/04 08:09:00.088158 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.messages_in_worker_queues=3 libbeat.publisher.published_events=42
2016/11/04 08:09:30.088118 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.published_events=38 libbeat.publisher.messages_in_worker_queues=3
2016/11/04 08:10:00.088197 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.messages_in_worker_queues=3 libbeat.publisher.published_events=45
2016/11/04 08:10:30.088278 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.messages_in_worker_queues=3 libbeat.publisher.published_events=52

The flows I am currently trying to store are completely artificial. I am using plain Python to generate UDP packets on a local host that is also running Packetbeat. The packet count per flow was between 10 and 10,000. The packet counts shown in Kibana are correct.
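For reference, a minimal sketch of such a generator using only the standard library (the destination port, packet count, and payload size are arbitrary example values, not the exact script used in this setup):

```python
import socket

# Minimal UDP traffic-generator sketch: sends a fixed number of identical
# datagrams to a local port so Packetbeat can observe them as one flow.
# HOST, PORT, NUM_PACKETS, and PAYLOAD are example values.
HOST, PORT = "127.0.0.1", 5000
NUM_PACKETS = 100
PAYLOAD = b"x" * 64  # 64-byte dummy payload per packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    for _ in range(NUM_PACKETS):
        sock.sendto(PAYLOAD, (HOST, PORT))
finally:
    sock.close()
```

Because the packets share the same 5-tuple, Packetbeat should aggregate them into a single flow whose start_time and last_time span the send loop.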

I should probably mention that both the traffic generator and Packetbeat are running in Docker containers with host networking enabled.

Edit: I attached a screenshot of the result of a 100k packets flow.

(Steffen Siering) #4

It's nice that you're using a packet generator. Maybe you can capture a simple trace using tcpdump and replay the dump in Packetbeat via -I <pcap file>. If you can share the dump, I can also take a look at whether the timestamps make sense.
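A possible capture-and-replay workflow might look like this (the interface name, filter, and file name are example values; -I tells Packetbeat to read packets from a pcap file instead of a live interface):

```shell
# Capture the generated UDP traffic on the loopback interface
# while the traffic generator is running (needs root privileges).
tcpdump -i lo -w flows.pcap udp

# Replay the saved trace through Packetbeat instead of live capture,
# logging to stderr so timestamp handling is easy to inspect.
packetbeat -e -I flows.pcap
```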

(Stefan Geißler) #5

Hi again,

sorry for the delayed answer. Here you can find the dump containing only the packets generated by my traffic generator:

It seems the packets receive a proper timestamp in tcpdump as well as when creating the dump with packetbeat -dump <file>.

Furthermore, start_time and last_time have equal values even for flows not created by the traffic generator (e.g. Consul communication).

I stumbled upon issue #1337 on GitHub, which seems to deal with a similar problem. There, however, a trace is replayed instead of traffic being captured live.

Any help is greatly appreciated!

(Steffen Siering) #6

Thanks, I will have a look. The ticket you're referencing is about replaying a tcpdump using time.Now(). That is, if the replay is sped up, the delays are wrong and the timestamps in events do not match the timestamps found in the pcap.

(system) #7

This topic was automatically closed after 21 days. New replies are no longer allowed.