How to make a table showing top hosts doing large outbound data transfers


I want to make a table or a graph that shows the top hosts / IPs uploading large amounts of data to outside domains.

Probably I want a table/metric that shows:

  1. Out_Bytes

  2. Source IP addresses (of internal hosts initiating outbound data transfer)

  3. Destination IP addresses (of external hosts over WAN in outside domains)

Is this possible?
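Conceptually, what I'm after is a "top talkers" rollup: group flows by internal source IP, sum the outbound bytes, and sort descending. A plain-Python sketch of that idea (the field names `source_ip` / `dest_ip` / `bytes_out` here are illustrative, not exact Packetbeat field names):

```python
from collections import defaultdict

# Hypothetical flow records, shaped like the three columns I want
flows = [
    {"source_ip": "10.0.0.5", "dest_ip": "93.184.216.34", "bytes_out": 5000},
    {"source_ip": "10.0.0.7", "dest_ip": "151.101.1.69",  "bytes_out": 12000},
    {"source_ip": "10.0.0.5", "dest_ip": "151.101.1.69",  "bytes_out": 3000},
]

# Sum outbound bytes per internal source IP
totals = defaultdict(int)
for f in flows:
    totals[f["source_ip"]] += f["bytes_out"]

# Sort descending by bytes to get the "top talkers" table
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(top)  # [('10.0.0.7', 12000), ('10.0.0.5', 8000)]
```

In Kibana terms this would be a terms aggregation on the source IP field with a sum metric on the outbound-bytes field.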

I looked at the various sample visualizations that came with Packetbeat, and a couple of them seem close to what I'm trying to do, but the transfer-amount numbers didn't make sense to me.

Could someone please give me a good direction?

Thank you very much in advance!

  • Young

(Brandon Kobel) #2

Hey @learner, are you using the Packetbeat data to try to build this visualization? If not, would you mind providing the relevant fields from your index pattern that contain this information?


Hi Brandon,

Yeah, I was trying to do it with Packetbeat with no success, but I'm open to using whatever other package suits what I'm trying to do.

With one of Packetbeat's sample dashboards, I was trying to use:

[Packetbeat] Flows

In that dashboard, the metric at the very bottom is:

Network traffic between your hosts

That's what I was trying to use, but the numbers shown in the "Source traffic" and "Destination traffic" fields don't seem to accurately reflect my perception of our actual network traffic.

That is, some traffic numbers between an internal host and an external host are far too big, exceeding our WAN link bandwidth (1 Gbps), which makes me think the numbers can't be accurate.

Maybe I'm misunderstanding the Bytes unit or something; I don't know. I'm trying to figure that out while also asking here for a new or better way that would generate numbers that make sense.


  • Young

(Brandon Kobel) #4

Hey @learner, these charts are showing the total flow of bytes that were sent/received over a specific duration. So, if you're viewing the [Packetbeat] Flows dashboard for the past "15 Minutes" (as configured via the time filter in the upper right corner), you'll see the total bytes that were sent during that duration. The 15 GB of traffic over 15 minutes that you're seeing is well under 1 Gbps.


Hi Brandon,

I was looking at the total, not just one line.

For example, this is what I just captured from the last 15m:


My 1 Gbps WAN link = theoretically 125 MB/s ---> realistically ~70 MB/s after overhead

70 MB/s * (15m * 60s/m) = 63,000 MB = 63 GB --> my link's realistic maximum traffic over WAN for 15m

But the above chart shows the total of 106.586 GB for 15m.

Moreover, for the last 12h,


Total in the above shows 19.454 TB for 12h.

My calculation:

70 MB/s * (12h * 3600s/h) = 3,024,000 MB = ~3 TB --> my link's realistic maximum traffic over WAN for 12h

Even the 1st number at the top already shows 16.394 TB, which is >>> my max 3 TB for 12h.
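For reference, the two capacity estimates above can be double-checked with a few lines of arithmetic (decimal units: 1 GB = 1,000 MB, 1 TB = 1,000,000 MB):

```python
# Realistic sustained throughput on a 1 Gbps link after overhead
rate_mb_s = 70

# 15 minutes of traffic at that rate, in GB
cap_15m_gb = rate_mb_s * 15 * 60 / 1000        # 63.0 GB

# 12 hours of traffic at that rate, in TB
cap_12h_tb = rate_mb_s * 12 * 3600 / 1_000_000  # 3.024 TB

print(cap_15m_gb, cap_12h_tb)  # 63.0 3.024
```

Both ceilings are well below the 106.586 GB / 19.454 TB totals the dashboard reports, which is the core of the discrepancy.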

And, each of these lines is for between an internal host (LAN) and an outside host (WAN).

That's where my great confusion is.

Regarding the unit of the fields, I selected "Bytes", as below, without modifying the defaults that appeared when selecting "Bytes", for example:


  • Young


Since I posted my messages, I've wiped out everything (all the Elastic Stack packages, Packetbeat, and the data directory), reinstalled Elastic Stack 6.1, and started everything fresh.

Kibana still shows nonsensical numbers, like before.

That is, Kibana shows Packetbeat network traffic byte counts that far exceed our maximum WAN throughput (1 Gbps) to the outside world.

Anyone having similar problems?


  • Young


Actually, in another thread (below) where I had posted and gotten responses, applying the "final: true" filter was suggested, and it makes the data numbers show correctly for me now.

I had tried it at the time and thought it was showing wrong numbers in a different way; I think I did something wrong back then.

Turns out, it's the key to my issue.
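To illustrate why the filter matters (my understanding, which may be imprecise: Packetbeat emits periodic reports for a long-lived flow with cumulative byte counters, and only the event marked `final: true` represents the flow's complete total exactly once, so summing every event counts the same bytes repeatedly):

```python
# Hypothetical flow events for a single flow that really sent 400 bytes.
# Periodic reports carry cumulative counters; the final report is the total.
events = [
    {"bytes_out": 100, "final": False},  # report at t=10s (cumulative)
    {"bytes_out": 250, "final": False},  # report at t=20s (cumulative)
    {"bytes_out": 400, "final": True},   # final report: the flow's true total
]

# Summing every event triple-counts the early bytes
naive_total = sum(e["bytes_out"] for e in events)                # 750, inflated
# Keeping only final: true events gives the real total
final_total = sum(e["bytes_out"] for e in events if e["final"])  # 400, correct

print(naive_total, final_total)
```

In Kibana this corresponds to adding a filter `final: true` on the Flows dashboard so only completed-flow totals are summed.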

Thank you very much for all of your responses!

  • Young

(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.