I'm part of an organization that is considering using Packetbeat. We've tested it in our development environment and found it works great! We have run into one problem, though: our network architects are transitioning our infrastructure to a topology with no static points where we could capture data and ensure that all traffic is being sniffed.
One idea we had was to install Packetbeat on each box in production and have it send the data to our ELK cluster. This seems like a rather ungraceful approach, so I was wondering if anyone had recommendations or advice for this situation.
In general we encourage people to install Packetbeat as an agent on the active servers, so this is not necessarily a problem.
There's one gotcha that you should be aware of: if you install Packetbeat on every box in production, you might end up with duplicated transactions in Elasticsearch. For example, say you have two servers A and B with the flow A->B, and you install Packetbeat on both. You will then see each transaction between A and B twice: once as seen by server A and once as seen by server B. You can use the ignore_outgoing configuration option to remove the duplicates.
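As a sketch, the relevant setting looks like the snippet below. Note that the exact location of the option depends on your Packetbeat version (older releases used a top-level `ignore_outgoing`, newer ones nest it under the `packetbeat` namespace), so check the configuration reference for the version you deploy:

```yaml
# packetbeat.yml (fragment)
# Drop transactions whose initiating packet was sent by this host,
# so each A->B transaction is reported only once (by the receiving side).
packetbeat.ignore_outgoing: true
```

With this enabled on both A and B, the A->B transaction is indexed only by B, which sees it as incoming traffic.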
So it is best to choose a set of servers that together see all the relevant traffic, and only install Packetbeat on those. These are typically the application servers or the middle tier.
Let us know if you have any concerns.