Ability to configure Beats, Logstash, Kibana, and ES to output their logs to either LS or ES

With the Elastic Stack, I have found myself in the same situation when designing processing pipelines and indices for new data sources.

Case 1: New Data Development

  1. Set up an LS instance with an initial set of configs
  2. Set up a beat and point it at the LS instance (see the config sketch after this list)
  3. Set up an index template for the new index in ES (initially with dynamic mapping enabled).
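
For step 2, a minimal sketch of what that beat config typically looks like, assuming Filebeat with 5.x-style config keys; the log path and LS host are hypothetical placeholders:

    # filebeat.yml -- minimal dev setup shipping a new data source to LS
    filebeat.prospectors:
      - input_type: log
        paths:
          - /var/log/myapp/*.log             # hypothetical location of the new data source

    output.logstash:
      hosts: ["logstash.example.org:5044"]   # hypothetical LS host; 5044 is the usual beats port

While iterating on this, the only place to see whether the beat itself is unhappy is its local log file, which is exactly the gap this request is about.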

Once this setup is in place, I iterate on the different pieces.
As I do so, I typically tail the logs from each piece of the stack to make sure I am not missing any errors (roughly as sketched below).
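
Concretely, that tailing looks something like this; the paths are typical package-install defaults and vary by version and OS, so treat them as illustrative:

    # typical log locations on a package install (paths vary by version/OS)
    tail -f /var/log/logstash/logstash-plain.log      # LS 5.x plain-text log
    tail -f /var/log/elasticsearch/elasticsearch.log  # ES log (file name follows the cluster name)
    tail -f /var/log/filebeat/filebeat                # the beat's own log when logging.to_files is on
    # Kibana logs to stdout unless logging.dest points it at a file

None of these log streams end up in ES unless I build a separate shipping pipeline just for them.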

Case 2: Deploying Beats to 100+ machines

Assume:

  • Full stack is in place and operational.
  • All beats are shipping to LS.

Workflow

  1. For some reason, I need to make a configuration change to various beats which are deployed across 100+ machines.
  2. I may have to make a corresponding configuration change to LS as well (see the pipeline sketch after this list).
  3. I make the changes and restart the changed components.
  4. Watch, wait, and audit to make sure the changes worked and the new shippers are shipping.
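
For step 2, the LS side of such a change is usually a pipeline along these lines; a minimal sketch using the stock beats input and elasticsearch output plugins, with hypothetical hosts, index name, and filter:

    # pipeline.conf -- receive from beats, parse, ship to ES
    input {
      beats {
        port => 5044
      }
    }

    filter {
      # hypothetical parsing for the data source being changed
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }

    output {
      elasticsearch {
        hosts => ["es.example.org:9200"]  # hypothetical ES host
        index => "myapp-%{+YYYY.MM.dd}"   # hypothetical index name
      }
    }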

The auditing is a combination of manual queries through Sense, home-brewed auditing services, and monitoring Kibana dashboards (an example Sense query is sketched below).
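
For reference, the Sense part of that audit is usually a query along these lines: a sketch that counts events per shipper over the last 15 minutes, assuming the default filebeat-* index pattern and the beat.hostname field from the beats schema:

    GET filebeat-*/_search
    {
      "size": 0,
      "query": {
        "range": { "@timestamp": { "gte": "now-15m" } }
      },
      "aggs": {
        "shippers": {
          "terms": { "field": "beat.hostname", "size": 500 }
        }
      }
    }

Any host missing from that aggregation then has to be checked by hand on the machine itself.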

The problem is, we are talking about HUNDREDS of shippers.

It would be great if these components' own logs could also be part of the auditing in these cases.
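
For context on why the logs are not already part of it: today a beat can only write its own log to local files or syslog, roughly the options sketched below (keys as I recall them from the 5.x libbeat logging config, so treat this as illustrative); there is no supported way to point that log stream at LS or ES directly, and the same goes for the other components.

    # filebeat.yml -- a beat's own logging currently goes to local files or syslog only
    logging.level: info
    logging.to_syslog: false
    logging.to_files: true
    logging.files:
      path: /var/log/filebeat
      name: filebeat
      keepfiles: 7

Being able to swap that for an LS or ES output, the same way the event pipeline already can, is essentially what is being asked for here.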