Complete Example - Adding Private IPs for Internal Networks


(Eric D Barnes) #1

I recognize there are a lot of posts on here about this topic, but in my opinion they are only partially complete. In general they lack (or I feel they lack) context around the full environment, or the rest of the config file with explanations. Please stay with me. I'm hoping we can discuss, start to finish, a solution to this use case:

  1. Filebeat is running on a Redhat 7 server/workstation with private addressing.
  2. Logstash is brand new and only passing data to Elasticsearch using the base Beats input method.
  3. The ingest pipeline on Elasticsearch is all set up to use GeoIP and works for public addresses going beat -> ingest node or beat -> Logstash -> Elasticsearch.
  4. We now want to enrich that data by adding GeoIP info for private addresses that are geographically separated, plus a tag saying which network segment the traffic is coming from.
  5. Example networks to build on: 10.0.0.0/24 - San Diego, CA (Management Network), 10.0.1.0/24 - Los Angeles, CA (Marketing), 10.1.0.0/24 - Houston, TX (DR Site)
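
For reference, one common way to handle this in Logstash is the `cidr` filter: GeoIP databases cannot resolve RFC 1918 space, so you match each private range explicitly and attach the site tag and coordinates yourself. Below is a minimal sketch for the three example networks. The field name `[source_ip]` is a placeholder for whatever field your beat actually ships the address in, the coordinates are approximate city centers, and mapping `[geoip][location]` as a `geo_point` still depends on your index template:

```
filter {
  # GeoIP lookups fail for RFC 1918 space, so match each private
  # range and attach the location and a segment tag ourselves.
  # [source_ip] is a placeholder field name.
  cidr {
    address => [ "%{[source_ip]}" ]
    network => [ "10.0.0.0/24" ]
    add_tag => [ "management_network" ]
    add_field => {
      "[geoip][city_name]"     => "San Diego"
      "[geoip][location][lat]" => "32.7157"
      "[geoip][location][lon]" => "-117.1611"
    }
  }
  cidr {
    address => [ "%{[source_ip]}" ]
    network => [ "10.0.1.0/24" ]
    add_tag => [ "marketing" ]
    add_field => {
      "[geoip][city_name]"     => "Los Angeles"
      "[geoip][location][lat]" => "34.0522"
      "[geoip][location][lon]" => "-118.2437"
    }
  }
  cidr {
    address => [ "%{[source_ip]}" ]
    network => [ "10.1.0.0/24" ]
    add_tag => [ "dr_site" ]
    add_field => {
      "[geoip][city_name]"     => "Houston"
      "[geoip][location][lat]" => "29.7604"
      "[geoip][location][lon]" => "-95.3698"
    }
  }
  # add_field writes strings; convert so the values can be indexed
  # as numbers (or combined into a geo_point by the template).
  mutate {
    convert => {
      "[geoip][location][lat]" => "float"
      "[geoip][location][lon]" => "float"
    }
  }
}
```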

Suppose we are new to Logstash, new to Beats, or new to Elastic, and we have some basic stuff running. Can we provide a FULL end-to-end configuration, with explanations of HOW we are doing what we are doing? If we can build a full config using the examples and explain how and why it works, then anyone can translate the variables (IP range, location, etc.) to answer what everyone is actually asking, which is: how do we make this work? Or maybe we can say: all things being equal, with a brand-new deployment, this is the most efficient way to do this at scale.


(Eric D Barnes) #2

As a starting reference, here is the base config file in /etc/logstash/conf.d/

This is copied and pasted from the published Logstash 6.4 documentation online.

All Elastic components are running on Red Hat 7.2 with Elastic 6.4.2-1 for all packages (Beats, Logstash, Elasticsearch, Kibana).

The file name is beats.conf (though the name seems arbitrary).
--- Starter config: meant to take Filebeat data and dump it into Elasticsearch using the default Filebeat ingest pipeline and index naming (rather than some Logstash index name) to ensure all the default dashboards continue to work ---
+++++++++++++++++++++++++
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "somehostname.domain.com"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

(Note: as originally pasted, the input block was missing its closing brace before output, which Logstash rejects at startup; it is closed above.)

+++++++++++++++++++++++++
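
One more point on this starter config: as written, the elasticsearch output never invokes Filebeat's ingest pipeline, so module parsing (and the GeoIP processor that depends on it) would not run for events routed through Logstash. The output plugin has a `pipeline` option for this, and any enrichment goes in a filter block between input and output. A sketch of the overall shape follows; the pipeline name here is only illustrative of the module naming convention (filebeat-<version>-<module>-<fileset>-pipeline) and must match a pipeline your `filebeat setup` actually loaded:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Enrichment goes here, e.g. cidr filters that tag private
  # ranges with a site name and coordinates.
}

output {
  elasticsearch {
    hosts => "somehostname.domain.com"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # Hand the event to Filebeat's ingest pipeline so module
    # parsing still happens. This name is illustrative only and
    # must match a pipeline loaded into your cluster.
    pipeline => "filebeat-6.4.2-system-syslog-pipeline"
  }
}
```

Because Logstash filters run before the event reaches Elasticsearch, the cidr tags and fields survive the ingest pipeline untouched.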


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.