Help/Advice needed setting up geo-ip filters in an on-prem Logstash to SIEM in Elastic Cloud instance

Hi @ronmer

Thanks for the detailed explanation and questions.

Quick Recap:

  • So you want architecture B; cool, that is very common:
    Packetbeat -> Logstash -> Elasticsearch in Elastic Cloud

  • With this architecture you do not need to do the GeoIP lookup in Logstash if you are using Packetbeat (I will show you below).

  • With Logstash on prem you can have the exact same experience as you had on cloud by following the simple instructions below; it is very, very little work.

  • Good that you understand the Private IP caveat.

  • You do not need to enable the GeoIP processor on Elastic Cloud; it is loaded / enabled by default.
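
The GeoIP processor is there by default, but the geoip-info ingest pipeline referenced in the config below still has to be created once in the cluster. The Beats docs ("Enrich events with geoIP information") define it roughly like this (run in Kibana Dev Tools; the field list is the one from the docs, trim it to the fields you actually care about):

```
PUT _ingest/pipeline/geoip-info
{
  "description": "Add geoip info",
  "processors": [
    { "geoip": { "field": "client.ip",      "target_field": "client.geo",      "ignore_missing": true } },
    { "geoip": { "field": "source.ip",      "target_field": "source.geo",      "ignore_missing": true } },
    { "geoip": { "field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true } },
    { "geoip": { "field": "server.ip",      "target_field": "server.geo",      "ignore_missing": true } },
    { "geoip": { "field": "host.ip",        "target_field": "host.geo",        "ignore_missing": true } }
  ]
}
```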

Solution:
So actually there is very little you need to do to make this all work: we will use Logstash as a passthrough and let the Packetbeat module do its work. ECS formatting, templates, GeoIP, index lifecycle management: it will all be taken care of.

Here is what I recommend; try to resist the urge to make this more complex than it needs to be.

  1. On a single host, perform Steps 1 - 5 on the Packetbeat Quick Start page for Elasticsearch Service.

    This will set up Packetbeat and all the associated assets in Elasticsearch and Kibana.
    Note: setup only needs to run once, whether you are deploying to 1 host or 1000 hosts; it just loads all the needed artifacts. And if you already did all this and you still have the cluster, you don't even need to do it again.

  2. Now in packetbeat.yml, comment out cloud.id: and cloud.auth:, comment out the output.elasticsearch: section, and configure the output section of Packetbeat to point to Logstash. Now Packetbeat is pointed at your on-prem Logstash.

EDIT : CORRECTED WITH CORRECT PIPELINE SEE BELOW.

    output.logstash:
      # The Logstash hosts
      hosts: ["localhost:5044"]
      pipeline: geoip-info
      ...
  3. Set up Logstash. Below is the logstash-beats-es.conf that will support all the Beats functionality. Logstash simply acts as a passthrough, so the Packetbeat functionality will magically get passed through.

  4. Start Logstash, then start Packetbeat and take a look: data should start to flow exactly as it did when Packetbeat pointed to Elastic Cloud directly.

  5. Deploy Packetbeat on the other hosts and configure them to point at this Logstash.
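
To sanity-check each hop along the way (the endpoint, port, and credentials below are placeholders, substitute your own):

```
# From a Packetbeat host: verify Packetbeat can reach the configured Logstash output
packetbeat test output

# From anywhere with access to the cloud cluster: confirm the packetbeat-* indices
# exist and are growing (use your own user/password and Elasticsearch endpoint)
curl -u elastic:password "https://<your-es-endpoint>:9243/_cat/indices/packetbeat-*?v"
```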

Logstash config for Beats pass-through:

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
  # Events that carry an ingest pipeline name in their metadata are
  # indexed through that pipeline; everything else is indexed as-is.
  if [@metadata][pipeline] {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"

      # Beats loaded its own index templates during setup
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}
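
Before starting Logstash with this file, you can have it just validate the config and exit (the bin/ path assumes an archive install; package installs usually live under /usr/share/logstash):

```
bin/logstash -f logstash-beats-es.conf --config.test_and_exit
```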