Configuring Packetbeat with Logstash and GeoIP

First, hello all! I'm a bit new to the Elastic Stack as a whole, but I'm learning quickly thanks to the great documentation available. One piece of documentation I've found lacking is how exactly to configure Packetbeat to run with Logstash together with the GeoIP filter capabilities.

I've configured Packetbeat to output to Logstash, and I can see the Packetbeat data in Kibana. However, I get "Error in visualisation [esaggs] > "field" is a required parameter" errors in Kibana when I look at the [Packetbeat] Overview ECS dashboard, or most visualizations for that matter. I initially thought this was a templating problem, so I turned off Packetbeat's automatic Elasticsearch template loading and instead load packetbeat.template.json from Logstash. However, the error remains with either method.

I was wondering if there's anything I'm doing wrong, or if I'm just missing something. I don't think it should be this hard to configure Packetbeat to work with Logstash, as that seems like a pretty common use case.

Here is my packetbeat.yml:

#============================== Network device ================================
packetbeat.interfaces.device: any
# Currently uses pcap (libpcap) but we can change to afpacket. 
# packetbeat.interfaces.type: af_packet

# Not sure if we need this with docker?
packetbeat.interfaces.auto_promisc_mode: true

#================================== Flows =====================================
packetbeat.flows:
  timeout: 30s
  period: 10s

#========================== Transaction protocols =============================
packetbeat.protocols:
- type: icmp

- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8080, 8000, 5000, 8002]
  send_headers: true
  send_all_headers: true

- type: tls
  ports: [443]
  send_certificates: false

#=========================== Monitored processes ==============================
packetbeat.procs.enabled: true

#================================ Processors ===================================
processors:
# DNS reverse lookup: resolve hostnames for source and destination IPs.
- dns:
    type: reverse
    fields:
      source.ip: source.hostname
      destination.ip: destination.hostname

# Host metadata, like the (user-defined) location of our host.
- add_host_metadata:
    netinfo.enabled: true
    geo:
      location: -82.8628, -74.0060
      continent_name: Antarctica
      country_iso_code: AQ
      city_name: Anchorage
      name: myLocation

# The following processors enrich each event with Docker, cloud-provider, and locale metadata.
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- add_cloud_metadata: ~
- add_locale: ~

#============================= Logstash Output =================================
output.logstash:
  hosts: ["${LOGSTASH_HOST}"]
  # Note: the Logstash output does not take username/password options;
  # authentication to Elasticsearch happens in the Logstash pipeline.

#========================== Elasticsearch Output ===============================
#output.elasticsearch:
#  hosts: ["${ELASTICSEARCH_HOST}"]
#  username: ${ES_USERNAME}
#  password: ${ES_PASSWORD}

# ================================== Template ==================================
# Our output is logstash, not elasticsearch so we need to load the templates
# in logstash.
setup.template.enabled: false
setup.template.overwrite: false

#============================== Dashboards =====================================
setup.dashboards:
  enabled: true
  retry:
    enabled: true
    maximum: 0 # unlimited

#============================== Kibana =========================================
setup.kibana:
  host: "${KIBANA_HOST}"
  username: ${ES_USERNAME}
  password: ${ES_PASSWORD}

#============================== Xpack Monitoring ===============================
monitoring.enabled: false

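For completeness, since my output is Logstash, I believe the index template and Kibana dashboards can also be loaded with a one-off setup run that temporarily points at Elasticsearch (the host below is a placeholder for my environment):

```shell
# One-off setup run: load the index template and Kibana dashboards directly
# into Elasticsearch/Kibana, bypassing the Logstash output for this command only.
packetbeat setup --index-management --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["http://localhost:9200"]'
```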
And here is my Logstash pipeline config:

input {
  beats {
    port => "${BEATS_PORT:5044}"
  }
}

filter {
  geoip {
    id => "geoip_client"
    source => "[client][ip]"
    target => "[client][geo]"
    database => "/usr/share/logstash/config/geoipdbs/GeoLite2-City.mmdb"
  }
  geoip {
    id => "geoip_source"
    source => "[source][ip]"
    target => "[source][geo]"
    database => "/usr/share/logstash/config/geoipdbs/GeoLite2-City.mmdb"
  }
  geoip {
    id => "geoip_destination"
    source => "[destination][ip]"
    target => "[destination][geo]"
    database => "/usr/share/logstash/config/geoipdbs/GeoLite2-City.mmdb"
  }
  geoip {
    id => "geoip_server"
    source => "[server][ip]"
    target => "[server][geo]"
    database => "/usr/share/logstash/config/geoipdbs/GeoLite2-City.mmdb"
  }
  geoip {
    id => "geoip_host"
    source => "[host][ip]"
    target => "[host][geo]"
    database => "/usr/share/logstash/config/geoipdbs/GeoLite2-City.mmdb"
  }
}

output {
  elasticsearch {
    hosts => ["${ES_HOST}"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    user => "${ES_USER}"
    password => "${ES_PASSWORD}"
    template => "/usr/share/logstash/config/templates/packetbeat.template.json"
    template_name => "packetbeat-7.8.0"
    template_overwrite => true
  }
}

I am running all of this in Docker, in case that helps. I have had some success with the GeoIP location data for destination, but that seems to be it.

What am I missing? Why am I still getting Kibana Visualization errors?

Other Kibana errors I get include "Could not locate that index-pattern-field (id: " and "Saved "field" parameter is now invalid. Please select a new field." across almost all the visualizations.
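
I assume I can at least verify whether the geo fields made it into the index mapping as geo_point. In the Kibana Dev Tools console (using the index name from my output config above), something like:

```
GET packetbeat-7.8.0/_mapping/field/destination.geo.location
GET packetbeat-7.8.0/_mapping/field/source.geo.location
```

If these come back empty, or the fields are not mapped as geo_point, I'd expect the map visualizations to fail.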

This is expected, because that field is needed by the [Packetbeat] Overview ECS dashboard.
To get the GeoIP data from Packetbeat, just follow this:

So you don't need Logstash anymore, and the setup becomes simpler.
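
The basic idea is an Elasticsearch GeoIP ingest pipeline (the pipeline name geoip-info below is just an example). In Kibana Dev Tools:

```
PUT _ingest/pipeline/geoip-info
{
  "description": "Add GeoIP info to ECS fields",
  "processors": [
    { "geoip": { "field": "source.ip",      "target_field": "source.geo",      "ignore_missing": true } },
    { "geoip": { "field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true } },
    { "geoip": { "field": "client.ip",      "target_field": "client.geo",      "ignore_missing": true } },
    { "geoip": { "field": "server.ip",      "target_field": "server.geo",      "ignore_missing": true } }
  ]
}
```

Then point Packetbeat at it in packetbeat.yml:

```
output.elasticsearch:
  pipeline: geoip-info
```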

Fadjar Tandabawana

Thank you for your reply, Fadjar. I understand that the field is needed for this visualization, but my question is why it isn't appearing in the index in the first place. Is this a template error or something else? I also understand that I could use the Elasticsearch GeoIP module; however, my use case eventually involves a good deal of data manipulation that requires Logstash (it has some plugins that will be necessary, and I'm integrating other Beats), and I want to deploy automatically, which I'm not sure how to do with the Elasticsearch approach since it requires manually creating that ingest pipeline.