GeoIP enrichment not working

I am trying to set up geolocation based on IP. I am connecting Packetbeat to Elastic Cloud directly.
I have followed the instructions from:
https://www.elastic.co/guide/en/beats/packetbeat/master/packetbeat-geoip.html
But I cannot get it to populate my overview dashboard.

I have created a geoip pipeline with the console in Kibana, and I have added the geoip-info pipeline to my configuration file. But still nothing: I cannot see the client.geo.location and host.geo.location fields.

Am I missing something? Can somebody please advise?

Please do not use the master branch; it documents unreleased software, so things may not apply.
Use either current or the version you are running.

It'd help if you shared your configuration.


Did you run setup before ingesting data?

packetbeat setup -e

Are the IPs you are sending public IPs? Internal/private IPs will not work; only public IPs will result in GeoIP enrichment.

Assuming you used the instructions on the page you noted, you can test like this.

Use your own index name, but keep the rest exactly the same:

POST packetbeat-7.12.1-2021.05.27-000001/_doc/myspecialid1234?pipeline=geoip-info
{
  "client": {
    "ip": "8.8.8.8"
  },
  "destination": {
    "ip": "8.8.8.8"
  },
  "host": {
    "ip": "8.8.8.8"
  }
}

GET packetbeat-7.12.1-2021.05.27-000001/_doc/myspecialid1234

Result:

# GET packetbeat-7.12.1-2021.05.27-000001/_doc/myspecialid1234
{
  "_index" : "packetbeat-7.12.1-2021.05.27-000001",
  "_type" : "_doc",
  "_id" : "myspecialid1234",
  "_version" : 1,
  "_seq_no" : 1411,
  "_primary_term" : 4,
  "found" : true,
  "_source" : {
    "destination" : {
      "geo" : {
        "continent_name" : "North America",
        "country_iso_code" : "US",
        "country_name" : "United States",
        "location" : {
          "lon" : -97.822,
          "lat" : 37.751
        }
      },
      "ip" : "8.8.8.8"
    },
    "host" : {
      "geo" : {
        "continent_name" : "North America",
        "country_iso_code" : "US",
        "country_name" : "United States",
        "location" : {
          "lon" : -97.822,
          "lat" : 37.751
        }
      },
      "ip" : "8.8.8.8"
    },
    "client" : {
      "geo" : {
        "continent_name" : "North America",
        "country_iso_code" : "US",
        "country_name" : "United States",
        "location" : {
          "lon" : -97.822,
          "lat" : 37.751
        }
      },
      "ip" : "8.8.8.8"
    }
  }
}

then clean up...

DELETE packetbeat-7.12.1-2021.05.27-000001/_doc/myspecialid1234

If that does not work, then you don't have the geoip-info pipeline set up correctly.

And as @warkolm strongly suggested, make sure you look at the version of the documentation that matches the version of the stack / Beats you are using.


Thank you @warkolm, I will stick to your suggestion.

Hi Stephen. I have tried the POST and GET requests according to your suggestions, and I get the same results you posted. This means my geoip-info pipeline is set up correctly.

I have also tried to skip Logstash and set up the beat to communicate directly with ES Cloud, so I can test the connection (maybe I am setting something up wrong with the pipelines; these concepts are still confusing for me).

Should all public connections (outbound and inbound) appear on the map, as long as they are public IPs?

Also, I have just noticed that after running the suggested requests in Dev Tools I am getting these errors:
(screenshots attached)

PS: would it be easier to do this with a filter in Logstash?

This usually means that the Packetbeat index template / mapping is incorrect.

Clean out all of your Packetbeat indices.

Run setup and then try again.

packetbeat setup -e

What versions are you running? Beats, Elasticsearch, Kibana: are they all on the same version?

Ok, so I deleted all Packetbeat indices with DELETE packetbeat-*,
stopped the service, ran setup again, and restarted the Packetbeat service.
After this I went to the Packetbeat dashboard and was not getting the error, but I was also not able to see anything on the map. So I ran the suggested POST and GET requests, and now I am getting errors again.

packetbeat: 7.13.1, ES and Kibana v7.13.0

What errors, and from where, @farciarz121? :slight_smile:
You need to provide the errors if you want help.

Wrong order, in my opinion:
Stop the service
Then delete the index
Then run setup
Then start again

If you delete before stopping, some documents were probably written in between.

Sure, you can do this in Logstash... if you want.

The error is the one I sent you a screenshot of a few posts up.
I did it again, in the right order this time :wink:
I ran a couple of HTTPS requests and ICMP requests to 8.8.8.8, but still nothing on the dashboard map. I have no clue what I am doing wrong.

Ok, I think I have something we can use as a clue to troubleshoot this.
According to the article I mentioned in my first post:
"If the lookups succeed, the events are enriched with geo_point fields, such as client.geo.location and host.geo.location, that you can use to populate visualizations in Kibana."

I see the traffic coming in in Discover, but there are no geo fields.

PS: in my yml file for Packetbeat I have an entry for
pipeline: geoip-info
and I can see it in Kibana under Stack Management.

(screenshot attached)

I don't know.

Are you collecting from the correct interfaces?
Is Packetbeat running on more than one host?
Are you stopping them all before starting? Those errors mean the mapping is wrong again.
Have you just used Discover and looked at some of the documents?
What happens when you look at the hosts or network under Security Analytics?
You can start Packetbeat manually like this:

packetbeat -e -d "*"

You will see what is published from Packetbeat.

Here is another thread I wrote on this; I followed my own instructions and I get a map.
I did notice the network map under Security Analytics is looking for a very specific field.

  1. There is only one device shown by packetbeat.exe devices, so I have set interfaces.device: 0
  2. Packetbeat is running on only one host
  3. Yes
  4. Not sure what you mean here
  5. Security Analytics shows the host I am running Packetbeat from.

I will look into your post; maybe this will help me discover what is wrong.

Meanwhile, what filter would I need to add to Logstash in order to make it work?

Go to Kibana -> Discover
Index pattern:
packetbeat-*
Look at the actual documents.

Or simply, in Dev Tools:

GET packetbeat-7.13.0-0000001/_search

For Logstash, see here ... I am not sure that it is going to be easier, but give it a try.
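For reference, a minimal Logstash sketch of the geoip filter approach — purely illustrative, and assuming ECS-style Packetbeat events — would look something like this:

```
filter {
  # Illustrative sketch: resolve public IPs to geo data with the geoip filter.
  # Private/internal IPs are not in the GeoIP database; failed lookups get a
  # _geoip_lookup_failure tag instead of a geo field.
  geoip {
    source => "[source][ip]"
    target => "[source][geo]"
  }
  geoip {
    source => "[destination][ip]"
    target => "[destination][geo]"
  }
}
```

Repeat the same pattern for any other IP fields (client.ip, server.ip, host.ip) you want enriched.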

There is something basic going on... For me, the first time I tried, I was only getting internal IPs...

Good Luck...

returns

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [packetbeat-7.13.0-0000001]",
        "resource.type" : "index_or_alias",
        "resource.id" : "packetbeat-7.13.0-0000001",
        "index_uuid" : "_na_",
        "index" : "packetbeat-7.13.0-0000001"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [packetbeat-7.13.0-0000001]",
    "resource.type" : "index_or_alias",
    "resource.id" : "packetbeat-7.13.0-0000001",
    "index_uuid" : "_na_",
    "index" : "packetbeat-7.13.0-0000001"
  },
  "status" : 404
}

GET _cat/indices/packetbeat-*/?v

returns

green  open   packetbeat-7.13.1-2021.06.07-000001 LnBeJZ_AR_qxub7owG8z8w   1   1       7396            0      8.8mb          4.5mb

I have also noticed that under Stack Management the Logstash Pipelines section has only one main pipeline, while Ingest Node Pipelines has geoip-info. Is this normal?

So do the search on your own index... I don't know all your index names, so you need to substitute yours:

GET packetbeat-7.13.1-2021.06.07-000001/_search

It is easier to go to
Kibana -> Discover

You really need to learn some basics, like looking at documents; that is the only way to debug.

I do get how to look at documents in Discover in Kibana, but sometimes it is just easier to run a one-liner in Dev Tools.

This is why, in one of my posts, I mentioned that I cannot find the geo fields in the documents.

*Please note: I use absolute dates with a wider range because my time zone is 3 hours off.

JSON from one of the documents:

{
  "_index": "packetbeat-7.13.1-2021.06.07-000001",
  "_type": "_doc",
  "_id": "yNIh53kBhVdcPS60bJZa",
  "_version": 1,
  "_score": null,
  "fields": {
    "dns.type": [
      "answer"
    ],
    "dns.answers_count": [
      1
    ],
    "event.category": [
      "network_traffic",
      "network"
    ],
    "dns.question.subdomain": [
      "www"
    ],
    "host.os.name.text": [
      "Windows Server 2016 Standard Evaluation"
    ],
    "server.ip": [
      "192.168.1.10"
    ],
    "dns.answers.data": [
      "212xxx9"
    ],
    "host.hostname": [
      "Logstash-cloud"
    ],
    "type": [
      "dns"
    ],
    "host.mac": [
      "00:0c:29:54:98:3b",
      "00:00:00:00:00:00:00:e0"
    ],
    "dns.answers.type": [
      "A"
    ],
    "host.os.version": [
      "10.0"
    ],
    "dns.flags.authentic_data": [
      false
    ],
    "dns.flags.authoritative": [
      false
    ],
    "dns.additionals_count": [
      0
    ],
    "host.os.name": [
      "Windows Server 2016 Standard Evaluation"
    ],
    "dns.flags.checking_disabled": [
      false
    ],
    "source.ip": [
      "192.168.1.100"
    ],
    "agent.name": [
      "Logstash-cloud"
    ],
    "network.community_id": [
      "1:TqLqkXBsKAPb+XVMkuJLnbWKRFQ="
    ],
    "host.name": [
      "Logstash-cloud"
    ],
    "dns.answers.ttl": [
      83
    ],
    "event.kind": [
      "event"
    ],
    "dns.answers.class": [
      "IN"
    ],
    "host.os.type": [
      "windows"
    ],
    "method": [
      "QUERY"
    ],
    "resource": [
      "www.wp.pl"
    ],
    "query": [
      "class IN, type A, www.wp.pl"
    ],
    "client.ip": [
      "192.168.1.100"
    ],
    "agent.hostname": [
      "Logstash-cloud"
    ],
    "dns.answers.name": [
      "www.wp.pl"
    ],
    "tags": [
      "beats_input_raw_event"
    ],
    "host.architecture": [
      "x86_64"
    ],
    "dns.question.top_level_domain": [
      "pl"
    ],
    "dns.op_code": [
      "QUERY"
    ],
    "source.port": [
      63145
    ],
    "agent.id": [
      "bcc8d9d04f-6de817218886"
    ],
    "dns.flags.recursion_available": [
      true
    ],
    "bytes_out": [
      43
    ],
    "client.port": [
      63145
    ],
    "ecs.version": [
      "1.9.0"
    ],
    "agent.version": [
      "7.13.1"
    ],
    "destination.bytes": [
      43
    ],
    "host.os.family": [
      "windows"
    ],
    "event.start": [
      "2021-06-07T18:38:09.398Z"
    ],
    "dns.question.etld_plus_one": [
      "wp.pl"
    ],
    "dns.resolved_ip": [
      "212.77.98.9"
    ],
    "status": [
      "OK"
    ],
    "dns.question.class": [
      "IN"
    ],
    "server.bytes": [
      43
    ],
    "destination.port": [
      53
    ],
    "bytes_in": [
      27
    ],
    "event.end": [
      "2021-06-07T18:38:09.420Z"
    ],
    "dns.flags.recursion_desired": [
      true
    ],
    "host.os.build": [
      "14393.693"
    ],
    "host.ip": [
      "fe80::34f1:9575:daa9:b8e3",
      "192.168.1.100",
      "fe80::5efe:c0a8:164"
    ],
    "agent.type": [
      "packetbeat"
    ],
    "network.protocol": [
      "dns"
    ],
    "related.ip": [
      "192.168.1.100",
      "192.168.1.10",
      "212.77.98.9"
    ],
    "host.os.kernel": [
      "10.0.14393.693 (rs1_release.161220-1747)"
    ],
    "dns.header_flags": [
      "RD",
      "RA"
    ],
    "@version": [
      "1"
    ],
    "server.port": [
      53
    ],
    "dns.question.registered_domain": [
      "wp.pl"
    ],
    "network.bytes": [
      70
    ],
    "dns.authorities_count": [
      0
    ],
    "network.direction": [
      "egress"
    ],
    "dns.question.name": [
      "www.wp.pl"
    ],
    "host.id": [
      "46f9251b9-8490c1fa9c5b"
    ],
    "network.type": [
      "ipv4"
    ],
    "source.bytes": [
      27
    ],
    "dns.id": [
      "21188"
    ],
    "dns.question.type": [
      "A"
    ],
    "destination.ip": [
      "192.168.1.10"
    ],
    "network.transport": [
      "udp"
    ],
    "event.duration": [
      21774000
    ],
    "dns.flags.truncated_response": [
      false
    ],
    "@timestamp": [
      "2021-06-07T18:38:09.398Z"
    ],
    "host.os.platform": [
      "windows"
    ],
    "client.bytes": [
      27
    ],
    "event.type": [
      "connection",
      "protocol"
    ],
    "agent.ephemeral_id": [
      "907c6e151b649982033"
    ],
    "dns.response_code": [
      "NOERROR"
    ],
    "event.dataset": [
      "dns"
    ]
  },
  "highlight": {
    "dns.question.registered_domain": [
      "@kibana-highlighted-field@wp.pl@/kibana-highlighted-field@"
    ],
    "dns.question.etld_plus_one": [
      "@kibana-highlighted-field@wp.pl@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1623091089398
  ]
}

For example, in that JSON doc the IP fields (e.g. "host.ip") all contain internal IPs, so there is no GeoIP information.

I just set up Packetbeat from scratch... and it is working.

Note: no map for me either; that is because the map is looking for client.geo.location, which I find is often not filled in.

So Edit Dashboard
Edit the Map
Set it to destination.geo.location

Then Save, and voilà!

Gotta Run... Good Luck

I actually tried this already, by creating a new map. No luck...
I will try again. Thank you for trying.

Can you share your packetbeat.yml?

And you did put the pipeline in, right?

PUT _ingest/pipeline/geoip-info
{
  "description": "Add geoip info",
  "processors": [
    {
      "geoip": {
        "field": "client.ip",
        "target_field": "client.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "source.ip",
        "target_field": "source.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "destination.ip",
        "target_field": "destination.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "server.ip",
        "target_field": "server.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "host.ip",
        "target_field": "host.geo",
        "ignore_missing": true
      }
    }
  ]
}
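You can also dry-run the pipeline without indexing anything by using the _simulate API (the document below is a throwaway example; 8.8.8.8 is just a well-known public IP):

```
POST _ingest/pipeline/geoip-info/_simulate
{
  "docs": [
    {
      "_source": {
        "client": { "ip": "8.8.8.8" },
        "source": { "ip": "10.0.0.1" }
      }
    }
  ]
}
```

The response should show client.geo populated, while the private 10.0.0.1 address comes back with no source.geo field, since the geoip processor simply adds nothing when the address is not in the database.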

Yes I did, and I am able to find it in Kibana under Stack Management -> Ingest Node Pipelines.

yml here:

#################### Packetbeat Configuration Example ######################### 

# This file is an example configuration file highlighting only the most common
# options. The packetbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/packetbeat/index.html

# =============================== Network device ===============================

# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: 0


# The network CIDR blocks that are considered "internal" networks for
# the purpose of network perimeter boundary classification. The valid
# values for internal_networks are the same as those that can be used
# with processor network conditions.
#
# For a list of available values see:
# https://www.elastic.co/guide/en/beats/packetbeat/current/defining-processors.html#condition-network
packetbeat.interfaces.internal_networks:
  - private

# =================================== Flows ====================================

# Set `enabled: false` or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

# =========================== Transaction protocols ============================

packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. The default is true.
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  # Configure the ports where to listen for Cassandra traffic. You can disable
  # the Cassandra protocol by commenting out the list of ports.
  ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL
    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch

- type: sip
  # Configure the ports where to listen for SIP traffic. You can disable
  # the SIP protocol by commenting out the list of ports.
  ports: [5060]

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# A list of tags to include in every event. In the default configuration file
# the forwarded tag causes Packetbeat to not add any host fields. If you are
# monitoring a network tap or mirror port then add the forwarded tag.
#tags: [forwarded]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================
#cloud.auth: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
#cloud.id: "security-deployment:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# These settings simplify using Packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  pipeline: geoip-info
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - # Add forwarded to tags when processing data from a network tap or mirror.
    if.contains.tags: forwarded
    then:
      - drop_fields:
          fields: [host]
    else:
      - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
  - detect_mime_type:
      field: http.response.body.content
      target: http.response.mime_type

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Packetbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Packetbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the packetbeat.
#instrumentation:
    # Set to true to enable instrumentation of packetbeat.
    #enabled: false

    # Environment in which packetbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Ahhh, OK, you are sending through Logstash...
Can you send directly to Elasticsearch first, for debugging?
Let's get it working that way first.

  1. Stop Everything
  2. Point packetbeat to elasticsearch, with the pipeline defined
  3. Cleanup all the indices
  4. Start Again
  5. Take a look

Yes, I have tried both ways; I can try again. When I send directly to Elastic, I change the yml in the following way:

  1. comment out the Logstash output
  2. add the pipeline info to the Elastic Cloud section, like this:
# =============================== Elastic Cloud ================================
cloud.auth: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
cloud.id: "security-deployment:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
pipeline: geoip-info

Is this the right way?
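For comparison, the Packetbeat geoip docs place the pipeline option under the Elasticsearch output itself rather than at the top level; a sketch, with the cloud values redacted as in the post above, would be:

```
# =============================== Elastic Cloud ================================
cloud.auth: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
cloud.id: "security-deployment:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# The pipeline setting belongs to the Elasticsearch output section:
output.elasticsearch:
  pipeline: geoip-info
```

With cloud.id set, the output hosts are derived automatically, so only the pipeline line is needed under output.elasticsearch.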