Could not locate that index-pattern-field (id: source.geo.location)

I'm using Filebeat+Kibana+Elasticsearch 7.0.0.
I have one filebeat agent and one elasticsearch node.
I've activated the nginx module in Filebeat and successfully added a geoip pipeline in Elasticsearch. When I open a random Nginx access log entry, I see populated values in the source.geo fields.

However, when I try to visualize the default Kibana dashboards, such as [Filebeat Nginx] Overview ECS, I get stuck with messages like Could not locate that index-pattern-field (id: source.geo.location) and Saved "field" parameter is now invalid. Please select a new field.

I've tried running filebeat setup -e on the following filebeat.yaml:

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true

    filebeat.modules:
      - module: nginx

    filebeat.autodiscover:
      providers:
        - type: docker
          hints.enabled: true
          default.disable: true

    output.elasticsearch:
      hosts: 'elasticsearch:9200'
      pipeline: geoip-info

    setup.template.overwrite: true

    setup.kibana:
      host: "kibana:5601"

The geoip-info pipeline was PUT into Elasticsearch via the Kibana console:

    PUT _ingest/pipeline/geoip-info
    {
      "description": "Add geoip info",
      "processors": [
        {
          "geoip": {
            "field": "source.ip",
            "target_field": "source.geo",
            "ignore_missing": true
          }
        }
      ]
    }
The field source.geo.location exists in fields.yml, and as stated earlier, the geo information in log entries is correctly resolved.

Please advise on how I can get the maps to show up correctly in Kibana with IPs plotted.

PS! If I ignore the added pipeline in filebeat.yaml and rely on the geoip processor in the default nginx access ingest pipeline, it behaves exactly the same way: still not working in Kibana.

@thomasneirynck - can you please shed more light here?


hi @remimikalsen

this might be more of a filebeat question which is not my area of expertise.

but can you verify two things:

  • Does the mapping of your index have a field of type geo_point with the name source.geo.location?
  • Check an example document in Discover. Does it correctly show an object at that field with lat and lon properties?

If that is the case, Kibana should be able to create a map (e.g. a Coordinate Map or a map in the Maps app).
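As a shortcut for the first check, the field-level mapping API returns just that one field (assuming the default filebeat-* index pattern):

```
GET filebeat-*/_mapping/field/source.geo.location
```

If the field is mapped correctly, the response will show "type" : "geo_point" for source.geo.location.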

Thank you for the follow-up @thomasneirynck!

So answering your two questions:

  1. When I run "GET _mapping" I find the following definition of location under source.geo:

         "location" : {
           "properties" : {
             "lat" : {
               "type" : "float"
             },
             "lon" : {
               "type" : "float"
             }
           }
         }
Thus, no geo_point type; instead my location field is mapped as an object with two float sub-fields, lat and lon, which matches what I see in random nginx log entries.

  2. This is a JSON example of my source info from Discover:

     "source": {
       "geo": {
         "continent_name": "North America",
         "region_iso_code": "US-TX",
         "city_name": "Dallas",
         "country_iso_code": "US",
         "region_name": "Texas",
         "location": {
           "lon": -96.8028,
           "lat": 32.7791
         }
       },
       "address": "",
       "ip": ""
     }

I assume this means I should re-define my location field to be of type geo_point. How would I do that? I already tried this:

    PUT filebeat-7.0.0
    {
      "mappings": {
        "properties": {
          "location": {
            "type": "geo_point"
          }
        }
      }
    }
This gave me a "resource_already_exists_exception".
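From what I understand, the mapping type of an existing index can't be changed in place, which would explain the exception; the geo_point mapping would have to be present in the index template before the index is created. A sketch of what I think that would look like (the template name here is my own invention, and the field is spelled out as the nested path source.geo.location rather than a top-level location field):

```
PUT _template/filebeat-geo-fix
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "geo": {
            "properties": {
              "location": { "type": "geo_point" }
            }
          }
        }
      }
    }
  }
}
```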

Also, what confuses me is that the location field is defined the following way in my /usr/share/filebeat/fields.yml file when running a container from the official Docker image:

- name: geo.location
  level: core
  type: geo_point
  description: Longitude and latitude.
  example: '{ "lon": -73.614830, "lat": 45.505918 }'
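I also checked whether the Filebeat index template actually made it into Elasticsearch (the template name below assumes the Filebeat 7.0.0 default):

```
GET _template/filebeat-7.0.0
```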

I would really appreciate some more input here!

Hi, I just made this work. I don't know how or why, but somehow my index templates were off. For future reference, I fixed it by:

  1. Stopping filebeat (and for others, anything that might be causing writes to the Elasticsearch index)
  2. Deleting all Filebeat templates and indices in Elasticsearch.
  3. Restarting Kibana for good measure, as Kibana seems to dislike that I delete all Elasticsearch data.
  4. Still with my filebeat container shut down, I ran the following one-off commands:
  • docker-compose run filebeat setup --template
  • docker-compose run filebeat setup -e

Then, after starting up filebeat, the maps and dashboards in Kibana worked as expected.

No further arguments were needed as my docker-compose.yml mounts a valid filebeat.yaml config which already contains my kibana and elasticsearch targets.
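For reference, step 2 boiled down to something like this in the Kibana console (the index pattern and template name assume the Filebeat 7.0.0 defaults):

```
DELETE filebeat-*
DELETE _template/filebeat-7.0.0
```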
