Unable to find source.geo.location filebeat aws module

I would like to convert the following to a geo_point type

           "location": {
              "properties": {
                "lat": {
                  "type": "float"
                },
                "lon": {
                  "type": "float"
                }
              }
            }

The issue that I am running into is that the field source.geo.location is not of type geo_point. I would like to modify the location mapping to look like this

             "location": {
               "type": "geo_point",
               "properties": {
                 "lat": {
                   "type": "float"
                 },
                 "lon": {
                   "type": "float"
                 }
               }
             }

I have reviewed this issue here: Unable to find source.geo.location in index pattern logstash-*, but it doesn't answer my question. I am using the preconfigured Elastic Common Schema from the aws module. This particular issue is related to the VPC flow logs visualization. I have attached a screenshot for extra clarity.


My attempt at fixing the problem

PUT /vpc-7.10.2-2021.03.05/_mapping
{
  "mappings" : {
    "properties": {
      "source.geo.location": {
        "type": "geo_point"
      }
    }
  }
}

This just produced:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Root mapping definition has unsupported parameters:  [mappings : {properties={source.geo.location={type=geo_point}}}]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Root mapping definition has unsupported parameters:  [mappings : {properties={source.geo.location={type=geo_point}}}]"
  },
  "status" : 400
}

This doesn't work either

PUT /vpc-7.10.2-2021.03.05/_mapping
{
  "mappings" : {
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}

This does not help either

PUT /vpc-7.10.2-2021.03.05/_mapping
{
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
}

This one doesn't do it either

PUT /vpc-*/_mapping
{
  "source": {
    "geo":{
      "location":{
        "properties":{
          "type": "geo_point"
        }
      }
    }
  }
}

You were close... :slight_smile:

First, when you use the _mapping endpoint, you must PUT (create) the index first and then apply the mapping.

This syntax both creates the index and applies the mapping; of course, you will end up putting this in a _template at some point.

PUT /vpc-7.10.2-2021.03.05/
{
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "geo": {
            "properties": {
              "location": {
                "type": "geo_point"
              }
            }
          }
        }
      }
    }
  }
}
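For example, the same mapping could later be moved into a legacy (7.x-style) index template so it applies to every new daily index automatically. This is just a sketch; the template name `vpc-geo` is made up:

```
PUT _template/vpc-geo
{
  "index_patterns": ["vpc-*"],
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "geo": {
            "properties": {
              "location": {
                "type": "geo_point"
              }
            }
          }
        }
      }
    }
  }
}
```

With this in place, any newly created index matching `vpc-*` picks up the geo_point mapping without a manual PUT per index.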

Your shorthand will work, but the syntax above is "more correct / descriptive": it is what will show when you do a GET on the index.

PUT /vpc-7.10.2-2021.03.05/
{
  "mappings" : {
    "properties": {
      "source.geo.location": {
        "type": "geo_point"
      }
    }
  }
}

This will not automatically fix your issue; the data still needs to be mapped and present. I am not sure of the underlying issue, but two things to check:

First, is the geo data actually in the source documents?

Second, if the flow logs contain only internal IPs, then there will be no source.geo.location data.
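One way to check whether any documents actually carry the field is an exists query (a sketch; adjust the index pattern as needed):

```
GET vpc-*/_search
{
  "size": 1,
  "query": {
    "exists": {
      "field": "source.geo.location"
    }
  }
}
```

If `hits.total.value` comes back as 0, no document has the field, and the mapping is not the (only) problem.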

Thank you for the response. I am still facing an issue updating the index. Will this request create a new mapping? I only ask because the index already exists. This is the error that I receive

{
  "error" : {
    "root_cause" : [
      {
        "type" : "resource_already_exists_exception",
        "reason" : "index [vpc-7.10.2-2021.03.05/638qM2S5ToqEuEX0PGCCLA] already exists",
        "index_uuid" : "638qM2S5ToqEuEX0PGCCLA",
        "index" : "vpc-7.10.2-2021.03.05"
      }
    ],
    "type" : "resource_already_exists_exception",
    "reason" : "index [vpc-7.10.2-2021.03.05/638qM2S5ToqEuEX0PGCCLA] already exists",
    "index_uuid" : "638qM2S5ToqEuEX0PGCCLA",
    "index" : "vpc-7.10.2-2021.03.05"
  },
  "status" : 400
}

You cannot update / change the mapping of existing fields in an existing index.

You can only add new fields, or create the mapping in a new index and reindex the data into it.

Or fix the template and your ingest so the data lands in the correct index with the correct mapping in the first place.

There is no way to just fix / change that mapping and those fields in an existing index / documents.
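A sketch of the new-index-plus-reindex route (the `-fixed` destination name is made up; create it with the corrected mapping first, then copy the data across):

```
PUT /vpc-7.10.2-2021.03.05-fixed
{
  "mappings": {
    "properties": {
      "source": {
        "properties": {
          "geo": {
            "properties": {
              "location": { "type": "geo_point" }
            }
          }
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "vpc-7.10.2-2021.03.05" },
  "dest": { "index": "vpc-7.10.2-2021.03.05-fixed" }
}
```

After the reindex completes you can delete the old index and, if needed, point an alias with the old name at the new one.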


@EvanGertis

Just a little detail / subtlety on that error

Because you tried the all in one syntax

PUT /vpc-7.10.2-2021.03.05/
{
  "mappings" : {

You are actually trying to create the index with the same name... that is the error above.

"type" : "resource_already_exists_exception",

If you used the explicit mapping syntax

PUT /vpc-7.10.2-2021.03.05/_mapping
{
    "properties": {
      "source": {
        "properties": {
...

You would get a mapping exception like:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "mapper [source.geo.location] cannot be changed from type [float] to [geo_point]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "mapper [source.geo.location] cannot be changed from type [float] to [geo_point]"
  },
  "status" : 400
}
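Before attempting any change, you can confirm the field's current type with the field mapping API:

```
GET /vpc-7.10.2-2021.03.05/_mapping/field/source.geo.location
```

This returns just the mapping for that one field, which makes it easy to see whether it is currently `float` properties or a real `geo_point`.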

@stephenb is there an issue with using multiple indices for a filebeat configuration like

indices:
  - index: "cloudtrail-%{[agent.version]}-%{+yyyy.MM.dd}"
    when.contains:
      event.dataset: "aws.cloudtrail"
  - index: "elb-%{[agent.version]}-%{+yyyy.MM.dd}"
    when.contains:
      event.dataset: "aws.elb"
  - index: "vpc-%{[agent.version]}-%{+yyyy.MM.dd}"
    when.contains:
      event.dataset: "aws.vpc"

Even after I add the field, I'm still running into issues with missing fields for elb-* and vpc-*. I thought that the fields would be generated through ECS by default?

Hi @EvanGertis

Apologies... I think I have lost track of the overall goal.

Are you using the AWS Module? Do you want to use the module?

It seems that you want to use the module but customize all the indices etc., which may result in a number of unintended consequences.

There are a number of moving parts in a module (e.g. the AWS module), which may include: inputs, outputs, templates (mappings), index names, ingest pipelines, dashboards, visualizations, Index Lifecycle Management, etc. All of these need to be aligned to get the desired results.

This is why they are modules :slight_smile:

Once you start changing things, the proper template may (will) not be used (probably your initial issue, and an ongoing one), and then the ingest pipeline may or may not be properly applied.

That is not to say you cannot; it is just that you need to understand all the parts and their relationships.

Generally I suggest users use all the defaults to start... get that all working and then start to customize unless you already understand all the components and pieces and their relationships.

i.e.

- Clean up if need be.
- Configure the module with the AWS endpoints / creds, perhaps for a single type.
- Configure the default output.
- Run filebeat setup -e.
- Then run filebeat -e and see if it all works.

Then, if you want to change the index names etc., you will need to modify the template so it gets applied to the new index names, and then make sure the pipelines are getting called.

All that said, I am unclear what is missing and what the actual problem is. Is the data not routing to the new index names? And yes, if you are sending to new indices without setting up the correct templates and pipelines, the data will not be parsed and the fields will not be created the way you want.

In a meta sense, you would need to add your index patterns to the filebeat template, and then make sure manage_template: false from then on or it will get overwritten... this is just an example, but that is one approach.
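A sketch of that approach in filebeat.yml (the setup.template.* keys are standard Filebeat settings, but the name and pattern values here are illustrative; adapt them to your index names):

```
# Push a template wide enough to cover the custom index names
setup.template.enabled: true
setup.template.name: "filebeat-%{[agent.version]}"
setup.template.pattern: "*-%{[agent.version]}-*"
setup.template.overwrite: true
# Once the widened template is in place, set overwrite back to false
# so later restarts do not clobber any manual changes.
```

The key point is that the template's pattern must match every custom index name (cloudtrail-*, elb-*, vpc-*), otherwise those indices are created with dynamic mappings and fields like source.geo.location come out wrong.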