Resolving mapper_parsing_exception errors

I'm using Filebeat -> pipeline -> Elasticsearch, so at the moment everything goes to one index.

At some point this field structure was inferred from my data:

                "name": {
                  "properties": {
                    "first_name": {
                      "type": "keyword",
                      "ignore_above": 1024
                    },
                    "last_name": {
                      "type": "keyword",
                      "ignore_above": 1024
                    },
                    "middle_name": {
                      "type": "keyword",
                      "ignore_above": 1024
                    }
                  }
                },

However, I'm also getting name records that are plain strings rather than objects:

{"type":"mapper_parsing_exception","reason":"object mapping for [app.name] tried to parse field [name] as object, but found a concrete value"}, dropping event!

How can I resolve this?

I tried calling

GET filebeat-7.17.3/_mapping

and, as a test, writing the output back

PUT filebeat-7.17.3/_mapping
{
    "mappings": {
      "_meta": {
        "beat": "filebeat",
        "version": "7.17.3"
      },
      "dynamic_templates": [
        {
          "labels": {
            "path_match": "labels.*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
...

but immediately got an error:

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Root mapping definition has unsupported parameters:  [mappings : {_meta={beat=filebeat, version=7.17.3},

(1) Is there an easy way to fix a single typing error like app.name?
(2) Is there a way to get the full mapping from GET, modify it, and resupply it to PUT?
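
Regarding (2): the "unsupported parameters" error happens because the update mapping API expects only what is inside the "mappings" object of the GET response, without the index-name or "mappings" wrappers. Even with the wrappers stripped, though, PUT _mapping can only add new fields or adjust a handful of updatable parameters; it cannot change an existing field like app.name from object to keyword. A minimal sketch of the shape it accepts (new_field is hypothetical):

# Body begins at what was nested inside "mappings" in the GET output;
# this can add fields but cannot retype existing ones
PUT filebeat-7.17.3/_mapping
{
  "properties": {
    "new_field": {
      "type": "keyword",
      "ignore_above": 1024
    }
  }
}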

Thanks!

You cannot modify existing mappings, only templates. If you want to change the mapping then your best option is to:

  1. update the template
  2. wait for an index rollover, where it will use the new mapping
  3. reindex the old data so it uses the right mapping

Part of your issue here, though, is that some data is coming in with name.first_name, name.last_name, etc., so you need to factor that in.
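
For steps 1 and 3, a rough sketch, assuming the default Filebeat 7.x setup where a legacy index template named filebeat-7.17.3 is installed (the template and alias names are assumptions based on that default; the destination index is hypothetical):

# 1. Fetch the template Filebeat loaded, edit its "mappings" section,
#    then PUT the whole edited body back. A PUT replaces the template
#    entirely, so keep everything you are not changing.
GET _template/filebeat-7.17.3

PUT _template/filebeat-7.17.3
{ ...full edited template body... }

# 2. Wait for rollover, or force one if the default ILM write alias
#    is in place:
POST filebeat-7.17.3/_rollover

# 3. Reindex the old data so it uses the corrected mapping (in
#    practice, point "source" at the old backing indices):
POST _reindex
{
  "source": { "index": "filebeat-7.17.3" },
  "dest": { "index": "filebeat-7.17.3-fixed" }
}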

Thanks, Mark.

It seems like there isn't a good solution for data that arrives with competing types, like

name: "Keanu Reeves"

and

name: {
  first: "Keanu",
  last: "Reeves"
}

?

We are looking to use Elasticsearch for general-purpose debug logging: application authors can create new logs with competing types, and rather than failing to log an event we'd prefer to accept multiple types.

Is there a best practice for handling this case? E.g., is there a way, through Filebeat or Elasticsearch pipelines, to automatically rewrite an arbitrary field to something like

field -> field_string
field -> field_object
field -> field_number

?

Thanks!

There's not, no. Your best option would be to flatten it during ingestion.
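
One way to do that flattening, as a sketch: an ingest pipeline with a script processor that moves string-valued names out of the way before the mapper sees them. The pipeline name and the name_string target field are hypothetical.

# Route string values of app.name to a separate field so they can
# never collide with the object mapping
PUT _ingest/pipeline/normalize-app-name
{
  "processors": [
    {
      "script": {
        "description": "If app.name is a plain string, move it to app.name_string",
        "source": "if (ctx.app instanceof Map && ctx.app.name instanceof String) { ctx.app.name_string = ctx.app.remove('name'); }"
      }
    }
  ]
}

Filebeat can be pointed at such a pipeline via the output.elasticsearch.pipeline setting, or the pipeline can be attached to the index via its default_pipeline index setting.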
