Filebeat: failed to parse field [user_agent.version] of type [date]

This began as a filebeat issue, but I think it's now a matter of the elasticsearch index mapping.

I'm seeing repeated messages like this in our logging. I can tell it's related to the nginx filebeat module, but I'm unsure how to go about fixing it (custom mapping, enable dynamic mapping, edit the module, something else?).

This entry seems to be the key:

{"type":"mapper_parsing_exception","reason":"failed to parse field [user_agent.version] of type [date] in document

full log entry:

Mar  6 07:22:27 rgo032 filebeat[4914]: 2023-03-06T07:22:27.103-0800#011WARN#011[elasticsearch]#011elasticsearch/client.go:414#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.March, 6, 7, 22, 26, 929462784, time.Local), Meta:{"pipeline":"filebeat-7.17.8-nginx-access-pipeline"}, Fields:{"agent":{"ephemeral_id":"2be6b958-12f3-4e22-ad23-d5a9021edd15","hostname":"rgo032","id":"3b97f44c-65e6-4d28-ab3f-a4e80ff361fb","name":"rgo032","type":"filebeat","version":"7.17.8"},"ecs":{"version":"1.12.0"},"event":{"dataset":"nginx.access","module":"nginx","timezone":"-08:00"},"fileset":{"name":"access"},"host":{"name":"rgo032"},"input":{"type":"log"},"log":{"fi
    le":{"path":"/var/log/nginx/access.log"},"offset":2643238},"message":"192.168.1.15 - stevans [06/Mar/2023:07:22:26 -0800] \"PROPFIND /remote.php/dav/files/stevans/ HTTP/1.1\" 207 331 \"-\" \"Mozilla/5.0 (Macintosh) mirall/2.10.0 (build 6519) (ownCloud, osx-21.1.0 ClientArchitecture: x86_64 OsArchitecture: x86_64)\"","service":{"type":"nginx"}}, Private:file.State{Id:"native::12845627-64769", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc000a14a90), Source:"/var/log/nginx/access.log", Offset:2643473, Timestamp:time.Date(2023, time.March, 6, 0, 0, 3, 582773674, time.Local), TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0xc4023b, Device:0xfd01}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [user_agent.version] of type [date] in document with id 'GmmEt4YBeNusebpST5KS'. Preview of field's value: '2.10.0'","caused_by":{"type":"illegal_argument_exception","reason":"failed to parse date field [2.10.0] with format [strict_date_optional_time||epoch_millis]","caused_by":{"type":"date_time_parse_exception","reason":"Failed to parse with all enclosed parsers"}}}, dropping event!

/var/log/nginx/access.log:

    192.168.1.15 - stevans [06/Mar/2023:07:21:24 -0800] "PROPFIND /remote.php/dav/files/stevans/ HTTP/1.1" 207 331 "-" "Mozilla/5.0 (Macintosh) mirall/2.10.0 (build 6519) (ownCloud, osx-21.1.0 ClientArchitecture: x86_64 OsArchitecture: x86_64)"

I was able to do some more investigation here and found a difference between our two indices:

GET filebeat-7.17.8/_mapping/field/user_agent.version
...
  "filebeat-7.17.8" : {
    "mappings" : {
      "user_agent.version" : {
        "full_name" : "user_agent.version",
        "mapping" : {
          "version" : {
            "type" : "date"
...
GET filebeat-7.17.9/_mapping/field/user_agent.version
...
  "filebeat-7.17.9" : {
    "mappings" : {
      "user_agent.version" : {
        "full_name" : "user_agent.version",
        "mapping" : {
          "version" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
...

If I understand this correctly, the filebeat-7.17.8 index is the incorrect one, as "type" : "date" doesn't seem right compared to the filebeat-7.17.9 index. Did this happen because of a few funky log entries?

Is the only way to fix this to reindex or drop the index?
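If a few funky entries plus dynamic mapping are the cause, I imagine it went something like this (hypothetical sketch; the index name and the first value are made up, since I don't know what the original offending document looked like):

```
# With no template loaded and dynamic date detection on, the first
# value seen decides the field type.
PUT test-ua/_doc/1
{ "user_agent": { "version": "2023-01-01" } }

# user_agent.version is now dynamically mapped as date, so a normal
# version string is rejected:
PUT test-ua/_doc/2
{ "user_agent": { "version": "2.10.0" } }
# -> mapper_parsing_exception: failed to parse field [user_agent.version] of type [date]
```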

Hi @mevan

Yes... probably. It turns out user_agent.version is not a "defined / mapped" field. You could add it to Filebeat's fields.yml as a keyword and then run setup to avoid this in the future.

And yes... there is no fixing / changing a mapping on an existing index.
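If the data in the badly mapped index matters, the usual workaround is to reindex: create a new index that already has the correct mapping (for example, one created from the fixed template) and copy the documents over. A rough sketch, with a made-up destination index name:

```
# Sketch only: filebeat-7.17.8-fixed must pick up the corrected
# mapping before the copy.
POST _reindex
{
  "source": { "index": "filebeat-7.17.8" },
  "dest":   { "index": "filebeat-7.17.8-fixed" }
}

# Destructive: remove the old index only after verifying the copy.
DELETE filebeat-7.17.8
```

Note that the events which failed with mapper_parsing_exception were dropped at ingest time, so reindexing cannot recover those.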

Hello and thanks.

I found the file /etc/filebeat/fields.yml, and it contains:

- key: nginx
  title: "Nginx"
  description: >
    Module for parsing the Nginx log files.
  ...
            - name: user_agent
              type: group
              fields:
                - name: device
                  type: alias
                  path: user_agent.device.name
                  migration: true
                - name: name
                  type: alias
                  path: user_agent.name
                  migration: true
                - name: os
                  type: alias
                  path: user_agent.os.full_name
                  migration: true
                - name: os_name
                  type: alias
                  path: user_agent.os.name
                  migration: true
                - name: original
                  type: alias
                  path: user_agent.original
                  migration: true

Would I add something like this after the - name: original entry:

            - name: original
              type: alias
              path: user_agent.original
              migration: true

            - name: version
              type: keyword
              path: user_agent.version
              migration: true

LGTM! Give it a try... then run setup and take a look at the template (you might need to delete the templates to force an overwrite).

I ran this and got an error. My guess is I'd need to set setup.ilm.overwrite: true in /etc/elasticsearch/elasticsearch.yml and restart for this to take effect?

# cd /etc/filebeat
# filebeat setup
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Exiting: Error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/filebeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: <nil>. Response: {"statusCode":403,"error":"Forbidden","message":"Unable to bulk_create index-pattern"}

I put setup.ilm.overwrite: true in /etc/filebeat/filebeat.yml and then ran filebeat setup -e. I'm getting a different error now...


2023-03-06T14:45:05.077-0800	INFO	template/load.go:123	Template with name "filebeat-7.17.8" loaded.
2023-03-06T14:45:05.077-0800	INFO	[index-management]	idxmgmt/std.go:297	Loaded index template.
2023-03-06T14:45:05.077-0800	INFO	[index-management.ilm]	ilm/std.go:108	Index alias is not checked as setup.ilm.check_exists is disabled
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-03-06T14:45:05.078-0800	INFO	kibana/client.go:180	Kibana url: http://localhost:5601
2023-03-06T14:45:06.460-0800	INFO	kibana/client.go:180	Kibana url: http://localhost:5601
2023-03-06T14:45:06.528-0800	ERROR	instance/beat.go:1026	Exiting: Error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/filebeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: <nil>. Response: {"statusCode":403,"error":"Forbidden","message":"Unable to bulk_create index-pattern"}
Exiting: Error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/filebeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: <nil>. Response: {"statusCode":403,"error":"Forbidden","message":"Unable to bulk_create index-pattern"}

These are the privileges given to the filebeat_writer role. I'm assuming this is the user/role invoked with setup.

And the full output from filebeat setup -e

The setup role is different... I did not realize you did not have a setup role. The writer role does not have the privileges needed for all the Kibana steps, but those are not actually needed here.

Loading the index pattern does not really matter...

2023-03-06T14:45:05.077-0800	INFO	template/load.go:123	Template with name "filebeat-7.17.8" loaded.
2023-03-06T14:45:05.077-0800	INFO	[index-management]	idxmgmt/std.go:297	Loaded index template.
2023-03-06T14:45:05.077-0800	INFO	[index-management.ilm]	ilm/std.go:108	Index alias is not checked as setup.ilm.check_exists is disabled
Index setup finished. 

Loaded index template. <<< This is what is important.

That should be enough. Did you check the loaded template to see if the new field is there?

Do I do that the same way I showed above? Like this:

GET filebeat-7.17.8/_mapping/field/user_agent.version
GET filebeat-7.17.9/_mapping/field/user_agent.version

They are still showing the same thing.

Ohh darn, I did not look at the fields closely. I do not think that will work... let me look and get back with a better answer.

Ok, user_agent.version is already a defined field / mapping, so there is no need to edit fields.yml.

What this tells me is that you have not been running filebeat setup -e, which sets up the templates, and that is most likely your issue.

To see the current mapping, look at:
curl localhost:9200/_index_template/filebeat-7.17.3

It's easier to use jq:

curl localhost:9200/_index_template/filebeat-7.17.3 | jq '[.index_templates[].index_template.template.mappings.properties.user_agent]'
[
  {
    "properties": {
      "original": {
        "ignore_above": 1024,
        "type": "keyword",
        "fields": {
          "text": {
            "type": "match_only_text"
          }
        }
      },
      "os": {
        "properties": {
          "full_name": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "kernel": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "name": {
            "ignore_above": 1024,
            "type": "keyword",
            "fields": {
              "text": {
                "type": "match_only_text"
              }
            }
          },
          "type": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "family": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "version": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "platform": {
            "ignore_above": 1024,
            "type": "keyword"
          },
          "full": {
            "ignore_above": 1024,
            "type": "keyword",
            "fields": {
              "text": {
                "type": "match_only_text"
              }
            }
          }
        }
      },
      "name": {
        "ignore_above": 1024,
        "type": "keyword"
      },
      "device": {
        "properties": {
          "name": {
            "ignore_above": 1024,
            "type": "keyword"
          }
        }
      },
      "version": {
        "ignore_above": 1024,
        "type": "keyword"
      }
    }
  }
]
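To pull out just that one field's type, you can narrow the jq filter further. A small sketch, run here against an inline trimmed copy of the response above so it works without a live cluster:

```shell
# Extract the mapped type of user_agent.version from a template response.
# The JSON below is a hand-trimmed copy of the output above, not a live query.
echo '{"index_templates":[{"index_template":{"template":{"mappings":{"properties":{"user_agent":{"properties":{"version":{"type":"keyword","ignore_above":1024}}}}}}}}]}' \
  | jq -r '.index_templates[0].index_template.template.mappings.properties.user_agent.properties.version.type'
# prints: keyword
```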

I think you are not running setup. Did you run setup before?

So now, once you see the right mapping in the template, the way to get a new index with that mapping is:

POST filebeat-7.17.9/_rollover

Then try

$ curl localhost:9200/filebeat-7.17.9/_mapping/field/user_agent.version | jq
{
  "filebeat-7.17.9-2023.03.06-000001": {
    "mappings": {
      "user_agent.version": {
        "full_name": "user_agent.version",
        "mapping": {
          "version": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      }
    }
  }
}
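For context: in this setup filebeat-7.17.9 is the write alias, and the rollover creates a fresh backing index behind it; only that newest backing index picks up the corrected mapping from the template. You can check which concrete indices sit behind the alias with:

```
GET _alias/filebeat-7.17.9
```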

Thank you for all the help here. I think we have come full circle to what I noted when I opened this issue (see above). That is, I have two indices filebeat-7.17.8 and filebeat-7.17.9. Below is what I'm seeing for the mappings. I have not applied any POST or PUT against either index. In fact, the only potentially invasive thing I've done since working on this is running filebeat setup as noted earlier in this discussion.

You asked:

did you run setup before?

...and I'm unsure. I'm looking at some notes from a previous engineer; I see this was installed via Ansible, and it's noted that "Filebeat can create the index." Until I have time to look at the Ansible code, I'll assume filebeat setup may not have occurred when this elasticsearch became operational.

Does this issue reflect the lack of loading the "recommended index template?"

As this has been very time-consuming and complex, all to remedy what may have been an improper installation, I'm considering dropping the indices and starting over. I can't find a document that guides this, but I'm assuming it could be accomplished by just deleting each index. Do you have anything to recommend for this course? Would you advise against it?


# curl -s --user elastic:foobar https://elasticsearch.vvv.io:9200/filebeat-7.17.8/_mapping/field/user_agent.version | jq
{
  "filebeat-7.17.8": {
    "mappings": {
      "user_agent.version": {
        "full_name": "user_agent.version",
        "mapping": {
          "version": {
            "type": "date"
          }
        }
      }
    }
  }
}
# curl -s --user elastic:foobar https://elasticsearch.vvv.io:9200/filebeat-7.17.9/_mapping/field/user_agent.version | jq
{
  "filebeat-7.17.9": {
    "mappings": {
      "user_agent.version": {
        "full_name": "user_agent.version",
        "mapping": {
          "version": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}

Hi @mevan

Apologies, but I do not know how you got into this configuration in the first place.

So yes, if the data is not important I would delete both of those indices and start over.

Neither of those mappings is correct.

Also, are you using the nginx module? Hopefully you are.

I would clean up those indices and then follow the quick start guide step by step.

That probably would have helped you get started in the correct way in the first place.

That quick start guide example even uses the nginx module
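If you do go the start-over route, the cleanup itself is just deleting the indices (destructive; only do this if the data is expendable):

```
# Destructive: removes all data in these indices.
DELETE filebeat-7.17.8
DELETE filebeat-7.17.9
```

Then run filebeat setup -e from the Filebeat host so the templates are in place before new data arrives.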

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.