Problem caused by change of fieldtype after upgrade of filebeat

Hi!
We recently updated our filebeat shippers from 5.2 to 6.2, and one of our dev teams noticed that the Grok filters in our Logstash indexers weren't being applied to their logs.
After some troubleshooting we noticed that the field type of the accountID we send has changed from string to int. This caused our filters not to match, since their conditions match on strings to determine which filters to use.

The only changes made were those necessary for the upgrade; see part of our current config file below:

```yaml
#================================ General =====================================
filebeat.idle_timeout: 1s
filebeat.shutdown_timeout: 5s

fields_under_root: true
fields:
  accountId: ${ACCOUNTID}
  instanceId: ${INSTANCEID}
  instanceName: ${INSTANCENAME}
  region: ${REGION}
  az: ${AZ}
  environment: ${ENV}
logging.metrics.enabled: true
logging.metrics.period: 60s
```

Has there been a change made to automatically convert fields to specific field types?

Best Regards
/Viktor

Please format logs, configs and terminal input/output using the </>-Button or markdown code fences. This forum uses Markdown to format posts. Without proper formatting, it can be very hard to read your posts.

Config files using YAML are sensitive to formatting and indentation. Without proper formatting it is difficult to spot any errors in your configs.

Is accountID a number?

Have you checked the index's mapping? I wonder whether you had a custom template mapping in 5.2 but didn't adjust it when upgrading.

As these fields are set in the config, you can try to enforce a string by quoting the value like this:

```yaml
fields:
  accountID: '${ACCOUNTID}'
```

This will enforce the type to be a string while parsing the config file.

But be aware: the index mapping in ES already uses int for today's index. You must not change the mapping type of an existing index. Either drop the index and resend everything, or use the Reindex API to update your mapping. When reindexing, you will have to prepare a new index with an updated mapping (download the old one, change the field to use string, and upload the mapping for use with the new index), reindex, and replace the old index afterwards. Have a look at this document if you want to reindex.
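A minimal sketch of that flow, assuming the old index is called `logs-old`, the new one `logs-new`, the mapping type is `doc`, and the field should become `keyword` (all names here are placeholders, adjust to your setup):

```
PUT logs-new
{
  "mappings": {
    "doc": {
      "properties": {
        "accountID": { "type": "keyword" }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "logs-old" },
  "dest":   { "index": "logs-new" }
}
```

After the reindex completes you would point your alias (or your Logstash output) at `logs-new` and drop `logs-old`.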

Hi Steffen!
I've formatted the config now.
The accountID is numeric, but it is set as an env variable (string) passed to the container which runs filebeat.
Before the upgrade it was still interpreted as a string; after the update, without changing anything in the config for that field, filebeat interprets it as an integer.

And since our index is full of entries with that value as a string, and the new entries are integers, the indexing isn't working properly.

I would guess this can be solved by changing that field to be an integer, since it is a numeric value, but that would affect all the existing logs, so it is quite an effort and nothing we can do short term.

I can't find anything in your release notes that points to this type of breaking change, i.e. auto-interpreting strings as other field types.

Would you suggest the way to go is to force it to be a string until we can change our entire logging stack?

Best Regards /Viktor

I'm not aware of changes in parsing fields since 5.2, but there might have been a change in how environment variables are interpreted. Assuming ACCOUNTID=12345, I wonder what happens if you set ACCOUNTID='"12345"'. This forces the number to be passed to beats with double quotes, and beats should then treat it as a string value (the double quotes should be removed).
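For illustration, this is how the extra quoting looks at the shell level; the value that reaches filebeat's config expansion then includes literal double quotes (whether filebeat strips them as expected is the part to verify):

```shell
# Plain value: the config parser may infer an integer from it
export ACCOUNTID=12345
echo "$ACCOUNTID"        # 12345

# Embedded double quotes: the value passed on is the 7-character string "12345"
export ACCOUNTID='"12345"'
echo "$ACCOUNTID"        # "12345"
```

The single quotes protect the double quotes from the shell, so they survive into the environment variable itself.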

By setting the mapping type to text or keyword in the Elasticsearch index template, you can enforce a type. Elasticsearch only infers the actual type if a key is missing from the template mapping.
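A minimal 6.x template fragment along those lines (the template name and index pattern are placeholders; in practice you would merge this into your existing filebeat template rather than replace it):

```
PUT _template/filebeat-accountid
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "doc": {
      "properties": {
        "accountID": { "type": "keyword" }
      }
    }
  }
}
```

With this in place, new indices matching the pattern will map accountID as keyword regardless of whether the incoming JSON value is a number or a string.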

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.