Trouble reading a log file in UCS-2 LE BOM encoding

I'm new to filebeat. I have a log file in UCS-2 LE BOM encoding. Log entries are each their own valid JSON object, one per line.

Filebeat does not seem to like the UCS-2 LE BOM encoding. I'm seeing the following in the filebeat output:

    "json": {
      "error": {
        "message": "Error decoding JSON: invalid character '\\x00' looking for beginning of object key string",
        "type": "json"
      }
    },
    "message": "{\u0000\"\u0000I\u0000D\u0000\"\u0000:\u00006\u00002\u0000,\u0000\"\u0000T\u0000r\u0000a\u0000n\u0000s\....(a lot more of this)
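For context, here is a small Python sketch of what I believe is happening (an assumption on my part, not filebeat internals): a UTF-16 LE encoded JSON line, read as if it were single-byte text, has a NUL byte after every ASCII character, which matches the `\u0000` interleaving above. The sample entry is made up for illustration.

```python
# A UTF-16 LE encoded JSON line, decoded as if it were single-byte
# text, interleaves '\x00' after every ASCII character.
line = '{"ID":62}'                       # sample entry modeled on the log above
raw = line.encode("utf-16-le")           # bytes as the logger writes them
as_single_byte = raw.decode("latin-1")   # what a single-byte decoder sees
assert "\x00" in as_single_byte          # the '\x00' filebeat complains about
assert raw.decode("utf-16-le") == line   # correct decoding recovers the JSON
```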

In the filebeat.yml config file, I've tried each of the following encoding settings. The result is the same with all of them:

    encoding: utf-16
    encoding: utf-16-bom
    encoding: utf-16le-bom

If I use Notepad++ and convert the file to UTF-8 encoding, then filebeat has no trouble reading the file with this config setting:

    encoding: utf-8

However, once the file is converted to UTF-8, the logging process that writes to it chokes.
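For anyone who wants to script the conversion instead of using Notepad++, here is a minimal sketch (the function name and paths are my own, not part of any tool): Python's `utf-16` codec consumes the BOM and detects the byte order automatically, and each line is re-written as UTF-8 without a BOM.

```python
def convert_to_utf8(src, dst):
    # "utf-16" consumes the BOM and picks the right byte order;
    # newline="" avoids rewriting the file's line endings.
    with open(src, "r", encoding="utf-16") as fin, \
         open(dst, "w", encoding="utf-8", newline="") as fout:
        for line in fin:
            fout.write(line)
```

Note this is a one-off fix; it does not help if the logger keeps writing UCS-2 LE to the same file.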

This is filebeat 7.3.0 on Windows Server 2016 reading a local log file. Any help would be much appreciated.

Update: I worked with one of our developers to have the logging process write directly to the UTF-8 encoded version of the log file, which filebeat reads without trouble.

I was never able to solve the issue with filebeat successfully reading the log file in UCS-2 LE BOM encoding.

For anyone hitting the same issue, the problem is probably that the encoding setting is not nested under the input section. The module config needs to look like this to work:

    - module: mssql
      # Fileset for native deployment
      log:
        enabled: true
        var.paths: ["Path to Server Logs"]
        input:
          encoding: utf-16le-bom