Err: Error decoding JSON

I get this very often with Filebeat 5.1.1. I modified the logp call in json.go to log the input data. Two examples:

JSON in input file:
{"QuestionId":922254,"Username":"jtberry","Expiration":"2016-12-28T00:04:00","Source":"Distribute Has Copy Tools","Question":"Get Has Copy Tools from all machines with any Computer Name not matching ".*""}

Error in filebeat log:
2017-01-11T07:26:39-07:00 ERR Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {} 922254,"Username":"jtberry","Expiration":"2016-12-28T00:04:00","Source":"Distribute Has Copy Tools","Question":"Get Has Copy Tools from all machines with any Computer Name not matching ".*""}

JSON in input file:
{"QuestionId":922254,"Username":"jtberry","Expiration":"2016-12-28T00:04:00","Source":"Distribute Has Copy Tools","Question":"Get Has Copy Tools from all machines with any Computer Name not matching ".*""}

Error in filebeat log:
2017-01-11T07:31:44-07:00 ERR Error decoding JSON: invalid character 'i' in literal true (expecting 'r') tion":"Get Has Copy Tools from all machines with any Computer Name not matching ".*""}

In both cases, the JSON parser was handed partial lines - lines with the first few characters lopped off. It should be noted that we do not append to existing files; we replace the file with new data. We want all of the lines shipped to Logstash.
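The link between the truncation and the exact error text is easy to reproduce outside Filebeat. This is a standalone sketch, not Filebeat code, but feeding the clipped lines into Go's encoding/json decoder produces the same two messages:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// The two lines from the log above, with the leading bytes already
	// lopped off, exactly as they appear after the error text.
	truncated := []string{
		// `{"QuestionId":` is missing, so the decoder sees a bare number
		// where it expects an object.
		`922254,"Username":"jtberry","Expiration":"2016-12-28T00:04:00"}`,
		// Everything up to the middle of the "Question" key is missing, so
		// the decoder sees a token starting with 't' and assumes `true`.
		`tion":"Get Has Copy Tools from all machines"}`,
	}

	for _, line := range truncated {
		var fields map[string]interface{}
		dec := json.NewDecoder(strings.NewReader(line))
		if err := dec.Decode(&fields); err != nil {
			fmt.Println(err)
		}
	}
	// Output:
	// json: cannot unmarshal number into Go value of type map[string]interface {}
	// invalid character 'i' in literal true (expecting 'r')
}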

Any help would be appreciated. I'm afraid I don't know enough about Filebeat to know what I should include here.

Thanks,
Dave

Can you add more details about how you do this? What writes the file? And what does your Filebeat config look like?

We have an application, Tanium, that writes a JSON file out to the directory once each day. The existing file is deleted, then the new file is created; there appears to be a gap of 1 or 2 seconds between these two events. The configuration is:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - C:\Program Files\filebeat\FlashPlayerComputers_ToJSON.json
      document_type: beats

      tail_files: false
      close_removed: true
      clean_removed: true

      fields:
         property: infosec
         product: tanium_itsm
         tanium_type: inventory
      json.message_key: "Computer Name"

output.logstash:
  hosts: ["--logstash server--:51002"]
  ssl.certificate_authorities: ['C:\Program Files\filebeat\bundle.crt']
  ssl.certificate: 'C:\Program Files\filebeat\local.crt'
  ssl.key: 'C:\Program Files\filebeat\local.key'

With clean_removed it fails about one time out of 5 or 6, but without it at least 2 times out of 3.

Thank you.
Dave

If I understand you correctly, the error only happens on deletion of the file?

Without these three lines it happens every time the file is read. With them it happens 1 out of 5 times:

  tail_files: false
  close_removed: true
  clean_removed: true

Thanks,
Dave

@dallmon Sorry to ask again. By 1 out of 5 times, do you mean 1 out of 5 times when a log entry is read, or 1 out of 5 times when the file is deleted? I'm trying to figure out whether or not it is related to the deletion of the file.

1 out of 5 times when the file is read there is a JSON parsing error. When we switch to tail mode, there are no problems parsing the JSON. That may be the solution.

Thanks,
Dave

Any chance you could share the log files? How are these logs written?

The file is one JSON object per line. It is written by the Tanium system, so I don't know exactly how it is written, but it looks like it deletes the file and then creates the new one. The 2 MB data files contain data that looks like:

{"Computer Name":"LMDV-VIRAJ.","Name":"Adobe Flash Player Install Manager","Version":"24.0.0.186","Uninstallable":"Not Uninstallable","Count":"1","Age":"604800"}
{"Computer Name":"LMCM-JYDAV.","Name":"Adobe Flash Player Install Manager","Version":"24.0.0.186","Uninstallable":"Not Uninstallable","Count":"1","Age":"604800"}
{"Computer Name":"LMCP-AXSIMPSON.","Name":"Adobe Flash Player Install Manager","Version":"23.0.0.205","Uninstallable":"Not Uninstallable","Count":"1","Age":"604800"}

There are no parsing errors when the file is tailed rather than replaced, but tailing introduces a whole new set of maintenance issues I would rather avoid.

Thanks,
Dave

I have a suspicion that this is related to how the file is written. Instead of tailing, could you rotate the file? I suspect the file is being truncated in the middle of a read; if the file were replaced instead, Filebeat would probably keep the old one open until it finished reading it. Is it a new file, or is the same file reused?
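To make the rotation idea concrete, here is a sketch - the dated file name is only hypothetical, the writer would have to produce it - where each day's export gets a new name and the prospector watches a glob instead of a fixed path:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - C:\Program Files\filebeat\FlashPlayerComputers_ToJSON_*.json
      document_type: beats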

New file, same name. I believe the file is deleted and then rewritten, but only because I've watched it happen in Explorer.

Hm, if it is a new file with a new inode, then this would not support my previous theory, because Filebeat keeps the old file open until it has finished reading it. Also, as you are on Windows, I would expect Filebeat's open handle to more or less block the creation of a new file with the same name until the old one is completely removed. Can you verify that it is a new file and not a truncated one?
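One way to check - just a sketch, and any tool that reports the file ID would do equally well - is to stat the file around the daily write and compare identities with Go's os.SameFile, which on Windows compares the underlying file index (the equivalent of an inode):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// The path from the prospector config above.
	path := `C:\Program Files\filebeat\FlashPlayerComputers_ToJSON.json`

	prev, err := os.Stat(path)
	if err != nil {
		fmt.Println("initial stat:", err)
		return
	}

	for {
		time.Sleep(time.Second)

		cur, err := os.Stat(path)
		if err != nil {
			// If the file really is deleted and recreated, it will briefly
			// be missing here.
			fmt.Println(time.Now().Format(time.RFC3339), "stat failed:", err)
			continue
		}

		// os.SameFile reports whether both FileInfos describe the same
		// underlying file. false means a brand-new file appeared under the
		// same name; true with a shrinking size means the same file was
		// truncated and rewritten.
		if !os.SameFile(prev, cur) {
			fmt.Println(time.Now().Format(time.RFC3339), "new file detected")
		} else if cur.Size() < prev.Size() {
			fmt.Println(time.Now().Format(time.RFC3339), "same file, size shrank (truncated?)")
		}
		prev = cur
	}
}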

I can't verify that it is a new file. The vendor says it is a known issue and they have a fix that will allow it to work with Filebeat. It will be available Thursday. I'll see what happens then and update this thread. I'll try to find out exactly what they changed.

Interesting. Can I ask who the vendor is? Keep me posted.

The vendor is Tanium.

The solution:

Upgrade to Filebeat 5.2.1 and add this configuration:

  close_eof: true
  ignore_older: 60s
  clean_inactive: 120s

In two instances I had to raise the times to 90s and 125s respectively.
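These options go at the prospector level next to the original settings (at least that is where I put them), so the prospector section now looks roughly like this; the fields and the Logstash output are unchanged:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - C:\Program Files\filebeat\FlashPlayerComputers_ToJSON.json
      document_type: beats

      # Close the harvester as soon as it hits end of file, treat the file
      # as stale after 60s without changes, and drop its registry state
      # after 120s so the daily replacement is picked up as a fresh file.
      close_eof: true
      ignore_older: 60s
      clean_inactive: 120s

      json.message_key: "Computer Name"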

Thanks all for your help.
Dave

Glad to hear you got it working. Did Tanium also change something on their side?

Tanium changed something, but by itself their change made no difference at all in the behavior.

Thanks,
Dave

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.