Getting intermittent parsing of JSON logs in Kibana

Hi,

I'm trying to shift our current log setup from plain text to JSON so that we can accommodate MDC and get more information out of our logs.

The current setup is pretty basic, with no real changes from the standard one: Filebeat reads the logs and ships them to Logstash.
I changed the logger in our code to use the logstash LogstashEncoder on the logback RollingFileAppender.
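For context, the appender change looks roughly like this (a sketch; the rolling policy and file names are placeholders, and the encoder class comes from the logstash-logback-encoder library):

```xml
<!-- logback.xml sketch: RollingFileAppender with LogstashEncoder -->
<appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>/var/log/security_software.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- placeholder rolling pattern -->
    <fileNamePattern>/var/log/security_software.%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>7</maxHistory>
  </rollingPolicy>
  <!-- LogstashEncoder emits one JSON object per line, including MDC fields -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
  <appender-ref ref="JSON_FILE"/>
</root>
```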

I went in, added the JSON-related properties to filebeat.yml, and I'm seeing some logs (the liveness-check ones) reach Kibana, but none of the rest are present (not to mention the ERROR logs).

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/security_software.log
    - /var/log/bsmx_logs/**/*.log
  json.message_key: msg
  json.keys_under_root: true
  json.add_error_key: true
  fields_under_root: true
  multiline.pattern: '^[:20:]'
  multiline.negate: true
  multiline.match: after


filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["10.1.2.12:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

I didn't make any changes to the Logstash setup. Could that be the cause?
Any help is much appreciated!

LE: I did some additional investigating, and it seems that removing the multiline settings resolves my problems.
How could I set up different processing based on the type of log line (text/JSON)?
I dug around and saw that setting up a processor could work, but which one would work for me?

Hey @VictorS, welcome to discuss :slight_smile:

From what I understand from your updates, you don't have parsing errors anymore after removing the multiline options, is that right?

Do you have mixed text and JSON logs in the same files? If not, you can have different inputs configured with different options depending on the format.
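For example, something along these lines (just a sketch, assuming the JSON and plain-text files could be told apart by path; the path patterns and the multiline pattern are made up):

```yaml
filebeat.inputs:
# JSON files: let Filebeat parse each line as a JSON object
- type: log
  enabled: true
  paths:
    - /var/log/bsmx_logs/**/*.json.log   # hypothetical path pattern
  json.message_key: msg
  json.keys_under_root: true
  json.add_error_key: true
# Plain-text files: keep multiline so stack traces are grouped
- type: log
  enabled: true
  paths:
    - /var/log/bsmx_logs/**/*.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'  # assumption: a new event starts with a date
  multiline.negate: true
  multiline.match: after
```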

Hi @jsoriano,

thank you very much for your reply! Yes, removing the multiline options got me past the parsing errors, and your assumption is correct: I do have text and JSON logs in the same path, as I am slowly transitioning from text to JSON. How should I go about adding multiline support so I can still handle text stack traces?

For those opening this thread and wondering about my solution: I added the last processor below (while removing the json.* options from the inputs section):

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - decode_json_fields:
      when:
        regexp:
          message: "^{"
      fields: ["message"]
      target: ""
      overwrite_keys: true
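With the JSON decoding moved to a processor, the multiline options can stay on the input so plain-text stack traces are still grouped into one event. A sketch of the input side (the pattern is my assumption about what starts a new event in these mixed files):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/security_software.log
    - /var/log/bsmx_logs/**/*.log
  # Group continuation lines (e.g. stack traces) into the previous event.
  # Assumption: a new event starts with either a date or a '{' (a JSON line);
  # everything else is treated as a continuation.
  multiline.pattern: '^(\d{4}-\d{2}-\d{2}|{)'
  multiline.negate: true
  multiline.match: after
```

JSON lines then pass through multiline as single-line events and get decoded by the `decode_json_fields` processor, while text lines are only grouped, never JSON-decoded, because they fail the `^{` regexp condition.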

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.