Filebeat logstash module not working with json processor

Hi,

I am trying to send logs with Filebeat 7.5.1 from an EC2 Ubuntu instance to cloud Elasticsearch.

The log file is /var/log/logback/oauth-service.log, written in the following Logstash JSON format:

{"@timestamp":"2020-01-02T16:05:16.692+01:00","@version":"1","message":"Running with Spring Boot v2.2.2.RELEASE, Spring v5.2.2.RELEASE","logger_name":"com.moovimento.oauth.Application","thread_name":"main","level":"DEBUG","level_value":10000}
{"@timestamp":"2020-01-02T16:05:16.693+01:00","@version":"1","message":"The following profiles are active: dev","logger_name":"com.moovimento.oauth.Application","thread_name":"main","level":"INFO","level_value":20000}
{"@timestamp":"2020-01-02T16:05:17.400+01:00","@version":"1","message":"Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.","logger_name":"org.springframework.data.repository.config.RepositoryConfigurationDelegate","thread_name":"main","level":"INFO","level_value":20000}

My /etc/filebeat/filebeat.yml is as follows:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
  fields:
    level: debug

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

cloud.id: "<name>:<key>"
cloud.auth: "elastic:<password>"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: debug
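As a sanity check, Filebeat's built-in test subcommands can verify the config file syntax and the connection to the cloud output:

filebeat test config
filebeat test output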

My /etc/filebeat/modules.d/logstash.yml is as follows:

- module: logstash
  log:
    enabled: true
    var.paths: ["/var/log/logback/*"]
    var.format: json
  slowlog:
    enabled: false

I set up the pipeline with:

filebeat modules enable logstash
filebeat setup -e
service filebeat restart
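To verify that setup actually loaded the module's ingest pipelines into Elasticsearch, they can be listed with a wildcard query (endpoint and credentials are placeholders; the exact pipeline names vary by Filebeat version):

curl -u elastic:<password> "https://<es-endpoint>:9243/_ingest/pipeline/filebeat-*"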

However, for some reason I can't see any logs in Kibana.

If I omit var.format: json in logstash.yml, I can see the logs in Kibana, but they are not properly parsed: the module's grok processor expects the plain format, so the message field contains the whole escaped JSON log line.

Logstash is not actually running on the machine; my application logs directly to /var/log/logback/oauth-service.log using Logback.

Note: I can ingest nginx access logs and see them in Kibana with no problems using the nginx Filebeat module.

Hmm, it is strange that the format line alone would make the difference. Can you check the logs for Filebeat itself? Since you are writing "logstash-format" logs rather than running Logstash itself, my first suspicion is that the logstash module is making some assumption that fails on your data (e.g. a missing field, or a field of the wrong type) and that this is preventing ingestion. It's hard to tell what it might be from the log sample alone, though, so checking the Filebeat logs for failures is probably the next step.
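For example (assuming the default deb install on systemd, which logs both to the journal and to /var/log/filebeat/):

journalctl -u filebeat --no-pager | grep -iE "error|warn"
# or, with the default file logging:
tail -f /var/log/filebeat/filebeat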

Alternatively, you can still ingest JSON without using the logstash module: a plain log input with json settings should bypass any logstash-specific assumptions that are breaking things. See the log input configuration docs for details. Whether this works for you depends on why you needed the logstash module specifically, though.

Hi @faec, thanks for your suggestion. Your second option is exactly what I ended up doing:

- type: log
  enabled: true

  paths:
    - /var/log/logback/*.log

  fields:
    application:
      name: app-name

  fields_under_root: true

  json:
    keys_under_root: true
    overwrite_keys: true
    add_error_key: true
This is working as expected.
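To double-check end to end that the JSON keys really land at the top level of the event (keys_under_root), a quick existence query on one of the decoded fields works; endpoint and credentials are placeholders:

curl -u elastic:<password> "https://<es-endpoint>:9243/filebeat-*/_search?size=1&q=logger_name:*"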
