Output in Kibana not as expected

I use Filebeat to parse a JSON file, but the output shows the fields as separate lines/events instead of as one document.

This is the input file:

{
  "source": "10.200.10.1:57500",
  "subscription-name": "default-1764331045",
  "timestamp": 1764328788493751000,
  "time": "2025-11-28T12:19:48.493751+01:00",
  "updates": [
    {
      "Path": "interfaces/interface[name=GigabitEthernet0/0]/state/counters/out-
octets",
      "values": {
        "interfaces/interface/state/counters/out-octets": "334577330"
      }
    }
  ]
}
{
  "source": "10.200.10.1:57500",
  "subscription-name": "default-1764331045",
  "timestamp": 1764328788497842000,
  "time": "2025-11-28T12:19:48.497842+01:00"
}
{
  "sync-response": true

and this is the filebeat.yml file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  #id: my-filestream-id
  id: multiline-json

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\
     - /mnt/nfs_share/gnmic-new8
  multiline:
    pattern: '^\s*\{'
#    pattern: '^\{'
    negate: true
    match: after
    max_lines: 500
    timeout: 5s

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# journald is an input for collecting logs from Journald
#- type: journald

  # Unique ID among all inputs, if the ID changes, all entries
  # will be re-ingested
  #id: my-journald-id

  # The position to start reading from the journal, valid options are:
  #  - head: Starts reading at the beginning of the journal.
  #  - tail: Starts reading at the end of the journal.
  #    This means that no events will be sent until a new message is written.
  #  - since: Use also the `since` option to determine when to start reading from.
  #seek: head

  # A time offset from the current time to start reading from.
  # To use since, seek option must be set to since.
  #since: -24h

  # Collect events from the service and messages about the service,
  # including coredumps.
  #units:
# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"
  username: "elastic"
  password: "IdTwHP-t+yWZD0-g4SVe"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["https://10.200.253.78:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  #preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "IdTwHP-t+yWZD0-g4SVe"
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/http_ca.crt"]

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - drop_fields:
      fields: ["agent.version","agent.type","agent.id","agent.ephemeral_id","log.file.fingerprint","event.original","log.file.inode","log.file.device_id","_id","_ignored","_score","event.original","ecs.version","host","mac","log.file.vol","log.file.idxhi","log.file.idxlo","log.offset"]
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
     fields: ["message"]
     target: ""
     overwrite_keys: true
#     ignore_missing: true
# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

and this is the output in Kibana:


Hello @DOkuwa

As per the documentation for multiline, the syntax used is wrong, and that is why it picked up individual lines as events:

As per the documentation, I used the below:

  parsers:
    - multiline:
        type: pattern
        pattern: '^{'
        negate: true
        match: after
        max_lines: 500
        timeout: 5s
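
With type: pattern, negate: true and match: after, every line that does not start with { gets appended to the preceding line that does, so each pretty-printed JSON object ends up as a single event.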

The document is now picked up between {} instead of as individual lines:

Thanks!!

Good morning,

what is the correct syntax?

Thanks

Hello @DOkuwa

I had used the below:
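
Something along these lines, with the multiline block moved under parsers in the filestream input (id and paths kept from your config, adjust as needed):

filebeat.inputs:
- type: filestream
  id: multiline-json
  enabled: true
  paths:
    - /mnt/nfs_share/gnmic-new8
  parsers:
    - multiline:
        # join every line that does not start with { onto the event opened by the { line
        type: pattern
        pattern: '^{'
        negate: true
        match: after
        max_lines: 500
        timeout: 5s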

Thanks!!

Hi @DOkuwa

Perhaps take a look at this as well

Hi,

thanks, the code worked.

Very much appreciated

There is a danger with the above code / solution: if you get JSON that is not properly indented, it will not work. That is why, in general, ndjson is typically the better approach in the long term.

Hi,

what is the code for the other option, ndjson?

Thanks

Hi @DOkuwa

I'm not clear what you're asking, but once you have ndjson you just use the ndjson parser.
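
For example, something along these lines under the filestream input, instead of the multiline parser (the option values here are just a sketch, adjust to your data):

  parsers:
    - ndjson:
        # decode each line as one JSON document into the root of the event
        target: ""
        overwrite_keys: true
        add_error_key: true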

How do you place this ndjson, and where?

Hi @DOkuwa

In the post I linked above... I gave details

Perhaps some confusion, sorry

You need the source of those logs to write the log file in ndjson format, or you need to convert the file using a tool such as jq before running Filebeat. That is up to you / your system.
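
For instance, a one-liner like this with jq would compact each pretty-printed object onto its own line (the output filename is just an example):

jq -c . /mnt/nfs_share/gnmic-new8 > /mnt/nfs_share/gnmic-new8.ndjson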

So each line would look like this.

{"source":"10.200.10.1:57500","subscription-name":"default-1764331045","timestamp":1764328788493751000,"time":"2025-11-28T12:19:48.493751+01:00","updates":[{"Path":"interfaces/interface[name=GigabitEthernet0/0]/state/counters/out-octets","values":{"interfaces/interface/state/counters/out-octets":"334577330"}}]}