How to add logging integration to get Filebeat logs into a Kibana dashboard

We are not getting the logs because Filebeat is not configured, so please help me with the logging integration for the Kibana dashboard.

Hello @kirankumarb,

Just to confirm, have you tried configuring it and encountered any issues, or do you need a getting started guide to understand how to do it?

This guide describes how to get started quickly with log collection and visualize the log data in Kibana.

Hope this helps!

No, all configurations are done. The problem now is that we are getting logs for a few services, but we are unable to see the logs of the other services in Kibana. Can you help me with this?
We are unable to find the cause of this issue.

Did you start Filebeat? Are there any error messages?

Yes, Filebeat is running and I have checked the logs; no errors appear, but I am still unable to see these logs in Kibana.

If it's started and running without errors but you don't see data in Kibana, try changing the time filter to a larger range, check the timezone, and make sure the predefined filebeat-* index pattern is selected.
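If the pattern and time range look right, it can also help to confirm that documents are actually arriving in Elasticsearch. A quick sketch, assuming Elasticsearch is reachable on localhost:9200 (adjust the host and add credentials to match your setup):

```shell
# Count documents across all Filebeat indices
curl -s 'http://localhost:9200/filebeat-*/_count?pretty'

# Peek at the newest event to verify its @timestamp is in the range you are viewing
curl -s 'http://localhost:9200/filebeat-*/_search?size=1&sort=@timestamp:desc&pretty'
```

If the count is zero, the problem is on the ingest side rather than in Kibana.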

The main issue is that I'm unable to create the index pattern because Kibana is not seeing the indices of the specified service. While trying to create the index pattern, Kibana shows a message like: 'No matches found. You can try creating an index pattern using other available indices.'

Typically, Filebeat automatically sets up the index pattern when it loads the index template.

However, there may be cases where Filebeat loads the index template but the index pattern is not created correctly.

If you are unable to find the pattern in the Management app in Kibana, you can try the following steps:

  1. Run the setup command again, for example: ./filebeat setup.

  2. If the pattern still doesn't exist, you can create it manually.

  • Set the Time filter field name to @timestamp.
  • Set the Custom index pattern ID advanced option. For example, if your custom index name is filebeat-customname, set the custom index pattern ID to filebeat-customname-*.
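Before creating the pattern manually, it's worth checking which Filebeat indices actually exist, since the 'No matches found' message usually means no matching index has been created in Elasticsearch yet. A sketch, again assuming Elasticsearch on localhost:9200:

```shell
# List any indices whose names start with "filebeat"
curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'
```

If this returns nothing, no index pattern you create in Kibana will have data to match.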

Hope this helps!

Ok, I think that will be helpful, but:
#1 Where can I find ./filebeat setup? Is it in the filebeat.yml file or somewhere else?
#2 I am not getting the indices needed to create the index patterns.

  • Where can I find ./filebeat setup?

Well, this was part of the tutorial I shared (Step 4). This step loads the recommended index template for writing to Elasticsearch and deploys the sample dashboards for visualizing the data in Kibana.

Load the index template.

Load Kibana dashboards.
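For reference, the individual setup steps can also be run separately (these flag names apply to recent Filebeat versions; older releases used `--template` instead of `--index-management`):

```shell
# Load the recommended index template into Elasticsearch
./filebeat setup --index-management

# Load the sample dashboards into Kibana
./filebeat setup --dashboards

# Or run all setup steps with logging to stderr for easier debugging
./filebeat setup -e
```

Note that setup talks to Elasticsearch and Kibana directly, so the corresponding output and setup.kibana settings must be configured.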

I thought you had already completed all the configurations when you mentioned that, but let's go ahead and do that before trying anything else.

Btw, as mentioned, make sure you are searching for the predefined filebeat-* index pattern.

While running the filebeat setup -e command, I am getting the error below:

Yes, we are searching for the predefined filebeat-* pattern.

Could you share your filebeat.yml?

My filebeat.yml is below:

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
# You can find the full configuration reference here:

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.config.inputs:
  enabled: true
  path: ${path.config}/configs/*.yml

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the
# website.

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "xx"
  #username: "xxx"
  #password: "xxxx"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [""]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]

  # Certificate for SSL client authentication
  ssl.certificate: "/etc/filebeat/certs/elk.crt"

  # Client Certificate Key
  ssl.key: "/etc/filebeat/certs/elk.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.

In your filebeat.yml, set the host and port where Filebeat can find the Elasticsearch installation under output.elasticsearch. If you are running on Elastic Cloud, you can specify your Cloud ID instead.
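As a sketch, the relevant part of filebeat.yml would look something like this (the hosts and credentials are placeholders). Note that filebeat setup requires the Elasticsearch output to be enabled, so if you normally ship through Logstash you may need to enable it temporarily while running setup:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  #username: "elastic"
  #password: "changeme"

# Or, on Elastic Cloud:
#cloud.id: "<your-cloud-id>"
#cloud.auth: "<user>:<pass>"
```

Remember that only one output may be enabled at a time, so comment out output.logstash while setup runs against Elasticsearch.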

I had the same problem, but with inserting documents into a custom index name. I needed to create a user role that had permission to write to my index. You can set this up with the users and roles REST API, or you can do it in Kibana by going to the hamburger menu => Management and clicking Roles.

This will take you to a list of user roles. Find out which user Filebeat uses to write to Elasticsearch and make sure one of its roles has write access to your target index.

It may simply be that the user writing to Elasticsearch from Filebeat has no write permission to Filebeat's default index.
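If it does turn out to be a permissions problem, a writer role can be created with the Elasticsearch security API and then assigned to the Filebeat user. A sketch (the role name `filebeat_writer` is an example; adjust the index names and privileges to your cluster):

```shell
curl -s -u elastic -X POST 'http://localhost:9200/_security/role/filebeat_writer' \
  -H 'Content-Type: application/json' -d '
{
  "cluster": ["monitor", "read_ilm"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["create_doc", "view_index_metadata", "create_index"]
    }
  ]
}'
```

Then edit the Filebeat user (or create a dedicated one) and add this role to it.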
