Unable to create index patterns because Kibana is not getting the indices

- I am getting indices for most services, but not for a few of them, so I am unable to create index patterns for those. We are getting logs on the servers but cannot see them in the Kibana dashboard.
- Filebeat is up and running.
- Please help with this.

What do the Filebeat logs show?

The Filebeat logs are as follows:

May 29 09:29:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:29:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3681
May 29 09:30:07   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:30:07.442Z        INFO        log/harvester.go:324        File is inactive: /var/log/slapd/slapd.log. Closing because close_inactive of 5m0s reached.
May 29 09:30:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:30:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3682
May 29 09:30:35   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:30:35.392Z        INFO        log/harvester.go:297        Harvester started for file: /var/log/slapd/slapd.log
May 29 09:30:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:30:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3682
May 29 09:31:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:31:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3682
May 29 09:31:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:31:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3683
May 29 09:32:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:32:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3683
May 29 09:32:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:32:44.131Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3683
May 29 09:33:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:33:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3684
May 29 09:33:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:33:44.131Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3684
May 29 09:34:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:34:14.131Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3685
May 29 09:34:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:34:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3685
May 29 09:35:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:35:14.132Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3685
May 29 09:35:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:35:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3685
May 29 09:36:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:36:14.132Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3686
May 29 09:36:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:36:44.131Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3686
May 29 09:37:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:37:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3686
May 29 09:37:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:37:44.134Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3687
May 29 09:37:55   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:37:55.437Z        INFO        log/harvester.go:324        File is inactive: /var/log/slapd/slapd.log. Closing because close_inactive of 5m0s reached.
May 29 09:38:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:38:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3687
May 29 09:38:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:38:14.535Z        INFO        log/harvester.go:324        File is inactive: /opt/cdpdeployment/WSO2ISMANAGER/wso2is-5.10.0/repository/logs/wso2carbon.log. Closing because close_inactive of 5m0s reached.
May 29 09:38:17   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:38:17.521Z        INFO        log/harvester.go:324        File is inactive: /opt/cdpdeployment/WSO2ISMANAGER/wso2is-5.10.0/repository/logs/audit.log. Closing because close_inactive of 5m0s reached.
May 29 09:38:44   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:38:44.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3687
May 29 09:39:14   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:39:14.130Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3688
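As a side note, the harvester lines are the informative ones here: they show which files Filebeat is actually opening and closing. A minimal sketch of filtering them out, using one of the lines above as the sample:

```shell
# Extract the harvester event from a journal line; the sample is one of
# the Filebeat log lines pasted above.
log='May 29 09:30:35   eu-central-1.compute.internal filebeat[26972]: 2023-05-29T09:30:35.392Z        INFO        log/harvester.go:297        Harvester started for file: /var/log/slapd/slapd.log'
echo "$log" | grep -o 'Harvester started for file: .*'
# → Harvester started for file: /var/log/slapd/slapd.log
```

On a live host the same filter can be applied to `journalctl -u filebeat` output to confirm that every expected log path shows up as a started harvester.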

What does your Filebeat config look like? It seems that Filebeat is reading the files but seeing no new data to process.

I have attached the Filebeat config file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================
filebeat.config.inputs:
  enabled: true
  path: ${path.config}/configs/*.yml

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "xxxxx"
  #username: "xxxxxxxx"
  #password: "xxxxxxxxxxx"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xxx-xxxx-logstash-1xxxxxxxxxxxxxxx.xx.xx-central-1.amazonaws.com:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/filebeat/certs/xxx-ca.crt"]

  # Certificate for SSL client authentication
  ssl.certificate: "/etc/filebeat/certs/xxx-xxxxxxx-elk.crt"

  # Client Certificate Key
  ssl.key: "/etc/filebeat/certs/xxx-xxxxxx-elk.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
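One way to sanity-check a config like this is Filebeat's built-in `test` subcommands, which verify that the file parses and that the Logstash output is reachable. A sketch, assuming the default DEB/RPM config path (adjust `-c` if yours differs):

```shell
# Default config location for package installs (assumption; adjust as needed)
FB_CONFIG=/etc/filebeat/filebeat.yml

if command -v filebeat >/dev/null 2>&1; then
  # Parse and validate the configuration file
  filebeat test config -c "$FB_CONFIG" || true
  # Attempt a connection to the configured output (Logstash here),
  # including the TLS handshake with the certs listed above
  filebeat test output -c "$FB_CONFIG" || true
else
  echo "filebeat binary not found on PATH"
fi
```

If `test output` fails, the problem is between Filebeat and Logstash (network, port 5044, or the SSL certificates) rather than in the inputs.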


Is there anything in this path?

No, nothing there, only `*`, which refers to all files I think.

The thing is, I can't see where you are defining the files it's reading, so I am not sure what's happening, sorry.

Ooh OK, I misunderstood; there was some communication gap. The above path has the following files.


The configuration.yml file is as below:

- type: log
  enabled:  true
  paths:
    - /var/log/slapd/slapd.log
  fields:
    {log_type: openldap}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']

- type: log
  enabled:  true
  paths:
    - /opt/cdpdeployment/WSO2ISMANAGER/wso2is-5.10.0/repository/logs/wso2carbon.log
  fields:
    {log_type: wso2_is}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']

- type: log
  enabled:  true
  paths:
    - /opt/cdpdeployment/WSO2ISMANAGER/wso2is-5.10.0/repository/logs/audit.log
  fields:
    {log_type: wso2_is}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']

- type: log
  enabled:  true
  paths:
    - /opt/rh/httpd24/root/etc/httpd/logs/ssl_error_log
  fields:
    {log_type: reverseproxy}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']

- type: log
  enabled:  true
  paths:
    - /opt/rh/httpd24/root/etc/httpd/logs/ssl_access_log
  fields:
    {log_type: reverseproxy}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']

- type: log
  enabled:  true
  paths:
    - /opt/rh/httpd24/root/etc/httpd/logs/ssl_request_log
  fields:
    {log_type: reverseproxy}
  multiline.match: after
  multiline.negate: true
  multiline.pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
  exclude_lines: ['DEBUG','INFO']
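As a sanity check on the settings above, the `multiline.pattern` and `exclude_lines` regexes can be exercised with `grep` directly. A sketch, with invented sample lines:

```shell
# multiline.pattern "^[0-9]{4}-[0-9]{2}-[0-9]{2}" marks any line starting
# with a YYYY-MM-DD date as the start of a new event; with negate: true
# and match: after, non-matching lines are appended to the previous event.
# Sample lines below are invented for illustration.
start='2023-05-29 09:30:35 ERROR something broke'
cont='    at some.stack.Frame'
info='2023-05-29 09:30:36 INFO routine message'

echo "$start" | grep -qE '^[0-9]{4}-[0-9]{2}-[0-9]{2}' && echo "new event"
echo "$cont"  | grep -qE '^[0-9]{4}-[0-9]{2}-[0-9]{2}' || echo "continuation"

# exclude_lines drops whole events whose first line matches DEBUG or INFO
echo "$info" | grep -qE 'DEBUG|INFO' && echo "would be excluded"
```

Note that with `exclude_lines: ['DEBUG','INFO']`, a file whose recent lines are all INFO would produce no events at all, even though the harvester opens the file, which would look exactly like "Filebeat reads the file but ships nothing."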

Hi @warkolm - I have shared the data, so now you can see where we are pulling in the files it's reading. Please try to provide a solution for this issue.

Hi @warkolm - Please provide the next set of actions or a solution to the issue.

It doesn't look like there is any new data being added to those files, so Filebeat has nothing to process.

No, we have checked the above files and the latest data is available in them. The main issue is that we are not getting this data, or the indices for the WSO2 service, in the Kibana dashboard.
We are unable to find what the issue is. Please suggest actions to take to get the data and indices into the Kibana dashboard, so that we can create index patterns for the WSO2 service.

Hi @warkolm - As mentioned, I have checked the files above; they are getting the latest logs, and the logs don't contain any errors.
Please give your input on why we are not getting the indices or logs in the Kibana dashboard.

Hi @warkolm - Any update on the above issue?

I'm not sure, sorry. You could try running Filebeat in debug mode to see if it shows anything different.
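For reference, a sketch of running Filebeat in the foreground with debug logging, assuming the default config path:

```shell
# Default config location for package installs (assumption; adjust as needed)
FB_CONFIG=/etc/filebeat/filebeat.yml

if command -v filebeat >/dev/null 2>&1; then
  # -e writes logs to stderr instead of files/journal; -d enables debug
  # selectors ("*" = everything; use -d "publish" to see only the events
  # actually being shipped). Runs in the foreground; stop with Ctrl-C.
  filebeat -e -d "*" -c "$FB_CONFIG"
else
  echo "filebeat binary not found on PATH"
fi
```

Stop the service first so two Filebeat instances don't fight over the registry, then watch whether any events for the WSO2 paths are published at all.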

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.