Good morning,
I have installed Filebeat, Logstash, Kibana, and Elasticsearch on the same server.
I want Filebeat to ship a particular JSON file to Logstash and then have the data appear in Kibana.
I have configured filebeat.yml and the Logstash pipeline, but the index has still not been created and nothing shows up in Kibana.
filebeat.yml:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.
# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - /mnt/nfs_share/gnmic-new5
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
# journald is an input for collecting logs from Journald
#- type: journald
# Unique ID among all inputs, if the ID changes, all entries
# will be re-ingested
#id: my-journald-id
# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use also the `since` option to determine when to start reading from.
#seek: head
# A time offset from the current time to start reading from.
# To use since, seek option must be set to since.
#since: -24h
# Collect events from the service and messages about the service,
# including coredumps.
#units:
#- docker.service
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
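One thing I notice on re-reading my filebeat.yml: the filestream input still has "enabled: false", and the active output is output.elasticsearch, while "#output.logstash:" is commented out (leaving its hosts: ["localhost:5044"] line orphaned). Since Filebeat only allows a single output to be enabled at a time, I assume the relevant sections should instead look roughly like this sketch (everything else left at defaults):

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  # Must be true, otherwise the input never starts
  enabled: true
  paths:
    - /mnt/nfs_share/gnmic-new5

# Disable the Elasticsearch output completely...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and enable the Logstash output instead
output.logstash:
  hosts: ["localhost:5044"]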
Logstash pipeline (/etc/logstash/conf.d/logstash_demo.conf):
input {
  beats {
    port => 5044
    codec => json
    tags => ["telegraf_data"]
  }
}

filter {
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "telegraf-logstash"
    user => "elastic"
    password => "xxxxxxxxxxxxx"
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/elasticsearch/certs/http_ca.crt"]
  }
}
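As far as I understand, Filebeat delivers each event with the raw line in the "message" field, so I am not sure "codec => json" on the beats input is the right place to parse; would it be safer to drop the codec and parse in the filter block instead? A sketch of what I mean:

filter {
  # Parse the JSON carried in the message field produced by Filebeat
  json {
    source => "message"
    tag_on_failure => ["_jsonparsefailure"]
  }
}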
Input file sample:
more gnmic-new5
{
  "source": "10.200.226.62:57500",
  "subscription-name": "default-1763632897",
  "timestamp": 1763630618607619000,
  "time": "2025-11-20T10:23:38.607619+01:00",
  "updates": [
    {
      "Path": "interfaces/interface[name=GigabitEthernet0/0]/state/counters/out-octets",
      "values": {
        "interfaces/interface/state/counters/out-octets": "312714683"
      }
    }
  ]
}
{
  "source": "10.200.10.1:57500",
  "subscription-name": "default-1763632897",
  "timestamp": 1763630618611623000,
  "time": "2025-11-20T10:23:38.611623+01:00"
}
{
  "sync-response": true
}
{
  "source": "10.200.10.1:57500",
  "subscription-name": "default-1763632897",
  "timestamp": 1763630628590590000,
  "time": "2025-11-20T10:23:48.59059+01:00",
  "updates": [
    {
      "Path": "interfaces/interface[name=GigabitEthernet0/0]/state/counters/out-octets",
      "values": {
        "interfaces/interface/state/counters/out-octets": "312716752"
      }
    }
  ]
}
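One detail that worries me: each record in this file is pretty-printed across several lines, while (as far as I know) the filestream input emits one event per line, so Logstash would only ever receive JSON fragments. If I keep this file format, I assume the input needs a multiline parser, something like this sketch under the filestream input (assuming every record starts with "{" in the first column):

  parsers:
    - multiline:
        type: pattern
        # Start a new event on every line that begins with "{";
        # all other lines are appended to the previous event.
        pattern: '^\{'
        negate: true
        match: after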
I am also sending you the logs from Logstash:
tail -f logstash-plain.log
[2025-11-19T18:05:34,206][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (9.1.3) {:es_version=>9}
[2025-11-19T18:05:34,223][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"telegraf-logstash"}
[2025-11-19T18:05:34,224][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2025-11-19T18:05:34,248][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>9, :ecs_compatibility=>:v8}
[2025-11-19T18:05:34,253][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash_demo.conf"], :thread=>"#<Thread:0x4509af25 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-11-19T18:05:35,067][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.81}
[2025-11-19T18:05:35,078][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2025-11-19T18:05:35,093][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-11-19T18:05:35,105][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2025-11-19T18:05:35,204][INFO ][org.logstash.beats.Server][main][d528804e0c151115e76e11a79671915e7cdc616fb287f2cee7e1c0fab0085b0b] Starting server on port: 5044
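For completeness, these are the checks I can run on my side to confirm where the pipeline stops (assuming default paths; -k because the cluster uses the self-signed http_ca.crt):

# Verify that filebeat.yml parses and that the configured output is reachable
filebeat test config
filebeat test output

# Check whether the index was ever created
curl -k -u elastic "https://localhost:9200/_cat/indices/telegraf-*?v"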