I executed the command below:
filebeat setup -e
and it exited with this error:
{"log.level":"warn","@timestamp":"2023-10-13T15:33:15.664+0530","log.origin":{"file.name":"beater/filebeat.go","file.line":193},"message":"Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-10-13T15:33:15.664+0530","log.origin":{"file.name":"instance/beat.go","file.line":1278},"message":"Exiting: index management requested but the Elasticsearch output is not configured/enabled","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: index management requested but the Elasticsearch output is not configured/enabled
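From what I understand, filebeat setup loads the index template, ILM policy, and dashboards by connecting to Elasticsearch directly, so it refuses to run while output.logstash is the only enabled output. The Filebeat docs suggest overriding the outputs just for the setup run; a sketch using the host and credentials from my own stack (verification disabled to match the ssl_certificate_verification => false in my Logstash output) would be:

filebeat setup -e \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["https://localhost:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=MtJwS_qZJDm1936dTzD_ \
  -E output.elasticsearch.ssl.verification_mode=none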
My filebeat.yml:
##################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /home/pramod_amrutkar@acds.net.in/SCC/SCC/scc-docker/scc-java-logs/scc_java_logs.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "MtJwS_qZJDm1936dTzD_"
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  #loadbalance: true
  #index: filebeat

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:
# ================================= Migration ==================================
# This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: true
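That is the whole file; stripped of comments, the active settings boil down to:

filebeat.inputs:
- type: log
  id: my-filestream-id
  enabled: true
  paths:
    - /home/pramod_amrutkar@acds.net.in/SCC/SCC/scc-docker/scc-java-logs/scc_java_logs.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~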
My Logstash pipeline config (/etc/logstash/conf.d/scc_java_logs.conf):
input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }

  elasticsearch {
    hosts => ["https://localhost:9200"]
    #manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    action => "create"
    #pipeline => "scc_java_logs"
    user => "elastic"
    password => "MtJwS_qZJDm1936dTzD_"
    ssl_certificate_verification => false
    data_stream => false
  }
}
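To check whether events from this pipeline actually land in Elasticsearch, I can list the Filebeat indices with the same credentials as the output block (-k because certificate verification is disabled):

curl -k -u elastic:MtJwS_qZJDm1936dTzD_ 'https://localhost:9200/_cat/indices/filebeat-*?v'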
Logs from Logstash:
tail -f /var/log/logstash/logstash-plain.log
[2023-10-13T15:13:21,225][WARN ][logstash.outputs.elasticsearch][scc_java_logs] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@localhost:9200/"}
[2023-10-13T15:13:21,230][INFO ][logstash.outputs.elasticsearch][scc_java_logs] Elasticsearch version determined (8.10.3) {:es_version=>8}
[2023-10-13T15:13:21,230][WARN ][logstash.outputs.elasticsearch][scc_java_logs] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2023-10-13T15:13:21,245][INFO ][logstash.outputs.elasticsearch][scc_java_logs] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2023-10-13T15:13:21,251][INFO ][logstash.javapipeline ][scc_java_logs] Starting pipeline {:pipeline_id=>"scc_java_logs", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/scc_java_logs.conf"], :thread=>"#<Thread:0x1ae5d7bd /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-10-13T15:13:21,797][INFO ][logstash.javapipeline ][scc_java_logs] Pipeline Java execution initialization time {"seconds"=>0.54}
[2023-10-13T15:13:21,801][INFO ][logstash.inputs.beats ][scc_java_logs] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-10-13T15:13:21,807][INFO ][logstash.javapipeline ][scc_java_logs] Pipeline started {"pipeline.id"=>"scc_java_logs"}
[2023-10-13T15:13:21,819][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:scc_java_logs], :non_running_pipelines=>[]}
[2023-10-13T15:13:21,879][INFO ][org.logstash.beats.Server][scc_java_logs][d512deeeb43f14826067e0130e100ed27e52fb6104306f14ab1b1486edfe3bbe] Starting server on port: 5044
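So the pipeline starts cleanly and the Beats listener is up on port 5044. For what it's worth, Filebeat also ships a built-in connectivity check against whichever output is enabled, which should confirm the Logstash side is reachable:

filebeat test output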
Can someone tell me what I'm doing wrong here?