Hello, I have installed the ELK stack on an AWS EC2 Ubuntu instance and Filebeat on another Ubuntu instance.
Below is my filebeat.yml file:
cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.
# filestream is an input for collecting log messages from files.
- type: log
  # Unique ID among all inputs, an ID is required.
  id: orocrm
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /orocrmprod.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
# Performance preset - one of "balanced", "throughput", "scale",
# "latency", or "custom".
# preset: balanced
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.31.34.24:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
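For reference, the config and the connection to the Logstash host can be sanity-checked from the Filebeat instance with Filebeat's built-in test commands. This is just a sketch, assuming the default package install and config path:
sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml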
Here is my Logstash conf file. I first tried with the output pointing to Elasticsearch, but nothing was showing up in Kibana, so I switched to
stdout {
  codec => rubydebug
}
and that is how I found out that Logstash was not receiving any logs.
cat /etc/logstash/conf.d/oroprod.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}\] %{GREEDYDATA:message}"
    }
  }
  grok {
    match => {
      "message" => "HOST: %{HOSTNAME:host} REQUEST URI: %{URIPATH:request_uri}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
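To rule out the grok part while debugging, the same filter can be tested with a stdin input instead of beats. This is only a sketch (test.conf is just an example file name, and the grok is copied from the first filter above):
input {
  stdin { }
}
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}\] %{GREEDYDATA:message}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
# run it and paste a sample log line on stdin:
# bin/logstash -f test.conf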
Here is the output. It just gets stuck there and does not print anything, not even an error.
bin/logstash -f /etc/logstash/conf.d/oroprod.conf
Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2024-06-27 12:54:03.835 [main] runner - NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[INFO ] 2024-06-27 12:54:03.846 [main] runner - Starting Logstash {"logstash.version"=>"8.14.1", "jruby.version"=>"jruby 9.4.7.0 (3.1.4) 2024-04-29 597ff08ac1 OpenJDK 64-Bit Server VM 17.0.11+9 on 17.0.11+9 +indy +jit [x86_64-linux]"}
[INFO ] 2024-06-27 12:54:03.849 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[INFO ] 2024-06-27 12:54:03.851 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[INFO ] 2024-06-27 12:54:03.851 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[WARN ] 2024-06-27 12:54:04.025 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-06-27 12:54:04.604 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2024-06-27 12:54:04.922 [Converge PipelineAction::Create<main>] Reflections - Reflections took 109 ms to scan 1 urls, producing 132 keys and 468 values
[INFO ] 2024-06-27 12:54:05.345 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[WARN ] 2024-06-27 12:54:05.368 [[main]-pipeline-manager] grok - ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[WARN ] 2024-06-27 12:54:05.475 [[main]-pipeline-manager] grok - ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[INFO ] 2024-06-27 12:54:05.513 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/oroprod.conf"], :thread=>"#<Thread:0x73af94cb /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ] 2024-06-27 12:54:05.973 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.46}
[INFO ] 2024-06-27 12:54:05.978 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2024-06-27 12:54:05.984 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2024-06-27 12:54:06.001 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2024-06-27 12:54:06.055 [[main]<beats] Server - Starting server on port: 5044
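The last INFO line shows the Beats listener is up on 0.0.0.0:5044, so one thing worth checking is whether that port is actually reachable from the Filebeat instance (172.31.34.24 is the Logstash private IP from the output section above, and the port also has to be allowed in the AWS security group). A rough sketch:
# on the Logstash instance: confirm something is listening on 5044
sudo ss -tlnp | grep 5044
# on the Filebeat instance: check the port is reachable
nc -vz 172.31.34.24 5044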
Below is my log entry:
[2024-06-26 06:56:53][][ HOST: distqa.izdemosite.com REQUEST URI: /user/login] app.CRITICAL: Error Occured while getting visit report images:Doctrine\ORM\Query\QueryException: SELECT mf.mediaFileId as id, CONCAT(mf.rootPath, mf.mediaFileName) as url FROM IZMOMediaLibraryBundle:MediaFile mf WHERE mf.mediaFileId = in /websites/distrigoboost/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/Query/QueryException.php:41 Stack trace: #0 /websites/diost/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/Query/Parser.php(448): Doctrine\ORM\Query\QueryException::dqlError('SELECT mf.media...') #1 /websites/distrigoboost/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/Query/Parser.php(2649): Doctrine\ORM\Query\Parser->syntaxError('Literal') #2 /websites/disst/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/Query/Parser.php(2837):pplication/vendor/doctrine/orm/lib/Doctrine/ORM/Query/Parser.php(2739): Doctrine\ORM\Query\Parser->ArithmeticFactor() #5 /websites/distrigoboost/crm-apt/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/AbstractQuery.php(924): Doctrine\ORM\AbstractQuery->exectrigoboost/crm-application/vendor/doctrine/orm/lib/Doctrine/ORM/AbstractQuery.php(804): Doctrine\ORM\AbstractQuery->execute(NULL, NULL) #23 /websites/distrigoboost/crm-application/src/IZMO/MediaLibraryBundle/Entity/Repository/MediaFileRepository.php(128): Doctrine\ORM\AbstractQuery->getSingleResult() #latform/src/Oro/Bundle/UserBundle/Controller/SecurityController.php(99): IZMO\MediaLibraryBundle\Provider\MediaLibrary->getUrlOnLoginPage(NULL) #26 /websites/dist/crm-application/vendor/oro/platform/src/Oro/Bundle/UserBundle/Controller/SecurityController.php(32): Oro\Bundle\UserBundle\Controller\SecurityController->getBadgesInfo() #27 [internal function]: Oro\Bundle\UserBundle\Controller\SecurityController->loginAction() #28 /websites/dist/crm-application/app/bootstrap.php.cache(3242): call_user_func_array(Array, Array) #29 /websites/diost/crm-application/app/bootstrap.php.cache(3201): Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object(Symfony\Component\HttpFoundation\Request), 1) #30 /websites/dist/crm-application/app/bootstrap.php.cache(3355): Symfony\Component\HttpKernel\HttpKernel->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #31 /websites/distrigoboost/crm-application/app/bootstrap.php.cache(2540): Symfony\Component\HttpKernel\DependencyInjection\ContainerAwareHttpKernel->handle(Object(Symfony\Component\HttpFoundation\Request), 1, true) #32 /websites/distrigoboost/crm-application/web/app.php(22): Symfony\Component\HttpKernel\Kernel->handle(Object(Symfony\Component\HttpFoundation\Request)) #33 {main} [] []
In this log entry I don't need everything to be parsed; I just need the request URI, the timestamp, and the log level. That's why I used this grok pattern:
\[%{TIMESTAMP_ISO8601:timestamp}\]\[\]\[ HOST: %{DATA:host} REQUEST URI: %{DATA:request_uri}\] %{WORD:source}\.%{LOGLEVEL:loglevel}: %{DATA:error_message} %{JAVACLASS:exception_class}: %{DATA:exception_message} in %{DATA:exception_location} Stack trace: %{GREEDYDATA:stack_trace} \[\] \[\]
It gives the correct output in the Grok Debugger, so I don't understand why nothing is displayed when I run Logstash.
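For reference, since only the timestamp, the level, and the request URI are needed, a shorter pattern along these lines should also match the sample entry above (just a sketch; the field names are examples, not anything required):
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:context}\]\[ HOST: %{HOSTNAME:request_host} REQUEST URI: %{URIPATH:request_uri}\] %{WORD:channel}\.%{LOGLEVEL:loglevel}:"
    }
  }
}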
Also, there are no Filebeat logs in syslog; it only shows the metric information.
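On an Ubuntu package install Filebeat usually runs under systemd, so its own log messages (harvester start, connection errors to Logstash, etc.) can be followed with journalctl rather than syslog. A sketch, assuming the service name from the deb package:
sudo journalctl -u filebeat -f
# optionally raise the verbosity first via the (currently commented) setting in filebeat.yml:
# logging.level: debug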