No logs for current timestamp shown in Discover tab

Hello,

Recently the security feature (xpack.security) was enabled, with changes only in elasticsearch.yml and kibana.yml. Because of this change, authentication is now required when accessing the Kibana GUI.

Now no logs are shown in the Discover tab of the Kibana GUI, and the Logstash logs report the following:

# tail -f /var/log/logstash/logstash-plain.log 
[2024-03-08T09:39:14,431][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:19,433][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:22,377][ERROR][logstash.outputs.elasticsearch][main][0b812257e2768225915fa474b522fb5ddbed6c2d365822c0280d8094de8ee6a7] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
[2024-03-08T09:39:22,377][ERROR][logstash.outputs.elasticsearch][main][0b812257e2768225915fa474b522fb5ddbed6c2d365822c0280d8094de8ee6a7] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
[2024-03-08T09:39:22,388][ERROR][logstash.outputs.elasticsearch][main][0b812257e2768225915fa474b522fb5ddbed6c2d365822c0280d8094de8ee6a7] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
[2024-03-08T09:39:22,388][ERROR][logstash.outputs.elasticsearch][main][0b812257e2768225915fa474b522fb5ddbed6c2d365822c0280d8094de8ee6a7] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
[2024-03-08T09:39:24,435][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:29,437][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:34,439][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:39,441][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T09:39:44,442][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}

Thanks,
Ravi

Hello,

I am also sharing the details of elasticsearch.yml and the Logstash config.

# cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#Single node Elastic stack
discovery.type: single-node
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
xpack.security.enabled: true

#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

# cat /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}

output {
    elasticsearch {
        hosts => [ "http://localhost:9200" ]

        ssl_certificate_verification => false

        user => "elastic"

        password => "FnbEUnD0Q0jj6385Z0MO"
  }
}

Thanks,
Ravi

Check the username and password in your Logstash elasticsearch output configuration; this error means that one of them is not correct.

401 means Unauthorized.
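
As a quick sanity check (just a sketch; substitute the exact password from your Logstash output configuration), you can test the credentials outside of Logstash with curl:

# curl -u elastic:'YOUR_PASSWORD' "http://localhost:9200/"

If that also returns a 401, the credentials themselves are wrong; if it returns the cluster info, then Logstash is loading a configuration with different credentials.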

Hello,

I have verified the same, and below is the Logstash config.

input {
  beats {
    port => 5044
  }
}

output {
    elasticsearch {
        hosts => [ "http://localhost:9200" ]
        ssl_certificate_verification => false
        user => "elastic"
        password => "wakfgtqwJYKgYdYsKkE8"
  }
}

Also, with the above user and password I am able to run curl, which gives me the output below.

# curl -X GET -u elastic:wakfgtqwJYKgYdYsKkE8 "localhost:9200/"
{
  "name" : "ip-192-168-135-26",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "6Vzvcn7yTduloVhwrkG1jA",
  "version" : {
    "number" : "7.17.15",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "0b8ecfb4378335f4689c4223d1f1115f16bef3ba",
    "build_date" : "2023-11-10T22:03:46.987399016Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I tried restarting Logstash, but it seems to freeze with no response at all on the console.

Also, the logs still show the following.

root@ip-192-168-x-x:~# systemctl restart logstash

[2024-03-08T17:53:45,885][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}

Just asking if I am missing anything here.

Thanks,
Ravi

Hello,

I have verified the configurations and I cannot figure out what is missing. I am still facing issues with Logstash, because I cannot see the latest logs showing up in the Discover tab of the Kibana GUI.

input {
  beats {
    port => 5044
  }
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
        user => elastic
        password => wakfgtqwJYKgYdYsKkE8
  }
}

To verify the username and password mentioned above, I ran the curl command and it works.

# curl -X GET -u elastic:wakfgtqwJYKgYdYsKkE8 "localhost:9200/"
{
  "name" : "ip-192-168-135-26",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "6Vzvcn7yTduloVhwrkG1jA",
  "version" : {
    "number" : "7.17.15",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "0b8ecfb4378335f4689c4223d1f1115f16bef3ba",
    "build_date" : "2023-11-10T22:03:46.987399016Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

And systemctl status logstash still shows the errors below.

~# systemctl status logstash
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-03-08 23:08:04 UTC; 2min 13s ago
   Main PID: 437 (java)
      Tasks: 61 (limit: 18910)
     Memory: 1003.4M
        CPU: 1min 30.825s
     CGroup: /system.slice/logstash.service
             └─437 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiating>

Mar 08 23:09:55 ip-192-168-135-26 logstash[437]: [2024-03-08T23:09:55,797][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:09:55 ip-192-168-135-26 logstash[437]: [2024-03-08T23:09:55,868][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:00 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:00,803][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:00 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:00,874][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:05 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:05,810][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:05 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:05,880][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:10 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:10,816][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:10 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:10,886][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:15 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:15,822][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 08 23:10:15 ip-192-168-135-26 logstash[437]: [2024-03-08T23:10:15,891][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>

And the logs are as below.

# tail -f /var/log/logstash/logstash-plain.log 
[2024-03-08T23:16:11,227][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
[2024-03-08T23:16:11,276][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}

I need some help here, as this is a production server.

Thanks,
Ravi

The error is still the same; 401 means not authorized, and there is not much else to it.

The user, or more likely the password, is not correct in your configuration.

Kill the Logstash process, check every configuration file, and make sure that Logstash is running the correct configuration; for example, validate in your pipelines.yml that Logstash is pointing to the configuration with the correct password.
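
For example, after stopping Logstash you can dry-run the configuration it would actually load (a sketch, assuming the default package paths used elsewhere in this thread):

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

This parses the pipeline configuration referenced by pipelines.yml and then exits, without sending anything to Elasticsearch.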

In a previous answer you shared a completely different password.

There is not much more I can help with.

Also keep in mind that there is no SLA in this forum.

Hello,

Yes, that was a mistake; I copied the password from the test instance instead of from the production text file on my local machine. But the latest one is correct and comes from the production configuration.

Thanks,
Ravi

Share your pipelines.yml

And logstash.yml

And a directory listing of

/etc/logstash/conf.d

The Logstash logs should show what pipelines are getting loaded. I suspect it is one of the samples or something, or you have more than one configuration.

Hello,

Please find the details.

cat pipelines.yml

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

cat logstash.yml

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will  automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
# pipeline.ordered: auto
#
# Sets the pipeline's default value for `ecs_compatibility`, a setting that is
# available to plugins that implement an ECS Compatibility mode for use with
# the Elastic Common Schema.
# Possible values are:
# - disabled (default)
# - v1
# - v8
# The default value will be `v8` in Logstash 8, making ECS on-by-default. To ensure a
# migrated pipeline continues to operate as it did before your upgrade, opt-OUT
# of ECS for the individual pipeline in its `pipelines.yml` definition. Setting
# it here will set the default for _all_ pipelines, including new ones.
#
# pipeline.ecs_compatibility: disabled
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60) 
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ API Settings -------------
# Define settings related to the HTTP API here.
#
# The HTTP API is enabled by default. It can be disabled, but features that rely
# on it will not work as intended.
#
# api.enabled: true
#
# By default, the HTTP API is not secured and is therefore bound to only the
# host's loopback interface, ensuring that it is not accessible to the rest of
# the network.
# When secured with SSL and Basic Auth, the API is bound to _all_ interfaces
# unless configured otherwise.
#
# api.http.host: 127.0.0.1
#
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., `9600`), or an inclusive range
# of ports (e.g., `9600-9700`).
#
# api.http.port: 9600-9700
#
# The HTTP API includes a customizable "environment" value in its response,
# which can be configured here.
#
# api.environment: "production"
#
# The HTTP API can be secured with SSL (TLS). To do so, you will need to provide
# the path to a password-protected keystore in p12 or jks format, along with credentials.
#
# api.ssl.enabled: false
# api.ssl.keystore.path: /path/to/keystore.jks
# api.ssl.keystore.password: "y0uRp4$$w0rD"
#
# The HTTP API can be configured to require authentication. Acceptable values are
#  - `none`:  no auth is required (default)
#  - `basic`: clients must authenticate with HTTP Basic auth, as configured
#             with `api.auth.basic.*` options below
# api.auth.type: none
#
# When configured with `api.auth.type` `basic`, you must provide the credentials
# that requests will be validated against. Usage of Environment or Keystore
# variable replacements is encouraged (such as the value `"${HTTP_PASS}"`, which
# resolves to the value stored in the keystore's `HTTP_PASS` variable if present
# or the same variable from the environment)
#
# api.auth.basic.username: "logstash-user"
# api.auth.basic.password: "s3cUreP4$$w0rD"
#
# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the interval in milliseconds where if no further events eligible for the DLQ
# have been created, a dead letter queue file will be written. A low value here will mean that more, smaller, queue files
# may be written, while a larger value will introduce more latency between items being "written" to the dead letter queue, and
# being available to be read by the dead_letter_queue input when items are are written infrequently.
# Default is 5000.
#
# dead_letter_queue.flush_interval: 5000

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# Flag to output log lines of each pipeline in its separate log file. Each log filename contains the pipeline.name
# Default is false
# pipeline.separate_logs: false
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: xVukNHmGgjThw6svkUpR
#xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id:xxxxxxxxxx
#xpack.monitoring.elasticsearch.cloud_auth: logstash_system:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.monitoring.elasticsearch.api_key: "id:api_key"
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: none
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.proxy: ["http://proxy:port"]
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.management.elasticsearch.cloud_id: management_cluster_id:xxxxxxxxxx
#xpack.management.elasticsearch.cloud_auth: logstash_admin_user:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.management.elasticsearch.api_key: "id:api_key"
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

# X-Pack GeoIP plugin
# https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html#plugins-filters-geoip-manage_update
#xpack.geoip.download.endpoint: "https://geoip.elastic.co/v1/database"

ls -l /etc/logstash/conf.d

total 12
-rw-r--r-- 1 root root 181 Mar  8 23:07 02-beats-input.conf
-rw-r--r-- 1 root root 436 Jan 12 18:06 30-elasticsearch-output.conf
-rw-r--r-- 1 root root 404 Dec  4 10:28 30-elasticsearch-output.conf_12_jan_bkp

Thanks,
Ravi

So your configuration in pipelines.yml means Logstash concatenates those two .conf files together and runs them ...

-rw-r--r-- 1 root root 181 Mar  8 23:07 02-beats-input.conf
-rw-r--r-- 1 root root 436 Jan 12 18:06 30-elasticsearch-output.conf

Are you sure those are configured correctly?

Above you appear to show a single file, so I don't know what you're actually trying to run, but those are the files that will be run by Logstash.

Yes, those are correctly configured. Looking at the history, we have never had a case where logs failed to show up in the Discover tab (Kibana GUI); they were always there whenever someone accessed Discover and searched for the logs.

I am also attaching the details of the other configuration file.

ls -l

total 12
-rw-r--r-- 1 root root 181 Mar  8 23:07 02-beats-input.conf
-rw-r--r-- 1 root root 436 Jan 12 18:06 30-elasticsearch-output.conf
-rw-r--r-- 1 root root 404 Dec  4 10:28 30-elasticsearch-output.conf_12_jan_bkp

cat 30-elasticsearch-output.conf

output {
  if [@metadata][pipeline] {
        elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}"
        pipeline => "%{[@metadata][pipeline]}"
        }
  } else {
        elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}"
        }
  }
}

Just FYI, this is from the previous discussions where we updated 30-elasticsearch-output.conf.

What version of Elasticsearch and Logstash are you on now?

But also, there is no authentication in that output conf file, so how do you expect it to log into Elasticsearch?

Which is the error you're getting.

In addition, those settings would be outdated with 8.x.

See this page

Notice the action => "create" setting.

But none of this matters if Logstash cannot connect to Elasticsearch.

So I am confused @Ravi_Pattar

The files in that conf.d directory are what is being run, according to the configuration you just showed.

But above you're showing us other files which have authentication; those are not being used.

The files in the conf.d directory do not have authentication, which lines up with the error you're seeing.

So I'm confused what you're trying to show us and what you're trying to run.
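
If those two files in conf.d are what is really running, then each elasticsearch block in 30-elasticsearch-output.conf also needs credentials, roughly like this (only a sketch; reuse the user and password that work with curl):

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      user => "elastic"
      password => "<password that works with curl>"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      user => "elastic"
      password => "<password that works with curl>"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}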

Hello,

Please find the details. The Logstash and Elasticsearch versions are as mentioned below. We have not upgraded this node to 8.x or higher because that is still under discussion, so the current node is on 7.17 only.


# ./logstash --version
Using bundled JDK: /usr/share/logstash/jdk
logstash 7.17.15

# ./elasticsearch --version
Version: 7.17.15, Build: default/deb/0b8ecfb4378335f4689c4223d1f1115f16bef3ba/2023-11-10T22:03:46.987399016Z, JVM: 21.0.1

The last change we made on this node was setting up the basic security feature.

The following changes were made to the configuration files on this node when we enabled security:

  1. xpack.security.enabled: true in the elasticsearch.yml

  2. And in kibana.yml

elasticsearch.username: "kibana_system"
elasticsearch.password: "8hZfur7rK139Kz0WEENx"

No other configuration files were touched apart from the above.

For Logstash, authentication was set as below in /etc/logstash/conf.d/02-beats-input.conf:

input {
  beats {
    port => 5044
  }
}

output {
    elasticsearch {
        hosts => [ "http://localhost:9200" ]
        user => "elastic"
        password => "wakfgtqwJYKgYdYsKkE8"
  }
}

Restarting the Logstash service is not working, as the service is not coming up at all, and the logs show the following.

# systemctl status logstash
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) since Sun 2024-03-10 12:15:28 UTC; 5min ago
   Main PID: 444 (java)
      Tasks: 47 (limit: 18910)
     Memory: 1.2G
        CPU: 7min 50.316s
     CGroup: /system.slice/logstash.service
             └─444 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiating>

Mar 10 12:20:14 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:14,382][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 10 12:20:14 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:14,383][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 10 12:20:19 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:19,385][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:m>
Mar 10 12:20:19 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:19,385][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:m>
Mar 10 12:20:19 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:19,385][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 10 12:20:19 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:19,386][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 10 12:20:24 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:24,388][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:m>
Mar 10 12:20:24 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:24,388][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:m>
Mar 10 12:20:24 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:24,388][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
Mar 10 12:20:24 ip-192-168-135-26 logstash[444]: [2024-03-10T12:20:24,389][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connec>
# tail -n 2 -f /var/log/logstash/logstash-plain.log
[2024-03-10T12:22:34,469][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused (Connection refused)"}
[2024-03-10T12:22:34,470][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: 

Please let me know if I have missed anything that could be causing the issues we are facing now.

Thanks,
Ravi

Hi @Ravi_Pattar

OK, thanks. Still 7.17.

As I tried to explain earlier, both files in conf.d are concatenated together in memory, so the output one is being run as well.

As an experiment, rename or temporarily remove

30-elasticsearch-output.conf

That .conf file is being run as well.

Also, each time you run it we're getting different errors: an authentication error last time, a socket error this time. It's really hard to tell what's actually being run.

Why don't we try running it in the foreground once.

Stop logstash.

Then try running in the foreground

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-beats-input.conf

You can also add the stdout output at the end in the output section.

The logs will be easy to see; you should see which pipeline is run, along with the other logs.
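
For example, the output section of 02-beats-input.conf could temporarily look like this (a sketch; keep your existing elasticsearch block as it is):

output {
    elasticsearch {
        hosts => [ "http://localhost:9200" ]
        user => "elastic"
        password => "<your password>"
    }
    stdout { codec => rubydebug }
}

Any event that arrives from Beats will then also be printed to the console.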

Hello @stephenb

I have run the procedure as per your instructions.

After issuing the command to stop Logstash, I could see the status frozen. It is not stopping completely even after waiting for more than 4-5 minutes.

[2024-03-10T15:43:40,396][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2024-03-10T15:43:43,868][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
# date && systemctl status logstash
Sun Mar 10 15:48:19 UTC 2024
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) since Sun 2024-03-10 15:43:40 UTC; 4min 38s ago
   Main PID: 442 (java)
# date && systemctl status logstash
Sun Mar 10 15:50:00 UTC 2024
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) since Sun 2024-03-10 15:43:40 UTC; 6min ago

I proceeded to back up the files and move them to another path.

# mkdir /var/ravi_logstash
root@ip-192-168-135-26:/etc/logstash/conf.d# mv 30-elasticsearch-output.conf* /var/ravi_logstash/
root@ip-192-168-135-26:/etc/logstash/conf.d# ls -lrth
total 4.0K
-rw-r--r-- 1 root root 192 Mar 10 12:14 02-beats-input.conf

And then I also made the change of adding stdout.

# cat 02-beats-input.conf 
input {
  beats {
    port => 5044
  }
}

output {
    elasticsearch {
        hosts => [ "http://localhost:9200" ]
        user => "elastic"
        password => "wakfgtqwJYKgYdYsKkE8"
  }
  stdout {}
}

After that I ran the procedure and could see the following.

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-beats-input.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-03-10T15:51:10,548][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-03-10T15:51:10,558][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.15", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-aarch64]"}
[2024-03-10T15:51:10,561][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2024-03-10T15:51:10,878][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-10T15:51:10,886][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
[2024-03-10T15:51:10,889][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
        at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:94) ~[?:?]

Note: I tried killing Logstash forcefully and then tried to re-run the procedure.

root        2488  0.0  0.0  10172  3200 pts/1    S+   15:43   0:00 systemctl stop logstash
root        2629  0.0  0.0   5632  1536 pts/3    S+   15:49   0:00 tail -f /var/log/logstash/logstash-plain.log
root        2835  0.0  0.0   6416  1920 pts/5    S+   16:03   0:00 grep --color=auto logstash
root@ip-192-168-135-26:/etc/logstash# kill -9 2488

Still, I don't see Logstash stopping completely.

root@ip-192-168-135-26:/var/ravi_logstash# systemctl status logstash
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) since Sun 2024-03-10 15:43:40 UTC; 22min ago
# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-beats-input.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-03-10T16:09:46,343][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-03-10T16:09:46,353][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.15", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-aarch64]"}
[2024-03-10T16:09:46,356][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2024-03-10T16:09:46,646][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-10T16:09:46,654][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
[2024-03-10T16:09:46,656][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
        at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:94) ~[?:?]

Thanks,
Ravi

That did not kill Logstash; that command killed systemctl while it was trying to stop Logstash, so now Logstash may be in an inconsistent state.

perhaps try

ps -efl | grep logstash

and try to kill the actual logstash process and try this again

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-beats-input.conf

Make sure no Logstash is running; stop the tail process as well.
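
For example (a sketch; the PID is whatever the Logstash java process shows):

ps -efl | grep logstash | grep java
kill <PID of the java process>

Only fall back to kill -9 if it refuses to exit.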

Hello,

Yes, you're right.

This time I made sure that I killed the right process for Logstash, and I re-ran the procedure.

[2024-03-10T17:31:39,739][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-03-10T17:31:39,750][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.15", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-aarch64]"}
[2024-03-10T17:31:39,752][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2024-03-10T17:31:40,078][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-10T17:31:41,285][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-03-10T17:31:42,553][INFO ][org.reflections.Reflections] Reflections took 77 ms to scan 1 urls, producing 119 keys and 419 values
[2024-03-10T17:31:43,873][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2024-03-10T17:31:44,217][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2024-03-10T17:31:44,592][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2024-03-10T17:31:44,610][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.15) {:es_version=>7}
[2024-03-10T17:31:44,612][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2024-03-10T17:31:44,692][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-03-10T17:31:44,696][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-03-10T17:31:44,756][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2024-03-10T17:31:44,795][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf"], :thread=>"#<Thread:0x6f316641 run>"}
[2024-03-10T17:31:45,814][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.02}
[2024-03-10T17:31:45,841][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-03-10T17:31:45,858][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-03-10T17:31:45,914][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-03-10T17:31:46,004][INFO ][org.logstash.beats.Server][main][e1517072bc1c170a9255668f7f2c7392f9c03243d0e066f00f4fdd0a0e226a6f] Starting server on port: 5044

But I could still see that the Logstash service is not coming up.

Those logs absolutely show that logstash is up and running and listening on port 5044.

That last log shows it's sitting and waiting and listening on port 5044.

It is running in the foreground, not as a service... Do you know the difference between those two?

But in short logstash is absolutely running in the foreground, not as a service.

That is exactly what those logs show.

So leave logstash running and now run your beats.

With the stdout any logs that come in will be printed to the console.

So if you see nothing in the console, it means Beats is not sending anything.

Also you can see if there's any error.
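
Optionally, the Filebeat side can be checked as well (a sketch, assuming the standard package install):

# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml

The second command confirms that Filebeat can reach the Logstash endpoint on port 5044.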

Hello,

I have re-run the procedure by running Filebeat. Please find the details.

From the console where Filebeat was run:

# ./filebeat -e -c /etc/filebeat/filebeat.yml  
2024-03-11T11:59:18.346Z        INFO    instance/beat.go:698    Home path: [/usr/share/filebeat/bin] Config path: [/usr/share/filebeat/bin] Data path: [/usr/share/filebeat/bin/data] Logs path: [/usr/share/filebeat/bin/logs] Hostfs Path: [/]
2024-03-11T11:59:18.347Z        INFO    instance/beat.go:706    Beat ID: 707f41b3-e26d-43ec-9ed2-4b7e0e5cde11
2024-03-11T11:59:18.350Z        INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2024-03-11T11:59:18.350Z        INFO    [beat]  instance/beat.go:1052   Beat info       {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat/bin", "data": "/usr/share/filebeat/bin/data", "home": "/usr/share/filebeat/bin", "logs": "/usr/share/filebeat/bin/logs"}, "type": "filebeat", "uuid": "707f41b3-e26d-43ec-9ed2-4b7e0e5cde11"}}}
2024-03-11T11:59:18.350Z        INFO    [beat]  instance/beat.go:1061   Build info      {"system_info": {"build": {"commit": "b474d2803ed2961f23f614d7213d9099fb0b4354", "libbeat": "7.17.15", "time": "2023-11-08T19:08:34.000Z", "version": "7.17.15"}}}
2024-03-11T11:59:18.350Z        INFO    [beat]  instance/beat.go:1064   Go runtime info {"system_info": {"go": {"os":"linux","arch":"arm64","max_procs":4,"version":"go1.20.10"}}}
2024-03-11T11:59:18.350Z        INFO    [beat]  instance/beat.go:1070   Host info       {"system_info": {"host": {"architecture":"aarch64","boot_time":"2024-03-10T12:26:23Z","containerized":false,"name":"ip-192-168-135-26","ip":["127.0.0.1","::1","192.168.135.26","fe80::82:8eff:febe:de2f"],"kernel_version":"6.5.0-1014-aws","mac":["02:82:8e:be:de:2f"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"22.04.3 LTS (Jammy Jellyfish)","major":22,"minor":4,"patch":3,"codename":"jammy"},"timezone":"UTC","timezone_offset_sec":0,"id":"ec277ce87f3e8f679076ec2660e95eb9"}}}
2024-03-11T11:59:18.350Z        INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:105    add_cloud_metadata: hosting provider type detected as aws, metadata={"cloud":{"account":{"id":"824252814486"},"availability_zone":"eu-central-1a","image":{"id":"ami-01b38e1e1208d64fe"},"instance":{"id":"i-066090f6436249598"},"machine":{"type":"t4g.xlarge"},"provider":"aws","region":"eu-central-1","service":{"name":"EC2"}}}
2024-03-11T11:59:18.350Z        INFO    [beat]  instance/beat.go:1099   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/usr/share/filebeat/bin", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 10120, "ppid": 9958, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2024-03-11T11:59:18.110Z"}}}
2024-03-11T11:59:18.351Z        INFO    instance/beat.go:292    Setup Beat: filebeat; Version: 7.17.15
2024-03-11T11:59:18.352Z        INFO    [publisher]     pipeline/module.go:113  Beat name: ip-192-168-135-26
2024-03-11T11:59:18.353Z        ERROR   [modules]       fileset/modules.go:142  Not loading modules. Module directory not found: /usr/share/filebeat/bin/module
2024-03-11T11:59:18.353Z        WARN    beater/filebeat.go:202  Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.
2024-03-11T11:59:18.354Z        INFO    [monitoring]    log/log.go:142  Starting metrics logging every 30s
2024-03-11T11:59:18.354Z        INFO    instance/beat.go:457    filebeat start running.
2024-03-11T11:59:18.359Z        INFO    memlog/store.go:119     Loading data file of '/usr/share/filebeat/bin/data/registry/filebeat' succeeded. Active transaction id=0
2024-03-11T11:59:18.359Z        INFO    memlog/store.go:124     Finished loading transaction log file for '/usr/share/filebeat/bin/data/registry/filebeat'. Active transaction id=0
2024-03-11T11:59:18.360Z        WARN    beater/filebeat.go:411  Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.
2024-03-11T11:59:18.360Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 0
2024-03-11T11:59:18.361Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2024-03-11T11:59:18.361Z        INFO    [crawler]       beater/crawler.go:117   starting input, keys present on the config: [filebeat.inputs.0.enabled filebeat.inputs.0.id filebeat.inputs.0.paths.0 filebeat.inputs.0.type]
2024-03-11T11:59:18.362Z        INFO    [crawler]       beater/crawler.go:121   input disabled, skipping it
2024-03-11T11:59:18.362Z        INFO    [crawler]       beater/crawler.go:106   Loading and starting Inputs completed. Enabled inputs: 0
2024-03-11T11:59:18.362Z        INFO    cfgfile/reload.go:164   Config reloader started
2024-03-11T11:59:18.362Z        INFO    cfgfile/reload.go:224   Loading of config files completed.
2024-03-11T11:59:48.358Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"id":"session-42.scope"},"memory":{"id":"session-42.scope","mem":{"usage":{"bytes":38076416}}}},"cpu":{"system":{"ticks":30,"time":{"ms":36}},"total":{"ticks":120,"time":{"ms":134},"value":120},"user":{"ticks":90,"time":{"ms":98}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":10},"info":{"ephemeral_id":"695cbf63-28ea-4ac8-9a07-3c4797e2eca1","uptime":{"ms":30113},"version":"7.17.15"},"memstats":{"gc_next":19447304,"memory_alloc":9577200,"memory_sys":36549896,"memory_total":55019832,"rss":80650240},"runtime":{"goroutines":27}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":1},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":4},"load":{"1":1.63,"15":0.41,"5":0.48,"norm":{"1":0.4075,"15":0.1025,"5":0.12}}}}}}

From the console where Logstash was run as per the instructions above, I could see the following.

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/02-beats-input.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-03-11T11:59:16,814][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-03-11T11:59:16,824][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.15", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-aarch64]"}
[2024-03-11T11:59:16,827][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2024-03-11T11:59:17,146][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-11T11:59:18,424][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-03-11T11:59:19,677][INFO ][org.reflections.Reflections] Reflections took 80 ms to scan 1 urls, producing 119 keys and 419 values 
[2024-03-11T11:59:20,972][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2024-03-11T11:59:21,312][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2024-03-11T11:59:21,687][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2024-03-11T11:59:21,707][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.15) {:es_version=>7}
[2024-03-11T11:59:21,710][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2024-03-11T11:59:21,788][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-03-11T11:59:21,789][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-03-11T11:59:21,842][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2024-03-11T11:59:21,875][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf"], :thread=>"#<Thread:0x1c83d84a run>"}
[2024-03-11T11:59:22,689][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.81}
[2024-03-11T11:59:22,726][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-03-11T11:59:22,765][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-03-11T11:59:22,848][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-03-11T11:59:22,895][INFO ][org.logstash.beats.Server][main][e1517072bc1c170a9255668f7f2c7392f9c03243d0e066f00f4fdd0a0e226a6f] Starting server on port: 5044
{
       "message" => "\t",
    "@timestamp" => 2024-03-11T11:44:10.695Z,
      "@version" => "1",
         "input" => {
        "type" => "log"
    },
          "tags" => [
        [0] "cy-enswitch",
        [1] "cy-enswitch-asterisk",
        [2] "a37.cy2",
        [3] "beats_input_codec_plain_applied"
    ],

Thanks,
Ravi

So that looks like Filebeat to Logstash to Elasticsearch is working, with one exception.

But that shows there was nothing to read in that file except a tab...

Are there any more new lines to read in that file?

You should be able to find that entry in Discover, assuming the time range is correct.

It looks like it's working, but you need more entries in the file that's being monitored by Filebeat, or there's some issue on the Filebeat side.
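
Note that the Filebeat log above also shows "input disabled, skipping it" and "Enabled inputs: 0", so it is worth confirming that the input in filebeat.yml is actually enabled, roughly like this (a sketch; the path is only an example):

filebeat.inputs:
- type: log
  id: my-log-input
  enabled: true
  paths:
    - /var/log/some-app/*.log

With the input enabled, appending a new line to a monitored file should make an event show up on the Logstash console and then in Discover.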