Logstash not able to create index

Hi All,
Logstash is not creating the index, so I cannot create the index pattern in Kibana. Here is my logstash.conf file (/etc/logstash/conf.d/logstash.conf):

<

input {
  beats {
    port => 5044
    type => syslog
    ssl_certificate => "/etc/ssl/logstash_frwrd.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => ["waf"]
    #index => ["waf-(date '+%Y-%m-%d_%H-%M-%S')"]
  }
  stdout { codec => rubydebug }
}

/>
Looking forward to your reply.

Thanks
Sumit

What does stdout (the rubydebug output) look like?

What does the logstash log look like?

Thanks @Badger for your reply.
My logstash.yml file
<

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
  node.name: centos-0217.novalocal
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#

# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will  automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
pipeline.ordered: auto
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60)
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
  http.host: "172.31.192.3"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
# server.port: 5044
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# Flag to output log lines of each pipeline in its separate log file. Each log filename contains the pipeline.name
# Default is false
# pipeline.separate_logs: false
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id:xxxxxxxxxx
#xpack.monitoring.elasticsearch.cloud_auth: logstash_system:password
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.management.elasticsearch.cloud_id: management_cluster_id:xxxxxxxxxx
#xpack.management.elasticsearch.cloud_auth: logstash_admin_user:password
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

/>

My logstash.conf file
<

input {
  beats {
    port => 5044
    type => syslog
    ssl_certificate => "/etc/ssl/logstash_frwrd.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => ["waf"]
    #index => ["waf-(date '+%Y-%m-%d_%H-%M-%S')"]
  }
  #stdout { codec => rubydebug }
}

/>

Looking forward to your reply.

You did not include either thing that I asked you to post -- rubydebug output and the logstash log.

Thanks @Badger for your reply. Here is the log as requested:
<

used to determine the document _type {:es_version=>7}
[2020-07-02T08:48:54,581][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2020-07-02T08:48:54,590][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-07-02T08:48:54,641][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-07-02T08:48:55,083][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-07-02T08:48:55,089][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>3000, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog-filter.conf", "/etc/logstash/conf.d/30-elasticsearch-output.conf", "/etc/logstash/conf.d/logstash-simple.conf", "/etc/logstash/conf.d/logstash-test.conf", "/etc/logstash/conf.d/logstash.conf", "/etc/logstash/conf.d/logstash_newbk.conf", "/etc/logstash/conf.d/logstash_orig.conf"], :thread=>"#<Thread:0x3c62fe6e run>"}
[2020-07-02T08:48:58,590][ERROR][logstash.javapipeline    ][main] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<LogStash::ConfigurationError: Certificate or Certificate Key not configured>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.9-java/lib/logstash/inputs/beats.rb:139:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:216:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:215:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:327:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:287:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:170:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125:in `block in start'"], "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog-filter.conf", "/etc/logstash/conf.d/30-elasticsearch-output.conf", "/etc/logstash/conf.d/logstash-simple.conf", "/etc/logstash/conf.d/logstash-test.conf", "/etc/logstash/conf.d/logstash.conf", "/etc/logstash/conf.d/logstash_newbk.conf", "/etc/logstash/conf.d/logstash_orig.conf"], :thread=>"#<Thread:0x3c62fe6e run>"}
[2020-07-02T08:48:58,630][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-07-02T08:48:58,974][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-07-02T08:49:03,878][INFO ][logstash.runner          ] Logstash shut down.

/>

And this is the output after running ./logstash -f /etc/logstash/conf.d/logstash.conf:

<

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-07-07 04:06:42.991 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-07-07 04:06:43.001 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.7.1"}
[INFO ] 2020-07-07 04:06:45.135 [Converge PipelineAction::Create<main>] Reflections - Reflections took 45 ms to scan 1 urls, producing 21 keys and 41 values
[INFO ] 2020-07-07 04:06:47.366 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2020-07-07 04:06:47.545 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[INFO ] 2020-07-07 04:06:47.578 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[WARN ] 2020-07-07 04:06:47.670 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2020-07-07 04:06:47.677 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>3000, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x3ea0802e run>"}
[INFO ] 2020-07-07 04:06:49.306 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-07-07 04:06:49.335 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:5044", :ssl_enable=>"false"}
[INFO ] 2020-07-07 04:06:49.401 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-07-07 04:06:49.729 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[WARN ] 2020-07-07 04:06:52.599 [Ruby-0-Thread-5: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[WARN ] 2020-07-07 04:06:57.612 [Ruby-0-Thread-5: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[WARN ] 2020-07-07 04:07:02.619 [Ruby-0-Thread-5: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

/>

And the output after running journalctl --unit logstash:

<

-- Logs begin at Thu 2020-07-02 15:11:10 UTC, end at Tue 2020-07-07 09:21:34 UTC. --
Jul 06 17:59:17 centos-0217.novalocal logstash[4179]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:18 centos-0217.novalocal logstash[4179]: [2020-07-06T17:59:18,344][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:18 centos-0217.novalocal logstash[4179]: [2020-07-06T17:59:18,460][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 17:59:34 centos-0217.novalocal logstash[4246]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:35 centos-0217.novalocal logstash[4246]: [2020-07-06T17:59:35,155][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:35 centos-0217.novalocal logstash[4246]: [2020-07-06T17:59:35,275][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 17:59:51 centos-0217.novalocal logstash[4310]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:52 centos-0217.novalocal logstash[4310]: [2020-07-06T17:59:52,210][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:52 centos-0217.novalocal logstash[4310]: [2020-07-06T17:59:52,333][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 18:00:08 centos-0217.novalocal logstash[4373]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 18:00:09 centos-0217.novalocal logstash[4373]: [2020-07-06T18:00:09,427][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 18:00:09 centos-0217.novalocal logstash[4373]: [2020-07-06T18:00:09,593][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Started logstash.

/>

Are you pointing at the right Elasticsearch address?

elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

From what I can see, you are pointing at localhost right now. What is the address of your Elasticsearch?

Also try to replace

index => ["waf"]

with:

index => "waf"
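To spell that out, the elasticsearch output's index option takes a plain string, and that string may embed sprintf date references if date-stamped indices are wanted (a sketch based on the output block posted above; it replaces the shell-style date command that was commented out there):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "waf"
    # Date-stamped indices use event-time sprintf references,
    # not a shell date command:
    # index => "waf-%{+YYYY.MM.dd}"
  }
}
```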

Thanks @dorinand for your reply. As of now it is running on localhost, but I want to change it and provide the host so that it runs on http://host_ip:9200.
But whenever I try to change the elasticsearch.yml file I get an issue, and I don't know why it is happening.
Can you please help me out with this?

What is the result of: curl localhost:9200 ?
Did you fix the index name from ["waf"] to "waf"?
Why are you trying to change elasticsearch.yml? This is about the Logstash configuration. Please do not make a lot of changes at once; it makes the solution hard to find.
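For completeness, if the eventual goal really is to serve Elasticsearch on http://host_ip:9200, that change belongs in elasticsearch.yml rather than in any Logstash file. A minimal sketch, assuming Elasticsearch 7.x and the 172.31.192.3 address used elsewhere in this thread; note that binding to a non-loopback address triggers the production bootstrap checks, so a discovery setting is also required:

```
# elasticsearch.yml (a sketch, not the poster's actual file)
network.host: 172.31.192.3    # bind to the host address instead of localhost
http.port: 9200
# For a one-node setup, this satisfies the bootstrap checks:
discovery.type: single-node
```

The Logstash output would then point at hosts => ["http://172.31.192.3:9200"].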

Thanks @dorinand for your reply. As requested, here is the output of curl localhost:9200:
<

{
  "name" : "centos-0217.novalocal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "uX7sY67DS1SGpZYlczMLqg",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

/>

Filebeat is running on one of my client machines, and the hosts setting in its filebeat.yml file looks like this:
<

output.logstash:
#output:
 #The Logstash hosts
  #logstash:
  enabled: true
  #server.port: 5044
  hosts: ["172.31.192.3:5044"]

/>
And after running the journalctl --unit logstash command I get this log:

<

-- Logs begin at Thu 2020-07-02 15:11:10 UTC, end at Tue 2020-07-07 12:49:19 UTC. --
Jul 06 17:59:17 centos-0217.novalocal logstash[4179]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:18 centos-0217.novalocal logstash[4179]: [2020-07-06T17:59:18,344][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:18 centos-0217.novalocal logstash[4179]: [2020-07-06T17:59:18,460][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:18 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 17:59:34 centos-0217.novalocal logstash[4246]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:35 centos-0217.novalocal logstash[4246]: [2020-07-06T17:59:35,155][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:35 centos-0217.novalocal logstash[4246]: [2020-07-06T17:59:35,275][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:35 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 17:59:51 centos-0217.novalocal logstash[4310]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 17:59:52 centos-0217.novalocal logstash[4310]: [2020-07-06T17:59:52,210][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 17:59:52 centos-0217.novalocal logstash[4310]: [2020-07-06T17:59:52,333][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 17:59:52 centos-0217.novalocal systemd[1]: Started logstash.
Jul 06 18:00:08 centos-0217.novalocal logstash[4373]: Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j
Jul 06 18:00:09 centos-0217.novalocal logstash[4373]: [2020-07-06T18:00:09,427][FATAL][logstash.runner          ] An unexpected error occ
Jul 06 18:00:09 centos-0217.novalocal logstash[4373]: [2020-07-06T18:00:09,593][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateE
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Unit logstash.service entered failed state.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service failed.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: logstash.service holdoff time over, scheduling restart.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Stopped logstash.
Jul 06 18:00:09 centos-0217.novalocal systemd[1]: Started logstash.

/>

This is the issue I am facing: Logstash tries to create the index but is not able to.

I would expect that to generate errors. ssl is disabled by default, so it should complain about ssl_key and ssl_certificate being set.

:exception=>#<LogStash::ConfigurationError: Certificate or Certificate Key not configured>

That indicates that ssl is enabled, but either ssl_key or ssl_certificate is nil.
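In other words, the beats input only honours ssl_certificate and ssl_key when ssl is enabled. If TLS from Filebeat is actually wanted, it has to be switched on explicitly — a sketch reusing the paths from the config posted above:

```
input {
  beats {
    port => 5044
    type => syslog
    ssl  => true    # without this, the certificate/key settings are ignored
    ssl_certificate => "/etc/ssl/logstash_frwrd.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
  }
}
```

And whichever of the files in /etc/logstash/conf.d/ sets ssl => true must also supply both the certificate and the key; otherwise the pipeline aborts with exactly this error.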

{:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Nothing is listening on port 9200 on localhost.

Thanks @Badger for your reply. This is the logstash log
<

[root@centos-0217 logstash]# tail -f logstash-plain.log
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[2020-07-08T05:26:40,005][INFO ][org.logstash.beats.Server][main][d2cac8a23dcc7247989095ab094ee310d43be9ee407b6799c442dbe469d6d1a8] Starting server on port: 5044
[2020-07-08T05:26:46,241][ERROR][logstash.javapipeline    ][main][d2cac8a23dcc7247989095ab094ee310d43be9ee407b6799c442dbe469d6d1a8] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats ssl_certificate=>"/etc/pki/tls/certs/logstash_frwrd.crt", id=>"d2cac8a23dcc7247989095ab094ee310d43be9ee407b6799c442dbe469d6d1a8", type=>"syslog", ssl_key=>"/etc/pki/tls/private/logstash-forwarder.key", port=>5044, enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_06e09967-e630-4de0-b13e-6fdb687a5219", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>24>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:220)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:130)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1358)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[... the same "Starting server on port: 5044" / "Error: Address already in use" block and stack trace repeat ...]
[2020-07-08T05:26:54,489][INFO ][org.logstash.beats.Server][main][d2cac8a23dcc7247989095ab094ee310d43be9ee407b6799c442dbe469d6d1a8] Starting server on port: 5044


/>
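The repeated `Error: Address already in use` above means the beats input cannot bind TCP port 5044, because some other process, most often a second Logstash instance or a leftover pipeline, already holds it. A quick, non-destructive check (a sketch, assuming a Linux host with `ss` from iproute2; 5044 is the port from the config earlier in the thread):

```shell
# Print whether anything is already listening on the Beats port (5044).
# If it is taken, find and stop the old process (e.g. a stale Logstash)
# before restarting the service.
if ss -tln 2>/dev/null | grep -q ':5044 '; then
  echo "port 5044 is taken"
else
  echo "port 5044 is free"
fi
```

Running `ss -tlnp` as root additionally shows the PID of the process holding the port.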

@sumitsahay There is nothing on localhost:9200, according to the logs. It looks like Logstash can't reach Elasticsearch on localhost. Is it installed on the same server, or in Docker? Could you describe your setup? I would recommend starting over, because I think there is more than one problem.

  1. Connect Filebeat to Logstash and have Logstash only print messages to its log. Do not use certificates etc. If you can see messages, go to step 2.
  2. Connect Logstash to Elasticsearch, without certs etc. If you can see a new index, go to step 3.
  3. Add certs.

When you split your problem into smaller steps, you can find out where the problem is. You are configuring SSL, but your Logstash can't even reach Elasticsearch on localhost.
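Step 1 above can be sketched as a minimal pipeline, with SSL and the Elasticsearch output removed so that only the Beats connection is being tested (a sketch; the port matches the config earlier in the thread):

<

input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
}

/>

If `rubydebug` events appear when Filebeat sends data, the Filebeat-to-Logstash leg works and the Elasticsearch output can be added back.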

Thanks @dorinand for your reply. ELK running on the master node and in Kibana dashboard we are getting the logs, what we are trying to acheive ! we installed filebeat on one client machine now we want to send logs from client machine to ELK master node that is try to acheive, I am providing the latest logs here for logstash
This is the latest log of logstash after changing couple of things
<

[root@centos-0217 logstash]# tail -f logstash-plain.log
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[2020-07-08T07:17:12,754][INFO ][org.logstash.beats.Server][main][d65513b4781e75436f810d62d5accdebea0d3617e624e81e622027fa69d899f0] Starting server on port: 5044
[2020-07-08T07:17:18,997][ERROR][logstash.javapipeline    ][main][d65513b4781e75436f810d62d5accdebea0d3617e624e81e622027fa69d899f0] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats ssl_certificate=>"/etc/pki/tls/certs/logstash.crt", id=>"d65513b4781e75436f810d62d5accdebea0d3617e624e81e622027fa69d899f0", type=>"syslog", ssl_key=>"/etc/pki/tls/private/logstash.key", port=>5044, enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_b520725c-4460-4a9e-bdbb-73d628c09c4f", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>24>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:220)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:130)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1358)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[... the same "Starting server on port: 5044" / "Error: Address already in use" block and stack trace repeat ...]
[2020-07-08T07:17:27,244][INFO ][org.logstash.beats.Server][main][d65513b4781e75436f810d62d5accdebea0d3617e624e81e622027fa69d899f0] Starting server on port: 5044

/>

I am not able to figure out what exactly the problem is.

After commenting out the certificates, the log is as follows:
<

[root@centos-0217 logstash]# tail -f logstash-plain.log
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[2020-07-08T07:50:59,929][INFO ][org.logstash.beats.Server][main][013402de23ccffe567b9b0c7f7d4474bb194c460593fc100d7a7777c24a30797] Starting server on port: 5044
[2020-07-08T07:51:06,168][ERROR][logstash.javapipeline    ][main][013402de23ccffe567b9b0c7f7d4474bb194c460593fc100d7a7777c24a30797] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats type=>"syslog", port=>5044, id=>"013402de23ccffe567b9b0c7f7d4474bb194c460593fc100d7a7777c24a30797", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_78837116-1b65-4b3b-b455-fc041b8e9b4b", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>24>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:220)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:130)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1358)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:1019)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:366)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:404)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:462)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:897)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[2020-07-08T07:51:07,173][INFO ][org.logstash.beats.Server][main][013402de23ccffe567b9b0c7f7d4474bb194c460593fc100d7a7777c24a30797] Starting server on port: 5044
[2020-07-08T07:51:13,413][ERROR][logstash.javapipeline    ][main][013402de23ccffe567b9b0c7f7d4474bb194c460593fc100d7a7777c24a30797] A plugin had an unrecoverable error. Will restart this plugin.

/>

I can still see this error.

Did you check it first?

Please specify why you were not able to connect to Elasticsearch, and the changes you made. In the future, it could help somebody else debug their issue. Thank you.

Thanks @dorinand for your reply. I changed my logstash.conf file and now no errors appear.

<

[root@centos-0217 conf.d]# tail -f /var/log/logstash/logstash-plain.log
[2020-07-08T14:20:19,971][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-07-08T14:20:19,977][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-07-08T14:20:20,078][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2020-07-08T14:20:20,198][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-07-08T14:20:20,262][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-07-08T14:20:20,277][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>3000, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x7574c6a4 run>"}
[2020-07-08T14:20:20,321][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-07-08T14:20:21,751][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-07-08T14:20:21,858][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-07-08T14:20:22,240][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}

/>

And this is my logstash.conf file

<

input {
  stdin { }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "waflogstash"
  }
}

/>
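As an aside: the commented-out line in the original config (`index => ["waf-(date '+%Y-%m-%d_%H-%M-%S')"]`) would not work, because shell command substitution is not evaluated inside a Logstash config. If a dated index was the goal, the elasticsearch output supports date math in the index name; a sketch:

<

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "waf-%{+YYYY.MM.dd}"   # e.g. waf-2020.07.08, one index per day
  }
}

/>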
But the index is not created with that name.
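Worth noting: Elasticsearch only creates an index when the first document is written to it, and a `stdin` input only produces events when something is typed into the console Logstash is attached to (which a service install usually is not). Whether the index exists can be checked directly (a sketch, assuming Elasticsearch on localhost:9200 and the index name from the config above):

```shell
# List indices and look for "waflogstash"; the index only appears after
# at least one event has gone through the pipeline.
curl -s 'http://localhost:9200/_cat/indices?v' | grep waflogstash || echo "index not created yet"
```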

What do you mean by "the same name"? Could you describe in more detail what you have and what you would expect?