Logstash monitoring is not shown in the Kibana Monitoring UI

Hi, I am using Logstash version 7.0 and I have set up X-Pack monitoring for Logstash, but monitoring data is not shown in the Kibana Monitoring UI, and the Logstash monitoring indices are not getting created either.

Below is my logstash.yml configuration:

# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.license.self_generated.type: trial
 xpack.security.enabled: true
 xpack.watcher.enabled: true
 xpack.monitoring.enabled: true
 xpack.monitoring.collection.enabled: true
 xpack.monitoring.elasticsearch.username: logstash-system
 xpack.monitoring.elasticsearch.password: elasticlogstash
 xpack.monitoring.elasticsearch.hosts: ["http://10.160.0.5:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
 xpack.management.enabled: true
 xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

Any suggestions/comments, please?

Hi @siddaram_kj,

Are you seeing any error logs in either ES or the logstash log files? Can you also query for your cluster settings (GET /_cluster/settings) and return those here?
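
If it's easier from a terminal, a curl along these lines should return the same settings (the password is a placeholder for your own):

curl -u elastic:<password> -XGET 'http://10.160.0.5:9200/_cluster/settings?pretty'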

One minor thing: there seem to be some non-Logstash-related settings in your logstash.yml, notably:

xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.watcher.enabled: true

Did you mean to add these to your elasticsearch.yml?
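
For reference, a minimal monitoring block in logstash.yml would look something like the sketch below (the password is a placeholder, and note that the built-in monitoring user is logstash_system, with an underscore):

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <password>
xpack.monitoring.elasticsearch.hosts: ["http://10.160.0.5:9200"]

The three keys quoted above belong in elasticsearch.yml.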

Hi @chrisronline,

Below is the output of GET /_cluster/settings:

{
  "persistent" : {
    "xpack" : {
      "monitoring" : {
        "collection" : {
          "enabled" : "true"
        }
      }
    }
  },
  "transient" : { }
}
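
For context, a persistent flag like this is typically set through the cluster settings API, along these lines (credentials are placeholders):

curl -u elastic:<password> -H 'Content-Type: application/json' -XPUT 'http://10.160.0.5:9200/_cluster/settings' -d '{"persistent": {"xpack.monitoring.collection.enabled": true}}'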

I have added the settings below to both elasticsearch.yml and logstash.yml:

xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.watcher.enabled: true

I have checked the ES and Logstash logs; there were no errors.

A few more questions:

  1. Can you return the results of GET _cat/indices?
  2. Can you start Logstash in debug mode (an example command is below) and look for any errors or logs around monitoring?
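
For the second question, a sketch of the command (the pipeline path is a placeholder for yours):

bin/logstash --path.settings /etc/logstash --log.level debug -f /path/to/pipeline.conf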

Hi @chrisronline,
Below is the output of the command GET _cat/indices:

green open .monitoring-alerts-7            qJ9v2ENNQZ2HDgNmcZh0qg 1 1    14     0  64.1kb    32kb
green open .watcher-history-9-2019.05.10   ohrqSJuWSn2o_AL6g1vK1A 1 1   686     0   2.3mb   1.1mb
green open .watcher-history-9-2019.05.14   -fROfo1rQNSo9Z9MhbH3bQ 1 1   188     0 959.5kb 479.7kb
green open .watcher-history-9-2019.05.16   gZRq4xRAQqecTIqDoEcCzA 1 1   298     0   1.1mb 591.4kb
green open .kibana_1                       0fHut0uvTHGiE40jBChVuw 1 1    70     8 247.1kb 123.5kb
green open .monitoring-kibana-7-2019.05.14 8tzZbTBuTFqk2UA4ESTgog 1 1   189     0 399.3kb 199.6kb
green open .monitoring-es-7-2019.05.16     o40jgQK7RgWdsLwl8G8S1Q 1 1 10992 15448  15.5mb   7.7mb
green open .watcher-history-9-2019.05.17   2KRdLAQ3Q4CrBkrJQqLb3Q 1 1    38     0 349.6kb 209.1kb
green open .triggered_watches              BTVLZ9yAQreb-wJnCPuajA 1 1     0     0  95.4kb  47.7kb
green open apacheaccess--logs-2019.05.16   X8s3L_GgSAScEJLeIK8vpg 1 1   243     0   459kb 229.5kb
green open .kibana_task_manager            o-IJJmRQQjuyIj9GjFApEA 1 1     2     0 110.3kb  55.1kb
green open apacheaccess--logs-2019.05.12   XwXXtNffRwaNkVvapO8vAw 1 1  1625     0   2.4mb   1.2mb
green open apacheaccess--logs-2019.05.14   JggR8vPwQue7zLAqRx06WA 1 1    10     0  72.3kb  36.1kb
green open .monitoring-kibana-7-2019.05.16 -o7FaDEmSTSPLytcniZhLQ 1 1   294     0 387.5kb 185.4kb
green open .security-7                     91lFotuASeOwDgBAG0RPJQ 1 1    10     0 109.8kb  54.9kb
green open apacheaccess--logs-2019.05.10   7bhBdVsXSiGT9j2yOghjbA 1 1   292     0 814.6kb 407.3kb
green open .monitoring-es-7-2019.05.15     eRpALTZeRV-6AMJz2AGWWw 1 1  1527  2244     3mb   1.4mb
green open .monitoring-kibana-7-2019.05.10 9kHj-h7_RHqT-PToOQ5mAw 1 1   651     0 682.1kb   341kb
green open .monitoring-kibana-7-2019.05.12 thsq8yyQR0GGttCXhZyCYw 1 1  1037     0   923kb 461.5kb
green open kibana_sample_data_flights      iRDX_Z_YS7q7rGPOHcSHog 1 1 13059     0  12.8mb   6.4mb
green open .watcher-history-9-2019.05.15   S90XTbSCSnGbLF3U71UqfQ 1 1    64     0   302kb   151kb
green open .monitoring-es-7-2019.05.17     QzBggClUTD2cAjvVU_PEhw 1 1  1874  2012     3mb   1.4mb
green open .monitoring-es-7-2019.05.10     _VgW8JaDTdS4R6MgMYZHkA 1 1 15828  4836  20.6mb  10.3mb
green open .watches                        X20Eg5bsTemdUlYt8KNRpg 1 1     6    30 883.8kb 431.3kb
green open apacheaccess--logs-2019.05.17   tdLg9ywHR2mdypAicmJ2XA 1 1    53     0 192.6kb  96.3kb
green open .monitoring-es-7-2019.05.12     XhohQlrLQE-ZJJf1ivW3Kg 1 1 27459  7174  34.1mb    17mb
green open .monitoring-es-7-2019.05.09     SclyOorvTHuOM-GvxlE0NA 1 1   706   388   1.8mb 940.2kb
green open .monitoring-kibana-7-2019.05.17 kYpLh2ASQ4Su1ocN-w6ttA 1 1    39     0 394.5kb 176.3kb
green open .monitoring-es-7-2019.05.14     YdH6FXs4St-U-T5WrHaePQ 1 1  5900  7718    10mb     5mb
green open .monitoring-kibana-7-2019.05.09 FevtdEfASTOpqAR94Ja3Aw 1 1    47     0  99.4kb  49.7kb
green open .watcher-history-9-2019.05.12   6PFmpk5GSFiUx5OfHbnMuA 1 1  1026     0   3.5mb   1.7mb

When I run Logstash in debug mode, no log file is created, but Logstash itself runs fine.

Can you restart Logstash and share the log messages you see on startup? Please start in debug mode and include all the startup-related logs.

Below are the logs when I start Logstash:

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-05-19 16:34:28.396 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-05-19 16:34:28.433 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.0.1"}
[INFO ] 2019-05-19 16:34:44.529 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@10.160.0.5:9200/]}}
[WARN ] 2019-05-19 16:34:45.389 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@10.160.0.5:9200/"}
[INFO ] 2019-05-19 16:34:46.026 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-05-19 16:34:46.038 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2019-05-19 16:34:46.138 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.160.0.5:9200"]}
[INFO ] 2019-05-19 16:34:46.183 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
[INFO ] 2019-05-19 16:34:46.501 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2019-05-19 16:34:46.921 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[INFO ] 2019-05-19 16:34:47.606 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x32d4ace0 run>"}
[INFO ] 2019-05-19 16:34:48.791 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_8636a19711465cc96926000984eb4005", :path=>["/var/log/apache2/access.log"]}
[INFO ] 2019-05-19 16:34:48.897 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2019-05-19 16:34:49.306 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-05-19 16:34:49.333 [[main]<file] observingtail - START, creating Discoverer, Watch with file and sincedb collections
[INFO ] 2019-05-19 16:34:50.778 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}

This looks like a problem. If you are editing a logstash.yml, it doesn't seem like the Logstash process is detecting that file. I'd debug that first; it might fix these issues.

I have even specified the logstash.yml file path using the --path.settings option while running Logstash, but the Logstash monitoring indices are still not getting created.
Can you please tell me what configuration I have to change for this?

From https://www.elastic.co/guide/en/logstash/current/running-logstash-command-line.html:

--path.settings SETTINGS_DIR

Set the directory containing the logstash.yml settings file as well as the log4j logging configuration. This can also be set through the LS_SETTINGS_DIR environment variable. The default is the config directory under Logstash home.

Can you share the exact command you're using to start Logstash, as well as an ls -la of the --path.settings directory? Then a cat logstash.yml from within the same --path.settings directory?

Here is the ls -la result from inside the /etc/logstash directory. I have changed the permissions of the logstash.yml file:

drwxrwxr-x   3 root root 4096 May 21 04:53 .
drwxr-xr-x 103 root root 4096 May 21 04:48 ..
drwxrwxr-x   2 root root 4096 Apr 29 13:59 conf.d
-rw-r--r--   1 root root 1829 Apr 29 13:56 jvm.options
-rw-r--r--   1 root root 4987 Apr 29 13:56 log4j2.properties
-rw-r--r--   1 root root  342 Apr 29 13:56 logstash-sample.conf
-rwxrwxrwx   1 root root 8364 May 19 18:01 logstash.yml
-rw-r--r--   1 root root  285 Apr 29 13:56 pipelines.yml
-rw-------   1 root root 1696 Apr 29 13:56 startup.options

Below is my logstash.yml file:

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
   pipeline.batch.size: 125
   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
 node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
 path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
 pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
 pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
 pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
 config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here.  Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#

# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
 log.level: debug
 path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
 xpack.license.self_generated.type: trial
 xpack.security.enabled: true
 xpack.watcher.enabled: true
 xpack.monitoring.enabled: true
 xpack.monitoring.collection.enabled: true
 xpack.monitoring.elasticsearch.username: logstash-system
 xpack.monitoring.elasticsearch.password: elasticlogstash
 xpack.monitoring.elasticsearch.hosts: ["http://x.x.x.x:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: true
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
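
One thing worth checking with a file like this, given the uneven indentation on some of the uncommented keys, is whether it still parses as YAML at all. A quick sketch, assuming Python 3 with PyYAML installed:

python3 -c "import yaml; yaml.safe_load(open('/etc/logstash/logstash.yml')); print('parses OK')"

If this raises an error, that could explain why the settings are not being applied.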

What command are you using to start Logstash? Can you please include the full command?

First I navigate to the /usr/share/logstash directory, and I use the below command to run Logstash:
sudo bin/logstash /home/siddaram094/sample-apache.conf

I think you want to use sudo bin/logstash --path.settings /etc/logstash instead. Try that and let me know.
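
In full, with the pipeline file you mentioned earlier, that would be something like:

sudo bin/logstash --path.settings /etc/logstash -f /home/siddaram094/sample-apache.conf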

Hi @chrisronline,
I have started Logstash using the below command:
/usr/share/logstash$ sudo bin/logstash --path.settings /etc/logstash/ -f /home/siddaram094/sample-log.conf

but the monitoring indices for Logstash are still not getting created.

These are the logs generated on the console:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-05-21T16:29:03,622][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-05-21T16:29:03,687][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-05-21T16:29:16,982][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@10.160.0.5:9200/]}}
[2019-05-21T16:29:17,596][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@10.160.0.5:9200/"}
[2019-05-21T16:29:17,866][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-05-21T16:29:17,879][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-05-21T16:29:17,939][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.160.0.5:9200"]}
[2019-05-21T16:29:17,987][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-05-21T16:29:18,261][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-05-21T16:29:18,557][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-05-21T16:29:19,270][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x681b32e0 run>"}
[2019-05-21T16:29:20,276][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_8636a19711465cc96926000984eb4005", :path=>["/var/log/apache2/access.log"]}
[2019-05-21T16:29:20,404][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-05-21T16:29:20,628][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-05-21T16:29:20,689][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-05-21T16:29:21,775][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

What is in this file? By passing it with -f, Logstash will use it as the pipeline configuration file. Can you show the contents of this file?

The content of /home/siddaram094/sample-log.conf is:

input {
  file {
    path => "/var/log/apache2/access.log"
  }
}

filter {
  grok {
    match => {
      "message" => "%{IPORHOST:ip} %{USER:user} %{USER:ident} \[%{HTTPDATE:time}\] \"%{WORD:httpmethod} / HTTP\/%{NUMBER:httpVer}\" %{NUMBER:statuscode} %{NUMBER:resbytes:int} %{QS:data} %{QS:uagent}"
    }
  }
  geoip {
    source => "ip"
    target => "clientip"
  }
  date {
    match => ["time", "dd/MMM/YYYY:HH:mm:ss Z"]
    target => "accesstime"
  }
  useragent {
    source => "uagent"
    target => "userAgent"
  }
}

output {
  elasticsearch {
    hosts => ["x.x.x.x:9200"]
    index => "apacheaccess--logs-%{+YYYY.MM.dd}"
    user => elastic
    password => elastic
  }
}
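
As an aside, a pipeline file like this can be syntax-checked without starting the pipeline by doing a dry run, e.g.:

sudo bin/logstash --path.settings /etc/logstash -f /home/siddaram094/sample-log.conf --config.test_and_exit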

Can you share the output of this?
sudo bin/logstash --path.settings /etc/logstash/ -f /home/siddaram094/sample-log.conf --debug --config.debug

It will be a lot, but it should help identify the issue here.

Below is the output. Due to the size limitation, I am not able to post all the logs in a single post.

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-05-21T17:26:19,745][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2019-05-21T17:26:19,757][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x16209621 @directory="/usr/share/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[2019-05-21T17:26:19,766][DEBUG][logstash.modules.scaffold] Found module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2019-05-21T17:26:19,767][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x735c0454 @directory="/usr/share/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[2019-05-21T17:26:20,537][DEBUG][logstash.runner          ] -------- Logstash Settings (* means modified) ---------
[2019-05-21T17:26:20,545][DEBUG][logstash.runner          ] node.name: "apache-logstash"
[2019-05-21T17:26:20,545][DEBUG][logstash.runner          ] *path.config: "/home/siddaram094/sample-log.conf"
[2019-05-21T17:26:20,545][DEBUG][logstash.runner          ] path.data: "/usr/share/logstash/data"
[2019-05-21T17:26:20,545][DEBUG][logstash.runner          ] modules.cli: []
[2019-05-21T17:26:20,546][DEBUG][logstash.runner          ] modules: []
[2019-05-21T17:26:20,546][DEBUG][logstash.runner          ] modules_list: []
[2019-05-21T17:26:20,546][DEBUG][logstash.runner          ] modules_variable_list: []
[2019-05-21T17:26:20,547][DEBUG][logstash.runner          ] modules_setup: false
[2019-05-21T17:26:20,548][DEBUG][logstash.runner          ] config.test_and_exit: false
[2019-05-21T17:26:20,548][DEBUG][logstash.runner          ] config.reload.automatic: false
[2019-05-21T17:26:20,548][DEBUG][logstash.runner          ] config.reload.interval: 3000000000
[2019-05-21T17:26:20,548][DEBUG][logstash.runner          ] config.support_escapes: false
[2019-05-21T17:26:20,548][DEBUG][logstash.runner          ] config.field_reference.parser: "STRICT"
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] metric.collect: true
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] pipeline.id: "main"
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] pipeline.system: false
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] pipeline.workers: 1
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] pipeline.batch.size: 125
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] *pipeline.batch.delay: 5 (default: 50)
[2019-05-21T17:26:20,549][DEBUG][logstash.runner          ] pipeline.unsafe_shutdown: false
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] pipeline.java_execution: true
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] pipeline.reloadable: true
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] path.plugins: []
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] *config.debug: true (default: false)
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] *log.level: "debug" (default: "info")
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] version: false
[2019-05-21T17:26:20,550][DEBUG][logstash.runner          ] help: false
[2019-05-21T17:26:20,551][DEBUG][logstash.runner          ] log.format: "plain"
[2019-05-21T17:26:20,551][DEBUG][logstash.runner          ] http.host: "127.0.0.1"
[2019-05-21T17:26:20,551][DEBUG][logstash.runner          ] http.port: 9600..9700
[2019-05-21T17:26:20,551][DEBUG][logstash.runner          ] http.environment: "production"
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.type: "memory"
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.drain: false
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.page_capacity: 67108864
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.max_bytes: 1073741824
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.max_events: 0
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.checkpoint.acks: 1024
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.checkpoint.writes: 1024
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.checkpoint.interval: 1000
[2019-05-21T17:26:20,552][DEBUG][logstash.runner          ] queue.checkpoint.retry: false
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] dead_letter_queue.enable: false
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] dead_letter_queue.max_bytes: 1073741824
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] slowlog.threshold.warn: -1
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] slowlog.threshold.info: -1
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] slowlog.threshold.debug: -1
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] slowlog.threshold.trace: -1
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[2019-05-21T17:26:20,553][DEBUG][logstash.runner          ] *keystore.file: "/etc/logstash/logstash.keystore" (default: "/usr/share/logstash/config/logstash.keystore")
[2019-05-21T17:26:20,554][DEBUG][logstash.runner          ] path.queue: "/usr/share/logstash/data/queue"
[2019-05-21T17:26:20,554][DEBUG][logstash.runner          ] path.dead_letter_queue: "/usr/share/logstash/data/dead_letter_queue"
[2019-05-21T17:26:20,554][DEBUG][logstash.runner          ] *path.settings: "/etc/logstash/" (default: "/usr/share/logstash/config")
[2019-05-21T17:26:20,555][DEBUG][logstash.runner          ] path.logs: "/usr/share/logstash/logs"
[2019-05-21T17:26:20,555][DEBUG][logstash.runner          ] xpack.management.enabled: false
[2019-05-21T17:26:20,556][DEBUG][logstash.runner          ] xpack.management.logstash.poll_interval: 5000000000
[2019-05-21T17:26:20,556][DEBUG][logstash.runner          ] xpack.management.pipeline.id: ["main"]
[2019-05-21T17:26:20,556][DEBUG][logstash.runner          ] xpack.management.elasticsearch.username: "logstash_system"
[2019-05-21T17:26:20,556][DEBUG][logstash.runner          ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[2019-05-21T17:26:20,557][DEBUG][logstash.runner          ] xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[2019-05-21T17:26:20,557][DEBUG][logstash.runner          ] xpack.management.elasticsearch.sniffing: false
[2019-05-21T17:26:20,562][DEBUG][logstash.runner          ] xpack.monitoring.enabled: false
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.collection.interval: 10000000000
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.collection.timeout_interval: 600000000000
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.username: "logstash_system"
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.sniffing: false
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.collection.pipeline.details.enabled: true
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] xpack.monitoring.collection.config.enabled: true
[2019-05-21T17:26:20,563][DEBUG][logstash.runner          ] node.uuid: ""
[2019-05-21T17:26:20,564][DEBUG][logstash.runner          ] --------------- Logstash Settings -------------------
[2019-05-21T17:26:20,743][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-05-21T17:26:20,768][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-05-21T17:26:20,866][DEBUG][logstash.agent           ] Setting up metric collection
[2019-05-21T17:26:21,007][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-05-21T17:26:21,580][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2019-05-21T17:26:21,796][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}