Logstash - Metricbeat index doesn't populate

I'm trying to get Metricbeat to populate data in Elasticsearch but can't figure out what the missing pieces are. I currently have a Windows Server 2016 sandbox with Elasticsearch, Kibana, and Logstash on one node, abc, and Metricbeat running on another node, xyz. X-Pack security has been enabled and passwords have been set. The metricbeat index pattern populates, but it contains no data. So overall, Logstash is connected to Elasticsearch, but as far as I can tell no data is getting through.

My Logstash pipeline file (logstash.conf) is as follows:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "http://abc:9200"
    manage_template => true
    #hosts => "https://abc:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "elastic"
    password => "Workthistime19"
    #ssl => false
    #cacert => '/path/to/cert.pem'
  }
}
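As an aside, the `index` option above uses Logstash's sprintf references into the event's `@metadata`, which the beats input populates from each incoming Beat. A minimal Python sketch of how that pattern expands (the metadata values below are illustrative, not taken from a real event):

```python
from datetime import datetime, timezone

def expand_index(pattern: str, metadata: dict, ts: datetime) -> str:
    """Roughly mimic Logstash sprintf expansion for the index pattern above.

    Only handles %{[@metadata][key]} references and the %{+YYYY.MM.dd}
    date suffix; real Logstash supports far more.
    """
    out = pattern
    for key, value in metadata.items():
        out = out.replace("%%{[@metadata][%s]}" % key, value)
    # Logstash's Joda-style +YYYY.MM.dd corresponds to strftime %Y.%m.%d
    return out.replace("%{+YYYY.MM.dd}", ts.strftime("%Y.%m.%d"))

# Example metadata as the beats input would set it (values illustrative)
meta = {"beat": "metricbeat", "version": "6.5.0"}
ts = datetime(2018, 11, 29, tzinfo=timezone.utc)
print(expand_index("%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}",
                   meta, ts))
# → metricbeat-6.5.0-2018.11.29
```

That expansion is why the index only shows up once real events arrive: with no events, there is no metadata to resolve and nothing gets written.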

metricbeat.yml (abbreviated):

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  
setup.template.settings:
  index.number_of_shards: 0
  index.codec: best_compression

setup.dashboards.enabled: true

setup.kibana:
  host: "http://abc:5601"
  username: "kibana"
  password: "Workthistime19k"

output.logstash:
  hosts: "http://abc:5044"
  username: "logstash_system"
  password: "Workthistime19l"

logging.level: debug

logging.selectors: ["*"]
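A few things in this snippet are worth flagging: the Beats Logstash output expects bare `host:port` entries with no `http://` scheme (the Beats-to-Logstash protocol isn't HTTP), `output.logstash` has no `username`/`password` options (credentials apply to the `elasticsearch` and `kibana` outputs, and `logstash_system` is a monitoring user, not a Beats user), and `index.number_of_shards` must be at least 1. A corrected sketch of those two sections, keeping the host name from the post:

```yaml
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

output.logstash:
  # Bare host:port, no scheme
  hosts: ["abc:5044"]
```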

Logstash connection:

[2018-11-28T14:48:50,338][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-28T14:48:50,380][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4da41b78 run>"}
[2018-11-28T14:48:50,544][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-28T14:48:50,590][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-28T14:48:51,171][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Could you please post your full metricbeat config?

Complete metricbeat.yml file

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 0
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "http://abc:5601"
  username: "kibana"
  password: "Pleasework18k"
  #setup.kibana.protocol: "https"
  #setup.kibana.path: /kibana
  
  #2018.11.23 config
  #ssl.enabled: true
  #ssl.key: C:\ProgramData\Elastic\Elasticsearch\config\certs\kiban-test\instance\instance.key
  #ssl.certificate: C:\ProgramData\Elastic\Elasticsearch\config\certs\kiban-test\instance\instance.crt
  #ssl.certificate_authorities: C:\ProgramData\Elastic\Elasticsearch\config\certs\test-run\ca\ca.crt
  

#============================= Elastic Cloud ==================================

# These settings simplify using metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: "http://abc:9200"
  #hosts: "https://abc:9200"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "metricbeat"
  #password: "Pleasework18m"
  
  #Added on 2018/11/08 to test TLS\SSL settings:
  #output.elasticsearch.ssl.certificate_authorities: C:\Program Files\Metricbeat\certs\elastic-stack-ca.p12
  #output.elasticsearch.ssl.certificate: C:\Program Files\Metricbeat\certs\elastic-certificates.p12
  #output.elasticsearch.ssl.key: C:\Program Files\Metricbeat\certs\elastic-certificates.p12

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: "http://xx.xxx.xx.xxx:5044"
  username: "logstash_system"
  password: "Pleasework18l"
  
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

And did you enable the modules you want to collect? Please post those configs as well.

I've been using the default System module and that's it for now. Just trying to get basic connectivity going.

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/6.4/metricbeat-module-system.html

- module: system
  period: 10s
  metricsets:
    - cpu
    #- load
    - memory
    - network
    - process
    - process_summary
    #- core
    #- diskio
    #- socket
  process.include_top_n:
    by_cpu: 5      # include top 5 processes by CPU
    by_memory: 5   # include top 5 processes by memory

- module: system
  period: 1m
  metricsets:
    - filesystem
    - fsstat
  processors:
  - drop_event.when.regexp:
      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

- module: system
  period: 15m
  metricsets:
    - uptime

#- module: system
#  period: 5m
#  metricsets:
#    - raid
#  raid.mount_point: '/'
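For what it's worth, the `drop_event` condition above uses a regexp anchored at the start of the mount point. A quick Python sketch of which mounts it would drop (Python's `re` is close enough to the Beats regexp syntax for this particular pattern):

```python
import re

# Same pattern as the drop_event.when.regexp condition in system.yml
MOUNTS_TO_DROP = re.compile(r"^/(sys|cgroup|proc|dev|etc|host|lib)($|/)")

def dropped(mount_point: str) -> bool:
    """Return True if a filesystem event for this mount point would be dropped."""
    return MOUNTS_TO_DROP.search(mount_point) is not None

print(dropped("/sys/fs/cgroup"))  # → True
print(dropped("/dev"))            # → True
print(dropped("/home"))           # → False
print(dropped("C:\\"))            # → False (Windows drive mounts are unaffected)
```

On a Windows host like xyz this processor is effectively a no-op, since drive mount points never start with `/`.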

Excellent, I assume you enabled the module?

Yeah, I'm looking at the modules.d folder right now, and the only file that doesn't have .disabled appended is system.yml. So system should be the only enabled module right now.

Do I need to add input and output plugins to make this work, or is that covered by the conf file?
Update: the input and output plugins have been installed, but still no index is populating.
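(For reference, the beats input and elasticsearch output plugins ship bundled with Logstash 6.x, so no separate install should be needed; you can confirm from the Logstash directory on Windows with something like:)

```shell
bin\logstash-plugin list --verbose | findstr "beats elasticsearch"
```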

Current status: reading through this, it seems like the inputs and outputs are working.

C:\logstash-6.5.0\bin>logstash.bat -f, --path.config C:\logstash-6.5.0\config\logstash.conf
Sending Logstash logs to C:/logstash-6.5.0/logs which is now configured via log4j2.properties
[2018-11-29T10:55:12,355][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-29T10:55:12,398][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.0"}
[2018-11-29T10:55:17,028][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch password=><password>, hosts=>[http://dch789lw5app.svc.ny.gov:9200], index=>"%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}", manage_template=>true, id=>"90566165c0bec981c75a1b741568e553eea7dbfb6c17fe5361fbbd3f047efae6", user=>"elastic", document_type=>"%{[@metadata][type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_9c8d43d8-bdee-49a4-b19c-fabc93f8ca45", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-11-29T10:55:17,182][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-29T10:55:18,265][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@dch789lw5app.svc.ny.gov:9200/]}}
[2018-11-29T10:55:18,338][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@dch789lw5app.svc.ny.gov:9200/, :path=>"/"}
[2018-11-29T10:55:18,868][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@dch789lw5app.svc.ny.gov:9200/"}
[2018-11-29T10:55:18,982][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-29T10:55:18,990][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-29T10:55:19,042][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://dch789lw5app.svc.ny.gov:9200"]}
[2018-11-29T10:55:19,085][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-29T10:55:19,138][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-29T10:55:20,035][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-29T10:55:20,094][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5ca9a9aa run>"}
[2018-11-29T10:55:20,207][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-29T10:55:20,234][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-29T10:55:20,759][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I can't confirm this was the actual problem, beyond the fact that I hadn't installed the input and output plugins, but I believe the issue stemmed from trying to incorporate the "manually loading the template" steps into my installation. I worked around it by pointing Metricbeat directly at Elasticsearch so it populated an index and the correct index pattern. Then I changed the output.logstash section of the Metricbeat config back and started up Logstash. Worked like a charm.

For this configuration, you must load the index template into Elasticsearch manually because the options for auto loading the template are only available for the Elasticsearch output.
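A sketch of that manual template load, following the pattern from the Beats docs (run on xyz from the Metricbeat install directory; shown in PowerShell form, credentials are placeholders):

```shell
# Temporarily disable the Logstash output and point setup at Elasticsearch
# so the index template loads, then restart the service with the normal config.
.\metricbeat.exe setup --template `
  -E output.logstash.enabled=false `
  -E 'output.elasticsearch.hosts=["abc:9200"]' `
  -E output.elasticsearch.username=elastic `
  -E output.elasticsearch.password=<password>
```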


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.