Metricbeat not sending information to Logstash

Output:
Sending Logstash's logs to C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/logs which is now configured via log4j2.properties
[2018-03-21T09:23:23,018][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/modules/fb_apache/configuration"}
[2018-03-21T09:23:23,042][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/modules/netflow/configuration"}
[2018-03-21T09:23:23,263][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-03-21T09:23:23,940][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-03-21T09:23:24,505][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-21T09:23:27,309][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-03-21T09:23:27,719][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-03-21T09:23:27,732][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-03-21T09:23:27,931][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-03-21T09:23:27,993][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-21T09:23:27,997][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-03-21T09:23:28,015][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-03-21T09:23:28,040][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-03-21T09:23:28,109][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-03-21T09:23:28,523][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-03-21T09:23:28,588][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x342103a1 run>"}
[2018-03-21T09:23:28,650][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-03-21T09:23:28,733][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}

But I can't see any output (no Metricbeat data is coming through).

metricbeat.yml file:
metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - filesystem
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']
  cpu_ticks: false
  #_source.enabled: false

setup.kibana:

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Logstash config file:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "metricbeatstart"
  }
}

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

These are my files, metricbeat.yml and test.config.

metricbeat.yml file:

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - filesystem
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']
  cpu_ticks: false

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]

test.config file:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    # Note: the beats input plugin has no "@metadata" option, so this block is not valid Logstash config syntax:
    # "@metadata": {
    #   "beat": "metricbeat",
    #   "version": "6.2.2",
    #   "type": "doc"
    # }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Maybe remove:

setup.kibana:

Anyway, you should use stdout as an output plugin, so you can check whether anything is being received by Logstash at all.
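For example, a minimal sketch of a debug-only pipeline (same beats port as in your config, stdout only) could look like this:

```
input {
  beats {
    port => 5044
  }
}

output {
  # Print every incoming event to the console so you can see whether anything arrives at all
  stdout { codec => rubydebug }
}
```

If events show up on the console while Metricbeat is running, the Beats side is fine and the problem is between Logstash and Elasticsearch; if nothing is printed, Metricbeat is never reaching Logstash. You can also use `rubydebug { metadata => true }` to print the `@metadata` fields your index pattern relies on.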

BTW I moved your question to #logstash

Thank you so much.

I even tried this:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }

  stdout { codec => rubydebug }
}

Output:
Sending Logstash's logs to C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/logs which is now configured via log4j2.properties
[2018-03-21T10:55:50,603][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/modules/fb_apache/configuration"}
[2018-03-21T10:55:50,617][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/Users/328347935/Downloads/EKL/logstash-6.2.2/logstash-6.2.2/modules/netflow/configuration"}
[2018-03-21T10:55:50,817][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-03-21T10:55:51,596][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-03-21T10:55:52,208][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-21T10:55:54,833][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature. If you have any questions about this, please visit the logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::Elasticsearch hosts=>[http://localhost:9200], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", id=>"031c8a9ad82d4a7de81000014978378b1f8eeb25fac35f5e6a648ad68f686fec", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_c080b35f-7b8c-4327-a640-bdf1cc44809c", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-03-21T10:55:54,989][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-03-21T10:55:55,447][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-03-21T10:55:55,457][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-03-21T10:55:55,636][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-03-21T10:55:55,698][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-21T10:55:55,703][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-03-21T10:55:55,728][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-03-21T10:55:56,173][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-03-21T10:55:56,262][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4d13b97b run>"}
[2018-03-21T10:55:56,317][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-03-21T10:55:56,425][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-03-21T10:56:00,796][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2018-03-21T10:56:00,799][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2018-03-21T10:56:00,819][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
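Side note on the WARN above: it comes from the deprecated `document_type` option, so one option is to simply drop that setting. A minimal sketch of the same elasticsearch output without it (keeping the index pattern from the config) would be:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    # Let Elasticsearch use its default document type instead of the deprecated document_type option
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```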

I figured it out.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.