I have a problem with receiving logs via Logstash from Filebeat. I can receive them directly into Elasticsearch, but not via Logstash.
Logstash conf.d input file:
input {
  beats {
    type => "filebeat"
    port => 5044
  }
}
Filebeat config:
#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["http://192.168.0.160:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.160:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
I can't find the problem in the logs. How can I test the connection?
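For what it's worth, Filebeat ships a built-in connectivity check (assuming Filebeat 6.x or newer, run on the Filebeat host):

# dials the configured output (192.168.0.160:5044 here) and reports whether the connection succeeds
filebeat test output

# a plain TCP probe of the port can also rule out firewall issues
nc -vz 192.168.0.160 5044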
Later I want to add this filter for Odoo logs: https://gist.github.com/nagyv/398b0f9d0cb4061b6068. But I'm far from that point. The netflow module is working fine; it is also configured in Logstash.
Do you have X-Pack enabled on your ES cluster or Logstash? With the connection refused error, you may need to pass your username and password credentials for ES.
If you've created a username and password for logstash_internal, try passing those credentials through in your output config. The failed-to-connect message is what's most important to troubleshoot. Also, take a look at what indices your logstash_writer role has access to. You're probably going to have to add filebeat* to its permissions.
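As a sketch, the credentials would go into the elasticsearch output of the Logstash pipeline (the username and password below are placeholders, not values from this thread):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # placeholder credentials; use whatever user you created for Logstash
    user => "logstash_internal"
    password => "changeme"
  }
}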
I have not created anything with X-Pack or a role in Kibana, so this is probably not the problem. I think that Logstash is not starting the pipeline.
[2019-04-23T17:28:21,020][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-04-23T17:28:21,928][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>"module-netflow"}
[2019-04-23T17:28:52,557][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-23T17:28:52,576][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-23T17:28:54,015][INFO ][logstash.config.modulescommon] Starting the netflow module
[2019-04-23T17:29:16,769][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-04-23T17:29:17,100][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-04-23T17:29:17,182][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-04-23T17:29:17,188][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-04-23T17:29:17,294][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-04-23T17:29:18,175][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,234][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:18,242][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:18,840][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,917][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,921][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:19,455][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:19,679][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:19,752][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"module-netflow", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x6d07a31 run>"}
[2019-04-23T17:29:19,994][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"module-netflow"}
[2019-04-23T17:29:20,246][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:2055"}
[2019-04-23T17:29:20,296][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:"module-netflow"], :non_running_pipelines=>[]}
[2019-04-23T17:29:20,599][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:2055", :receive_buffer_bytes=>"212992", :queue_size=>"2000"}
[2019-04-23T17:29:21,407][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-23T17:30:02,782][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 1024 from source id 0, because no template to decode it with has been received. This message will usually go away after 1 minute.
How can I check whether the input on 5044 is working and the pipeline is up and running?
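Two quick checks, assuming the Logstash API endpoint is on its default port 9600 (as in the log above):

# is anything listening on the beats port?
sudo ss -tlnp | grep 5044

# ask the Logstash API which pipelines are loaded and running
curl -s 'http://localhost:9600/_node/pipelines?pretty'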
This is going back a way for me, but if I remember correctly I had to copy my config into the Pipelines tool in Kibana to get things working. I'm not sure why that worked, but it did. After that I had to make sure that the logstash_writer role, assuming you've created one, had permissions to write, delete, and create indices for the specific beat.
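If you do go down the role route, the index permissions might look something like this (the filebeat-* pattern is an assumption based on your setup; run it in Kibana's Dev Tools):

PUT _security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["write", "create", "create_index", "delete"]
    }
  ]
}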
I think this is the real problem I have: it's not picking up the config files. The netflow module is working fine, and the configuration it should pick up is in conf.d. At the moment pipelines.yml is commented out.
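The log line above ("Ignoring the 'pipelines.yml' file because modules or command line options are specified") suggests why: while a module is enabled, Logstash skips pipelines.yml entirely, so a conf.d pipeline defined there never starts. If the module setting is removed, a minimal pipelines.yml for the beats config might look like this (assuming the default CentOS paths):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"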
This all happened because of the netflow module, which I had activated. Does anyone actually know how the netflow module should be started and configured (ideally step by step on CentOS 7)? @theuntergeek and @guyboertje
I'm sorry for tagging you in this, but @guyboertje, this is an information problem. Look at this:
When I deactivate the netflow module, I receive the Filebeat notifications. So the question is: how should the netflow module be started and configured?
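For reference, a minimal sketch of enabling the module, either in logstash.yml or as a one-off command (port 2055 and localhost:9200 are taken from the logs above; adjust as needed):

# logstash.yml
modules:
  - name: netflow
    var.input.udp.port: 2055
    var.elasticsearch.hosts: "localhost:9200"

# or one-off, loading index templates and dashboards on first run
/usr/share/logstash/bin/logstash --modules netflow --setup -M "netflow.var.input.udp.port=2055"

Note that enabling the module this way is what makes Logstash ignore pipelines.yml, as discussed above.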