Filebeat (v6.2.3) complaining that ES plugins (v6.2.3) are missing

2018-03-21T16:45:48.131Z ERROR pipeline/output.go:74 Failed to connect: Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip

All the ES nodes in the cluster Filebeat is pointing at 1) are v6.2.3 and 2) have both plugins installed.

Plugins directory: /usr/share/elasticsearch/plugins
discovery-ec2
  • Plugin information:
    Name: discovery-ec2
    Description: The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism.
    Version: 6.2.3
    Native Controller: false
    Requires Keystore: false
    Extended Plugins: []
  • Classname: org.elasticsearch.discovery.ec2.Ec2DiscoveryPlugin

ingest-geoip
  • Plugin information:
    Name: ingest-geoip
    Description: Ingest processor that uses looksup geo data based on ip adresses using the Maxmind geo database
    Version: 6.2.3
    Native Controller: false
    Requires Keystore: false
    Extended Plugins: []
  • Classname: org.elasticsearch.ingest.geoip.IngestGeoIpPlugin

ingest-user-agent
  • Plugin information:
    Name: ingest-user-agent
    Description: Ingest processor that extracts information from a user agent
    Version: 6.2.3
    Native Controller: false
    Requires Keystore: false
    Extended Plugins: []
  • Classname: org.elasticsearch.ingest.useragent.IngestUserAgentPlugin
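For what it's worth, the same check can be done through the API from a single host instead of logging into each node (a sketch; assumes the cluster is reachable on localhost:9200, adjust the host/port as needed):

curl -s 'localhost:9200/_cat/plugins?v&h=name,component,version'

Every node should list ingest-geoip and ingest-user-agent in the component column.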

Can you please perform a GET _nodes/ingest request to Elasticsearch and share the output? This is what Filebeat uses to confirm that each node has the required ingest processors.
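If it helps, the equivalent from the command line would be something like this (a sketch; again assuming localhost:9200, with filter_path just trimming the response down to the node names and their ingest processors):

curl -s 'localhost:9200/_nodes/ingest?pretty&filter_path=nodes.*.name,nodes.*.ingest.processors'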

I grep'd out the relevant bits; if you need the whole output, I can provide it. The Kibana node is just a coordinating node: non-master, non-data, non-ingest.

"cluster_name" : "enotes-logs",
"name" : "kibana",
"type" : "date_index_name"
"type" : "geoip"
"type" : "rename"
"type" : "user_agent"
"name" : "es-logs2",
"type" : "date_index_name"
"type" : "geoip"
"type" : "rename"
"type" : "user_agent"
"name" : "es-logs3",
"type" : "date_index_name"
"type" : "geoip"
"type" : "rename"
"type" : "user_agent"
"name" : "es-logs1",
"type" : "date_index_name"
"type" : "geoip"
"type" : "rename"
"type" : "user_agent"

Well, having changed nothing on my end, I just tried starting the Filebeat agents again and they're no longer complaining. Not sure what happened, but this is no longer an issue (for me, at least).

On another note, I've had hit-or-miss luck getting Apache and nginx logs to actually get parsed properly in Kibana. This must be a FAQ, but are there any obvious places to look to figure out why that's not working? The logs are showing up; they just don't have the nginx/Apache-specific fields added.

If the logs are showing up without being parsed, then they should have an ingest error in them. This is the ingest pipeline for the nginx access logs: https://github.com/elastic/beats/blob/5485125ab2e0fb3f5f5e0657b379f531113d1cde/filebeat/module/nginx/access/ingest/default.json#L58-L63. When an error occurs, it will set the error.message field.
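A quick way to find such events is to search for anything that has error.message set (a sketch; assumes the default filebeat-* index pattern):

curl -s 'localhost:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 5,
  "_source": ["message", "error.message"],
  "query": { "exists": { "field": "error.message" } }
}'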

And if you look at GET _nodes/stats/ingest, it will tell you how many events have been processed through each pipeline and how many of them failed.
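For example (a sketch; filter_path just limits the response to the per-pipeline counters):

curl -s 'localhost:9200/_nodes/stats/ingest?pretty&filter_path=nodes.*.ingest.pipelines'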

Weird, it's not showing any failures on the ingest pipelines, yet one log message from a given host and logfile will be parsed and have the fields, and the next one from the same host and same logfile won't.
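For what it's worth, a query like this pulls a couple of the unparsed nginx access events so they can be compared against parsed ones from the same file (a sketch; fileset.name and nginx.access.remote_ip are the 6.x module field names, adjust if your mapping differs):

curl -s 'localhost:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 2,
  "query": {
    "bool": {
      "filter": [ { "term": { "fileset.name": "access" } } ],
      "must_not": [ { "exists": { "field": "nginx.access.remote_ip" } } ]
    }
  }
}'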
