2018-03-21T16:45:48.131Z ERROR pipeline/output.go:74 Failed to connect: Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip
All the ES nodes in the cluster that Filebeat is pointing at are (1) running v6.2.3 and (2) have both plugins installed.
Plugin information:
Name: discovery-ec2
Description: The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism.
Version: 6.2.3
Native Controller: false
Requires Keystore: false
Extended Plugins: []
Plugin information:
Name: ingest-geoip
Description: Ingest processor that uses looksup geo data based on ip adresses using the Maxmind geo database
Version: 6.2.3
Native Controller: false
Requires Keystore: false
Extended Plugins: []
Plugin information:
Name: ingest-user-agent
Description: Ingest processor that extracts information from a user agent
Version: 6.2.3
Native Controller: false
Requires Keystore: false
Extended Plugins: []
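For what it's worth, the same per-node view can also be pulled from the cluster itself; a quick check (assuming Elasticsearch is reachable on localhost:9200 without authentication):

curl -s 'http://localhost:9200/_cat/plugins?v&h=name,component,version'

Each row lists a node name and one plugin installed on it, so both ingest-geoip and ingest-user-agent should show up once per node.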
Can you please perform a GET _nodes/ingest request to Elasticsearch and share the output? This is what Filebeat uses to confirm that each node has the required ingest processors.
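For example, something like this should work (assuming Elasticsearch is listening on localhost:9200 with no authentication; the filter_path parameter just trims the response down to the relevant fields):

curl -s 'http://localhost:9200/_nodes/ingest?pretty&filter_path=nodes.*.name,nodes.*.ingest.processors'

Every node Filebeat might connect to should list both the geoip and user_agent processor types.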
I grep'd out the relevant bits; if you need the whole output I can provide it. The Kibana node is just a coordinating node: non-master, non-data, non-ingest.
Well, having changed nothing on my end, I just tried starting the Filebeat agents again and they're no longer complaining. Not sure what happened, but this is no longer an issue (for me, at least).
On another note, I've had hit-or-miss luck getting Apache and nginx logs to actually get parsed properly in Kibana. This must be a FAQ, but are there any obvious places to look to figure out why that's not working? The logs are showing up; they just don't have the nginx/Apache-specific fields added.
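One thing that can help here (a sketch, not official guidance; the pipeline ID below is only an example and depends on the Filebeat version and module): list the pipelines Filebeat loaded, then run a raw log line through the nginx access pipeline with the simulate API to see where parsing stops.

curl -s 'http://localhost:9200/_ingest/pipeline?pretty' | grep filebeat

curl -s -H 'Content-Type: application/json' -X POST 'http://localhost:9200/_ingest/pipeline/filebeat-6.2.3-nginx-access-default/_simulate?pretty' -d '{"docs":[{"_source":{"message":"PASTE_ONE_RAW_ACCESS_LOG_LINE_HERE"}}]}'

If the grok processor rejects the line, the simulate response will say so for that specific document.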
Weird: the ingest pipelines aren't showing any failures, yet one log message from a given host and logfile will be parsed and have the fields, while the next one from the same host and logfile won't.
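One more place to look (a sketch, again assuming localhost:9200 with no auth): the per-pipeline ingest stats include a failed counter, which shows whether the pipeline is actually rejecting documents.

curl -s 'http://localhost:9200/_nodes/stats/ingest?pretty&filter_path=nodes.*.ingest.pipelines'

If the failed count stays at 0 while unparsed documents keep arriving, one possibility is that those events are reaching Elasticsearch without being routed through the pipeline at all rather than failing inside it.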