Despite many, many attempts, I've not been able to make geoip work. I don't see the "clientip" field in any of my indices, nor is geoip usable when trying to create a Coordinate Map (Geo Coordinates bucket > Geohash aggregation > Field: geoip.location is available, but the geo_point option is greyed out, and the visualization produces no output). I have three conf files (input, filter, output), which I've posted here (the filter is very long; I've posted only what I believe is the relevant part): https://pastebin.com/SKVfQmBW .
This has made me crazy for the longest time, and I hope someone can help get me straightened away (with very detailed instructions, if you'd be kind enough). Please let me know if you require any additional information.
You also said you have the apache2 module enabled; you should not be harvesting the same log using both methods.
If you don't need the additional event-processing power of Logstash, the Filebeat apache2 module and its ingest pipelines should do everything you want.
It's not necessary to remove the Logstash config just yet. Just follow the documentation that I linked to for the Filebeat apache2 module to configure it. You'll need to remove the filebeat.yml settings that are harvesting the same logs as the apache2 module. Then just configure the output to be Elasticsearch instead of Logstash and let's see what you get. Make sure you have the ingest-user-agent and ingest-geoip plugins installed. Docs here
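For reference, enabling the module is typically just `filebeat modules enable apache2`, after which the module config lives in modules.d/apache2.yml. A minimal sketch (the log paths are assumptions; point them at wherever your Apache logs actually live):

```yaml
# modules.d/apache2.yml -- sketch; var.paths values are assumptions
- module: apache2
  access:
    enabled: true
    var.paths: ["/var/log/httpd/access_log*"]
  error:
    enabled: true
    var.paths: ["/var/log/httpd/error_log*"]
```

With the module harvesting these paths, any filebeat.inputs entries pointing at the same files should be removed so each log is read only once.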
Unfortunately, things now seem to really be going sideways. On one of my Linux hosts, I changed filebeat.yml to use Elasticsearch, rather than Logstash, as follows:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.0.101.101:9200"]
Well, your Logstash config is using http for its Elasticsearch output. I assume you haven't changed this requirement, so your filebeat.yml should be using http as well. You have it set to https.
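In other words, the output section should talk plain http to the cluster; something like this (host and port taken from your config above):

```yaml
# filebeat.yml output section -- sketch, matching a cluster without TLS
output.elasticsearch:
  hosts: ["10.0.101.101:9200"]
  protocol: "http"   # not https, since the cluster isn't serving TLS
```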
How is the "network" section of your elasticsearch.yml configured? In particular: network.host and http.port. Is there any firewall between the two that is blocking communication? Do you have X-Pack security enabled? If you do, please list the relevant settings, i.e. xpack.security.http.ssl.*
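For comparison, here's roughly what I mean (example values only, not necessarily yours):

```yaml
# elasticsearch.yml network section -- example values
network.host: 10.0.101.101
http.port: 9200

# If X-Pack security were enabled, you'd also see settings like:
# xpack.security.enabled: true
# xpack.security.http.ssl.enabled: true
```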
Also... it's very bad practice to use the "elastic" user for your Filebeat config. That is a special privileged account with superuser access. You should follow the Filebeat setup instructions and create a filebeat_internal user.
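Once a dedicated user exists, the Filebeat output would reference it instead of elastic; something like this (the user name and environment variable are assumptions):

```yaml
# filebeat.yml -- sketch using a dedicated, least-privilege user
output.elasticsearch:
  hosts: ["10.0.101.101:9200"]
  username: "filebeat_internal"        # dedicated writer user, not the superuser "elastic"
  password: "${FILEBEAT_INTERNAL_PW}"  # keep the secret out of the file, e.g. via the Filebeat keystore
```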
See my previous comment edits. From the looks of your Elasticsearch config, you should be using http and not https. If X-Pack security is not enabled, then you shouldn't be using basic auth credentials in your Filebeat config.
Not sure now if I should be posting to the filebeat forum, but I think I'm making some progress here, with bigphil's help.
A test node is now set up to send Filebeat data to Elasticsearch, rather than Logstash. I've commented out any filebeat.inputs entries regarding httpd in filebeat.yml (along with an additional small tweak or two). The Filebeat apache2 module is enabled on the test host. The ingest-user-agent and ingest-geoip plugins are installed on the Elasticsearch cluster. Everything seems to be working fine with regard to the test node sending logs, Elasticsearch receiving them, and the logs showing in Kibana. However, geoip still doesn't seem to work. For one thing, the httpd access_log data doesn't seem to be arriving.
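One thing that may be worth checking: Kibana greys out a field in the geohash aggregation unless it is mapped as geo_point in the index, which the Filebeat index template normally takes care of (loaded with `filebeat setup --template`, or automatically when the output is Elasticsearch). If an index was created before the template existed, geoip.location stays a plain object and the Coordinate Map can't use it. A fragment of what the mapping needs to contain (the exact field path may differ in your template):

```json
"geoip": {
  "properties": {
    "location": {
      "type": "geo_point"
    }
  }
}
```

If the current index was created without this mapping, reloading the template and rolling over to a fresh index (or reindexing) should be enough to make the field selectable.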
I would still greatly appreciate help. I must be close on this one. Of course, I'll provide any other information you deem important.