Can't get geoip to work

All,

Despite many, many attempts, I've not been able to make geoip work. I don't see the "clientip" field in any of my indices, nor is geoip usable when trying to create a Coordinate Map visualization (under Buckets | Geo Coordinates | Aggregation: Geohash | Field, geoip.location is available, but the geo_point option is greyed out, and the visualization produces no output). I have three conf files (input, filter, output), which I've posted here (the filter is very long; I've posted only what I believe is the relevant part): https://pastebin.com/SKVfQmBW
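For reference, a minimal Logstash geoip filter for Apache access logs usually looks something like the sketch below. This is a generic example (the clientip field comes from the standard COMBINEDAPACHELOG grok pattern), not the actual contents of my pastebin:

filter {
  grok {
    # COMBINEDAPACHELOG parses Apache access logs and produces a clientip field
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    # adds geoip.* fields, including geoip.location;
    # geoip.location must be mapped as geo_point for Coordinate Map visualizations
    source => "clientip"
  }
}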

This has been driving me crazy for the longest time, and I hope someone can help get me straightened out (with very detailed instructions, if you'd be kind enough). Please let me know if you require any additional information.

Many thanks.

Oh, and this is from my filebeat.yml:

- type: log
  enabled: true
  document_type: apache
  paths:
    - /var/log/httpd/access_log
  fields:
    log_type: apache

The apache2 module is enabled.

You've got a lot going wrong here.

  1. You're harvesting your Apache logs via the Filebeat log input.
  2. Don't use document_type; it was removed in recent Filebeat versions (the fields option you're already using is the replacement).
  3. You also said you have the apache2 module enabled. You should not be harvesting the same logs using both methods.
  4. If you don't need the additional event-processing power of Logstash, the Filebeat apache2 module and its ingest pipelines should do everything you want (see the sketch after this list).
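Roughly, the module path looks like this (a sketch; the exact commands assume a Filebeat 6.x package install managed by systemd, so adjust for your setup):

filebeat modules enable apache2
filebeat setup                  # loads the index template and Kibana dashboards
sudo systemctl restart filebeat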

Thanks for the reply, phil. I'm not surprised that I have a lot going wrong. Hence the fact that it doesn't work.

To fix this, do I remove all of the geoip-related stuff from my Logstash configs? If not, what specifically do I need to do?

Again, thanks.

It's not necessary to remove the Logstash config just yet. Just follow the documentation that I linked to for the Filebeat apache2 module to configure it. You'll need to remove the filebeat.yml settings that are harvesting the same logs as the apache2 module. Then just configure the output to be Elasticsearch instead of Logstash, and let's see what you get. Make sure you have the ingest-user-agent and ingest-geoip plugins installed. Docs here
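For reference, installing the plugins is something like this (run from the Elasticsearch home directory on every node in the cluster, then restart each node):

sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip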

Unfortunately, things now really seem to be going sideways. On one of my Linux hosts, I changed filebeat.yml to use Elasticsearch rather than Logstash as the output, as follows:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.0.101.101:9200"]

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "elastic"
  password: "mypassword"

But now, I see the following in the filebeat log:

2018-12-12T12:54:27.386-0500 ERROR instance/beat.go:824 Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://10.0.101.101:9200: Get http://10.0.101.101:9200: dial tcp 10.0.101.101:9200: connect: connection timed out]
Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://10.0.101.101:9200: Get http://10.0.101.101:9200: dial tcp 10.0.101.101:9200: connect: connection timed out]

Then, filebeat dies. Huh?

Well, your Logstash config is using http for its Elasticsearch output. I assume you haven't changed that requirement, so your filebeat.yml should be using http as well. You have it set to https.

I previously changed it to http, and it still failed. I actually copied the error (above) from when it was set to http.

What is the elasticsearch.yml file configured with in the "network" section? In particular: network.host and http.port. Is there any firewall between the two that could be blocking communication? Do you have X-Pack security enabled? If you do, please list the relevant settings, e.g. xpack.security.http.ssl.*

  • Also, it's very bad practice to use the "elastic" user for your Filebeat config. That is a special privileged account with superuser access. You should follow the Filebeat setup instructions and create a filebeat_internal user.
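As a quick check on the connectivity question above, you can test reachability from the Filebeat host with something like:

curl -v http://10.0.101.101:9200

If Elasticsearch is reachable, you get back a small JSON document with the cluster name and version; a timeout here points to a network or firewall problem rather than a Filebeat one.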

elasticsearch.yml:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.0.101.101
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.

No firewall issues.

x-pack security is not enabled.

See my previous comment edits. From the looks of your Elasticsearch config, you should be using http and not https. And if X-Pack security is not enabled, then you shouldn't be using basic auth credentials in your Filebeat config.
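Something along these lines, based on the config you posted:

output.elasticsearch:
  hosts: ["10.0.101.101:9200"]
  # protocol defaults to http, so drop the https line;
  # username/password aren't needed while X-Pack security is disabled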

I'm not sure now whether I should be posting to the Filebeat forum instead, but I think I'm making some progress here, with bigphil's help.

A test node is now set up to send Filebeat data to Elasticsearch rather than Logstash. I've commented out the filebeat.inputs entries for httpd in filebeat.yml (along with an additional small tweak or two). The Filebeat apache2 module is enabled on the test host. The ingest-user-agent and ingest-geoip plugins are installed on the Elastic cluster. Everything seems to be working fine with regard to the test node sending logs, Elasticsearch receiving them, and the logs showing in Kibana. However, geoip still doesn't seem to work. For one thing, it doesn't look like the httpd access_log information is being received at all.

I would still greatly appreciate help. I must be close on this one. Of course, I'll provide any other information you deem important.
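For example, I can run checks like these and post the output (a sketch using my cluster IP from above; the fileset.name field assumes the Filebeat module event format):

# confirm the apache2 module ingest pipelines were loaded
curl 'http://10.0.101.101:9200/_ingest/pipeline?pretty'

# confirm Apache access-log events are actually arriving
curl 'http://10.0.101.101:9200/filebeat-*/_search?size=1&pretty&q=fileset.name:access'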

Thanks!
