How to set up Filebeat to send geoip-info to our hosted instance?

I am in the process of creating a POC presentation for my management team, hoping to show the benefits and advantages of this approach compared to what we are currently using, which is a combination of AppDynamics, the Elastic Stack, AWS monitoring, and Uptrends. What I hope to show is how we can use the Elastic Stack as a single point for monitoring and analyzing our infrastructure.

One of the items that I am having issues with is getting Filebeat to send geoip info to the hosted environment. In our current on-premises solution it's easy enough to set the pipeline property to 'geoip-info' as part of the Elasticsearch output configuration, but since we're pointing to our hosted environment, everything in that area is being ignored.

So my question is how do I pass that geoip-info from our local filebeat install to the hosted environment?
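For context, the relevant part of the on-premises filebeat.yml that works looks roughly like this (a sketch; the cloud values shown are placeholders, not our real credentials):

```yaml
# Sketch of the relevant filebeat.yml settings -- values are placeholders.
# cloud.id / cloud.auth come from the Elastic Cloud console for the
# hosted deployment; on premises these are left commented out.
cloud.id: "my-deployment:placeholder-cloud-id"
cloud.auth: "elastic:placeholder-password"

output.elasticsearch:
  # On premises we point hosts at the local cluster; with cloud.id set,
  # cloud.id overrides the hosts list.
  hosts: ["http://localhost:9200/"]
  # This is the setting in question: route events through the
  # geoip-info ingest pipeline on the Elasticsearch side.
  pipeline: geoip-info
```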

Thanks,
Bill

Not sure I follow here. Why does that mean it's being ignored? What is different with the config?

I think I misspoke. Here are a couple of excerpts from our filebeat.yml file:

#============================= Elastic Cloud ==================================

#== These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

#== The cloud.id setting overwrites the output.elasticsearch.hosts and
#== setup.kibana.host options.
#== You can find the cloud.id in the Elastic Cloud web UI.
cloud.id: xxxx

#== The cloud.auth setting overwrites the output.elasticsearch.username and
#== output.elasticsearch.password settings. The format is <user>:<pass>.
cloud.auth: xxxx

#================================ Outputs =====================================

#== Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  #== Array of hosts to connect to.
  hosts: ["http://localhost:9200/"]
  pipeline: geoip-info
  index: "filebeat-%{[fields.doc_type]}-%{+yyyy.MM.dd}"

setup.template.name: xxxx
setup.template.pattern: xxxx

After posting this I was reading further on the topic and realized that the only thing that gets ignored is the 'hosts' property.

I've defined the geoip-info pipeline following the process outlined in https://www.elastic.co/guide/en/beats/filebeat/7.11/filebeat-geoip.html

PUT _ingest/pipeline/geoip-info
{
  "description": "Add geoip info",
  "processors": [
    {
      "geoip": {
        "field": "client.ip",
        "target_field": "client.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "source.ip",
        "target_field": "source.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "destination.ip",
        "target_field": "destination.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "server.ip",
        "target_field": "server.geo",
        "ignore_missing": true
      }
    },
    {
      "geoip": {
        "field": "host.ip",
        "target_field": "host.geo",
        "ignore_missing": true
      }
    }
  ]
}
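One way I've been checking the pipeline itself, independent of Filebeat, is the simulate API in Kibana Dev Tools (the IP below is just an example value):

```json
POST _ingest/pipeline/geoip-info/_simulate
{
  "docs": [
    {
      "_source": {
        "source": { "ip": "8.8.8.8" }
      }
    }
  ]
}
```

If the pipeline is working, the response should show a `source.geo` object added to the document; if not, the problem is in the pipeline rather than in Filebeat's output configuration.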

I'm still not seeing any geoip info being sent through from Filebeat.

Thanks,
Bill

Likely because of;

And it looks like you have the cloud.id set.

Couple things...

It is hard to tell, but is your yml properly formatted? (In the future, please format yml code with the </> button.)

Are the settings under output.elasticsearch properly formatted?
Is the expected index name being used?
Are the documents showing up at all or just not the geoip fields?
Did you create your own mapping, with the correct mapping types?

Yes the yml is formatted correctly and I'll remember that hint for the future.

I actually took a filebeat.yml file from our on-premises installation, which of course doesn't have the cloud section filled out. The on-premises configuration is working, of course.

The transaction payloads are being sent from Filebeat into the hosted Elasticsearch, but without the geoip information included.

Thanks,
Bill

One thing you can do is get the Elasticsearch endpoint from the cloud console and fill it in, with the username and password, under the output.elasticsearch section just like you would on prem. I suspect there is something simple being missed.

Don't forget to comment out the cloud.id and cloud.auth again

It will look something like this

https://1234564789cb3bd74c467.us-west1.gcp.cloud.es.io:9243

You can take off the port and it will automatically use 443.

That way you can configure it exactly like on prem... give it a try and let us know.
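The output section would then look something like this (a sketch; the endpoint is the example above, and the username and password are placeholders):

```yaml
output.elasticsearch:
  # Cloud endpoint from the console; port omitted, so 443 is used.
  hosts: ["https://1234564789cb3bd74c467.us-west1.gcp.cloud.es.io"]
  username: "elastic"        # placeholder -- your deployment user
  password: "changeme"       # placeholder -- your deployment password
  pipeline: geoip-info
```

With cloud.id and cloud.auth commented out, this behaves just like an on-prem output block.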

Curious, what version are you using?

One other thought is to actually remove the hosts: ["http://localhost:9200/"] line when using cloud.id. I have Beats using pipelines with cloud.id, but I don't leave that line in; perhaps there is a bug with that.

Sorry for the delay in responding but as part of Production Support I often get called off on other issues.

So for your first recommendation, I commented out the cloud.id and cloud.auth lines, configured the output.elasticsearch host value to be that of my Elastic endpoint, and started Filebeat. I saw an error where it couldn't connect to the endpoint: access denied. So I uncommented the two cloud values from above and restarted Filebeat, and this time the connection was successfully made, but I wasn't seeing any transactions in the Discover section. Going into the Metrics section, I could see the Filebeat server was processing transactions, but they weren't getting indexed into Elasticsearch.

I changed the host back to http://localhost:9200, which the architect I'm working with on the Proof of Concept recommended, as he stated that all it will do when it hits that line is throw an error that will be ignored anyway. I started Filebeat up again and traffic is being indexed into Elasticsearch again.

-Bill

Did you add the username and password fields to the output.elasticsearch section? You would have to do that.

Not sure I am quite following what the issue was, but it sounds like you are working... good.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.