Getting geohash / geoip data from Apache

I'm mining apache2 logs using Filebeat. The data is being put into 'apache2-*' indices via Logstash.
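
Roughly, the relevant part of my Logstash pipeline looks like this (a minimal sketch; the host is a placeholder):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        # Daily indices, e.g. apache2-2018.12.12
        index => "apache2-%{+YYYY.MM.dd}"
      }
    }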

If I look at the data using the Discover function, I can see geoip.location.lat/lon and location.lat/lon, but attempts to use this data in Kibana are fruitless. Creating a map visualization gives me the error message "No Compatible Fields: The "apache2*" index pattern does not contain any of the following field types: geo_point".

What do I need to do with my data to use it this way? Have I made a configuration error somewhere along the line?

What is the mapping for the index in question?

Here's one:

      "geoip" : {
        "location": {
          "properties": {
            "lat": {
              "type": "float"
            },
            "lon": {
              "type": "float"
            }
          }
        },

OK, you need those combined and mapped as a single geo_point.

Are you using the geoip filter in Logstash? What does the config look like?
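
For reference, a minimal geoip filter sketch (assuming the client IP has been parsed into a field named clientip):

    filter {
      geoip {
        # Field holding the IP address to look up
        source => "clientip"
      }
    }

By default the filter writes its results into a geoip field, which is what your mapping suggests is happening.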

That mapping is incorrect: location should be a property of geoip. Not only that, but as @warkolm said, it needs to be collapsed into a geo_point type.

      "geoip" : {
        "dynamic" : true,
        "properties" : {
          "ip" : {
            "type" : "ip"
          },
          "location" : {
            "type" : "geo_point"
          },
          "latitude" : {
            "type" : "half_float"
          },
          "longitude" : {
            "type" : "half_float"
          }
        }
      } 

If you're using the geoip filter in Logstash, this will work. The easiest thing to do may be to copy the Logstash index template and create a new template that matches your index pattern, i.e.:

Using the Dev Console:

GET _template/logstash

Copy the JSON output and edit the index_patterns setting to be something like "apache2-*" so it matches your new index, then

PUT _template/logstash-apache

paste the copied JSON, with the new index pattern, directly below the PUT and run it.
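
Schematically, the final request looks like this (body abbreviated; everything other than index_patterns stays exactly as you copied it):

    PUT _template/logstash-apache
    {
      "index_patterns": ["apache2-*"],
      "settings": { ... },
      "mappings": { ... }
    }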

Now when a new apache2-* index is created, it should use this mapping and be ready to go, assuming you're using the geoip filter in Logstash.


OK! I followed your (great) instructions and created a new template. (BTW, I looked high and low in the online documentation and didn't find this information... hint.) Once it was created, I took the further step of deleting the existing apache2 logs from Elasticsearch and Kibana.

A new apache2 index was promptly created, but it doesn't seem to have the appropriate data type:

      "geoip" : {
        "properties" : {
          "city_name" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "continent_code" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "country_code2" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "country_code3" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "country_name" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "dma_code" : {
            "type" : "long"
          },
          "ip" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "latitude" : {
            "type" : "float"
          },
          "location" : {
            "properties" : {
              "lat" : {
                "type" : "float"
              },
              "lon" : {
                "type" : "float"
              }
            }
          },
          "longitude" : {
            "type" : "float"
          },
          "postal_code" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "region_code" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "region_name" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "timezone" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }

What is that JSON from, the index mapping or the template? If it's the mapping, maybe you have multiple templates and one is incorrect? Or the original Logstash template you copied is not correct? The geoip.location field in your original Logstash template should look like what I pasted, unless you had previously modified it. Double-check the templates.

The JSON is from GET /apache2-2018.12.12/_mapping.

The only modifications I made when creating the index template were to rename it and remove the "logstash" property so it would be accepted.

I'm sure this is a UFU; I just need to figure out what step I missed.

Are you using the Apache module in Filebeat, or just pointing Filebeat at the Apache logs and then parsing with Logstash? Unless there is additional event processing that is only available in Logstash, it's easier to just use the Apache module in Filebeat; the work is already done for you via the pre-built ingest pipelines. Did you double-check the template to make sure it's correct?

GET _template/logstash*

Assuming your new template begins with logstash.

I checked the template before I copied it over. Indeed it does include the geoip data type in question.

I am using the apache2 module in Filebeat.

In the Logstash config, do I need to specify anything in particular so the indices will use the template?

Interesting... well, the Apache module in Filebeat should already be taking care of the geoip information for you; it's built into the ingest pipeline. You need to make sure the ingest-geoip and ingest-user-agent plugins are installed on your Elasticsearch instances. Just use the default index names at first to see if you can get the data in correctly. It's pointless to send the data through a Logstash instance unless you need additional event processing that's only available via Logstash; the ingest node in Elasticsearch has quite a few processors already, and if you're using one of the built-in modules, all of the work is done for you. Point your Filebeat output at an Elasticsearch host instead, using the default index names, and let's see if it works.
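
A minimal sketch of that change in filebeat.yml (the host is a placeholder; only one output may be enabled, so remove or comment out any output.logstash section):

    output.elasticsearch:
      hosts: ["localhost:9200"]

The plugins can be installed on each Elasticsearch node with bin/elasticsearch-plugin install ingest-geoip and bin/elasticsearch-plugin install ingest-user-agent.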

Is there a way to determine whether or not the template is being used?

At index creation time, any templates that match your index pattern will be loaded (in order from lowest to highest order number).
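
For example, you can list every template along with its index_patterns and order:

    GET _template

and then confirm the result by checking the mapping of a freshly created index:

    GET apache2-*/_mapping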


Solved!

I updated the Logstash config to include "template_name" => "apache2" in the output section. Looking at the data confirms that I'm getting geoip points now.
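
For anyone following along, the output section now looks roughly like this (a sketch; the host is a placeholder):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "apache2-%{+YYYY.MM.dd}"
        # Apply the "apache2" template to these indices
        template_name => "apache2"
      }
    }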

Thank you for sticking with me on this. Your help was invaluable.
