Kibana 4 and geoip

P.S. if this makes a difference, I was testing at first using Logstash to gather and send documents directly to Elasticsearch. I then switched to using Filebeat to gather the files, sending them to Logstash, which in turn sends them to ES.

It's possible that the geoip stuff broke when I switched to using Filebeat, although I'm not sure why that'd be the case! All I know is that at one point early on in my testing the geoip stuff worked fine in Kibana; now it doesn't.

      "location": {
        "type": "double"
      },

This should've said geo_point, not double. What does your Logstash configuration look like?
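For comparison, here's roughly what that part of the mapping should look like once the template is applied correctly (a sketch; the exact field path depends on your template, but this mirrors the default logstash template's geoip section):

```json
{
  "geoip": {
    "dynamic": true,
    "properties": {
      "location": {
        "type": "geo_point"
      }
    }
  }
}
```

Note that the mapping of an existing index can't be changed from `double` to `geo_point` in place; you'd need to fix the template and reindex (or wait for the next day's index to be created).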

Here it is (albeit spread out over different files in conf.d)

input {
  beats {
    host => "10.0.3.1"
    port => 5044
  }
}
filter {
  if [type] == "nginx_access" {
    grok {
      patterns_dir => ["/home/csapp/.logstash/patterns"]
      match => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    geoip {
      source => "clientip"
      database => "/home/csapp/.logstash/GeoLiteCity.dat"
    }
  }
}
output {
  if [type] == "nginx_access" and [response] =~ /^(5\d\d|4\d\d)/ {
    elasticsearch {
      hosts => ["*****:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
      user => "admin"
      password => "*******"
    }
  }
}
 index => "nginx_access-%{+YYYY.MM.dd}"

The index template that ships with Logstash applies to logstash-* indexes only. Since you've changed the index name so it no longer matches this pattern, you have to point Logstash to an index template that does match the names of your indexes. You can just make a copy of the default template, adjust the index name pattern, and configure the elasticsearch output to use your file instead.
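For example, in the copied template you'd change the pattern to something like `"template": "nginx_access-*"`, then point the output at it. This is a sketch; the file path and template name below are placeholders you'd choose yourself:

```
output {
  elasticsearch {
    hosts => ["*****:9200"]
    index => "nginx_access-%{+YYYY.MM.dd}"
    # hypothetical location of your copied/adjusted template
    template => "/home/csapp/.logstash/nginx_template.json"
    template_name => "nginx_access"
    template_overwrite => true
  }
}
```

With `template_overwrite => true`, Logstash will replace any existing template of that name in Elasticsearch on startup, which is handy while you're iterating on the mapping.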