Geoip.location mapped as double

Hello, I've searched for this problem and seen it come up pretty often, but the solutions I've found don't seem to work for me. Basically, when I create a new index via a Logstash config (daily) with a template, the template doesn't seem to take effect for those indices. I have manage_template set to false. It DOES seem to take effect for other indices that are created by other outputs. I set up the template so that the geoip.location field is mapped as geo_point, deleted the old index, restarted the cluster and Logstash, and then generated new logs to re-create the index. geoip.location keeps coming up as double. I also tried creating a new index with the template applied, but I get the same result. Anybody have any idea what I'm doing wrong?
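For reference, this is roughly how I'm checking that the field is still coming up as double after the index gets re-created (the hostname and date here are just placeholders):

curl -XGET 'http://hostname:9200/sef.test.-2015.06.17/_mapping?pretty'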

my logstash filter config:

        if [path] =~ "access_log" {
                grok {
                        match => { "message" => "%{COMBINEDAPACHELOG}" }
                }
                if [clientip] {
                        geoip {
                                source => "clientip"
                                database => "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-0.1.7/vendor/GeoLiteCity-2013-01-18.dat"
                        }
                }
        }

and here's the output:

        if [path] =~ "access_log" {
                elasticsearch {
                        cluster => "elasticsearchseftest"
                        manage_template => false
                        host => "hostname"
                        index => "sef.test.-%{+YYYY.MM.dd}"
                        protocol => "http"
                        user => "luser"
                        password => "password"
                        template => "/etc/logstash/conf.d/templates/logstash.json"
                        template_name => "sef.test.-*"
                }
        }

This is the template:

{
    "template": "sef.test.-*",
    "settings": {
        "index.refresh_interval": "5s"
    },
    "mappings": {
        "_default_" | {
            "_all" : {"enabled" : true},
            "dynamic_templates" : [ {
              "string_fields" : {
                "match" : "*",
                "match_mapping_type" : "string",
                "mapping" : {
                  "type" : "string", "index" : "analyzed", "omit_norms" : true,
                    "fields" : {
                      "raw" : ("type": "string", "index" : "not_analyzed", "ignore_above" : 256}
                    }
                }
              }
            } ],
            "properties" : {
                "@version": { "type": "string", "index": "not_analyzed" },
            }
            "geoip"  : {
                "type" : "object",
                    "dynamic" : true,
                "properties" : {
                    "location" : { "type" : "geo_point"}
                }
            }
        }
    }
}

Anybody have any idea what I'm doing wrong? I should also note that whenever the index gets created, the mapping type isn't _default_ but logs, and I have no idea why that's the case.

Having a wildcard in the template name is probably a bad idea. In fact, naming it something without punctuation is probably a good idea. The matching is done by the pattern inside the template itself, so the name can be anything. I recommend not using special characters at all, as the API will have to translate them to %20 and other such escaped characters.

Not sure if this is your problem, but it might have something to do with it.
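For instance, something like this in the output block (the name here is just an example):

        template_name => "seftest"

The pattern that decides which indices the template actually applies to still comes from the "template" field inside the JSON file itself.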

Thanks, I'll try that out! Do you have any idea where I can look to see why the mapping is called logs? Nowhere in the config, as far as I can see, should that be applied. I commented out everything to do with templates in the output, set the name of the index to be created to test.-%{+YYYY.MM.dd}, and moved the template file to a different directory just in case. When the new index gets created, I see the following when I do a GET _mappings?pretty:

{
  "test.-2015.06.17" : {
    "mappings" : {
      "logs" : {
        "properties" : {
          "@timestamp" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "@version" : {
            "type" : "string"
          },
          "agent" : {
            "type" : "string"
          },
          "auth" : {
            "type" : "string"
          },
          "bytes" : {
            "type" : "string"
          },
          "clientip" : {
            "type" : "string"
          },
          "geoip" : {
            "properties" : {
              "area_code" : {
                "type" : "long"
              },
              "city_name" : {
                "type" : "string"
              },
              "continent_code" : {
                "type" : "string"
              },
              "country_code2" : {
                "type" : "string"
              },
              "country_code3" : {
                "type" : "string"
              },
              "country_name" : {
                "type" : "string"
              },
              "dma_code" : {
                "type" : "long"
              },
              "ip" : {
                "type" : "string"
              },
              "latitude" : {
                "type" : "double"
              },
              "location" : {
                "type" : "double"
              },
              "longitude" : {
                "type" : "double"
              },
              "postal_code" : {
                "type" : "string"
              },
              "real_region_name" : {
                "type" : "string"
              },
              "region_name" : {
                "type" : "string"
              },
              "timezone" : {
                "type" : "string"
              }
            }
          },
          "host" : {
            "type" : "string"
          },
          "httpversion" : {
            "type" : "string"
          },
          "ident" : {
            "type" : "string"
          },
          "message" : {
            "type" : "string"
          },
          "path" : {
            "type" : "string"
          },
          "referrer" : {
            "type" : "string"
          },
          "request" : {
            "type" : "string"
          },
          "response" : {
            "type" : "string"
          },
          "syslog_facility" : {
            "type" : "string"
          },
          "syslog_facility_code" : {
            "type" : "long"
          },
          "syslog_severity" : {
            "type" : "string"
          },
          "syslog_severity_code" : {
            "type" : "long"
          },
          "timestamp" : {
            "type" : "string"
          },
          "verb" : {
            "type" : "string"
          }
        }
      }
    }
  },

I can't for the life of me figure out why this is the case??

Yep!

_default_ simply means "these mappings will be applied to any type that is not otherwise mapped." In Elasticsearch, one of the metadata fields is _type. In the Elasticsearch output block, Logstash assigns whatever you put in the type field to _type. If you do not assign a value to type in Logstash, the default type will be logs, because all documents in Elasticsearch require a type.
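For example, if you want something other than logs, you can set type yourself on the input (the path and value here are just examples):

        input {
                file {
                        path => "/var/log/httpd/access_log"
                        type => "apache_access"   # ends up as _type in Elasticsearch instead of logs
                }
        }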

Oh ok thanks for the explanation! Unfortunately, changing the template name had no effect. I can't figure out why the template isn't being applied to the newly created index!

You should install the Kopf plugin and view the index templates already in existence in your browser. This may help you see what's going on more visually.
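If you'd rather check from the command line, listing the registered templates with curl should show the same thing:

curl -XGET 'localhost:9200/_template?pretty'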

Thanks for the suggestion! Using Kopf, I was able to create a mapping that matched the index name created with the template pattern. So with Kopf, is it doing a PUT to create that mapping? Any idea why using Logstash to create the index and apply the mapping from the .json file didn't work? I was under the impression that Logstash would do that when it created the index for the first time. There's an index template there that actually matches the pattern of one of the other indices, but I never explicitly created that. I'm not sure why that one was created but the one I was testing on was not.

I wasn't actually suggesting you use Kopf to create the mapping, though I'm glad to hear that worked for you.

I was rather suggesting you look at the template in Kopf and see if that had issues, or if the template was accurately uploaded. It seems that you did look at that, and it suggests that the old template (or some other iteration) was not deleted properly. Elasticsearch will merge templates that both match the same index pattern. Without setting a priority, they may stomp on each other. This was why I suggested looking at what templates were there in Kopf.
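If two templates really do end up matching the same pattern, the order setting controls which one wins where they conflict (higher order is applied last). A rough sketch, with the template name here being just a placeholder:

curl -XPUT 'localhost:9200/_template/seftest_geoip' -d '
{
    "template" : "sef.test.-*",
    "order" : 1,
    "mappings" : {
        "_default_" : {
            "properties" : {
                "geoip" : {
                    "properties" : {
                        "location" : { "type" : "geo_point" }
                    }
                }
            }
        }
    }
}'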

Ah ok. When I looked in Kopf, the template that I tried to set with the json file didn't actually appear in the list. I'm guessing it did merge, but none of the other index patterns are the same? There were two templates: one called logstash with the prod-* pattern, and one for Marvel. Will Elasticsearch merge templates with the same file name, or does it only care about the index/template pattern? That's the only thing I can think might have happened, because in Kopf it shows logstash as the name of the template, and the name of the json file is logstash as well.

That was my suspicion, that the template did not get uploaded. The merging happens between actual, uploaded templates. Elasticsearch does not look at the data you're importing and automagically merge with another, similar template.

Why didn't the template upload? Perhaps the JSON was not complete, or the name collided with another template of the same name (and overwrite was not set). Perhaps the name didn't work. You could always do

curl -XPUT localhost:9200/_template/NAME -d @filename.json

Where NAME is the template name, and filename.json is the template you were trying to upload through Logstash. If it fails, you'll see the error message indicating why.
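If the upload succeeds, a quick GET against the same endpoint should show the stored template:

curl -XGET localhost:9200/_template/NAME?pretty

And if you suspect the JSON itself isn't valid, you can sanity-check the file before uploading, for example with:

python -m json.tool /etc/logstash/conf.d/templates/logstash.json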