Change GEO.LOCATION to GEO_POINT data type

All,

I am collecting syslogs from my Cisco ASA firewall. Logstash is getting info just fine. I can discover and visualize current data as expected.

I cannot get geo mapping to work, and I cannot find a concise method (understandable to me) for changing the geolocation field from a number type to geo_point. Because this field is not a geo_point, I cannot build geo maps from the geo data.

How do I make this field into a GEO_POINT?

Thanks!!!
-Mike M

You need to look at your template/mapping and make sure things are aligned.
So show what you already have (from the _template or _mapping endpoints) and we can go from there.
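
For reference, a way to pull both (a sketch, assuming Elasticsearch on the default localhost:9200):

curl -XGET 'localhost:9200/_template?pretty'
curl -XGET 'localhost:9200/logstash-*/_mapping?pretty'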

I am unsure how to determine this but will begin educating myself now. Thanks for the tip!

I will report back.

Thanks!!!
-Mike M

By issuing the command:
curl -XGET localhost:9200/_template

I can see the following in the results:
"location":{"type":"geo_point","doc_values":true}

But does the index pattern of that template match your index name?

Not sure how to tell...

You've seen https://www.elastic.co/guide/en/elasticsearch/reference/5.1/indices-templates.html?

In the first example, the "template": "te*" part is a pattern that is matched against the index name. So indices named test or template will both match, but today-20171231 won't.

So if you are looking at the logstash-* template and your index is cisco-$DATE, then that would explain things.
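
For illustration, a minimal template whose pattern would catch a cisco-$DATE index might look like this (a sketch in the 5.x syntax from the docs above; the template name and the cisco-* pattern are examples only, and re-PUT-ing under the same name is also how you modify a template later):

curl -XPUT 'localhost:9200/_template/cisco' -d '
{
  "template": "cisco-*",
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point", "doc_values": true }
          }
        }
      }
    }
  }
}'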

Using Sense I see:

[screenshot omitted]

And further down:

[screenshot omitted]

My confusion is around how to:
- Apply a template at index creation time
- Modify the template

Your time and input are very much appreciated.

Thanks!
-Mike M

I made these changes based on the above. Should this logstash.conf do it? Kibana Discover is not showing any records at all for logstash-*.

input {
  beats {
    port => 5044
  }
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "logstash"
  }
  beats {
    port => 5045
    type => "ex_msg_trk"
  }
  file {
    path => "/var/log/remote-hosts/BRT1-VHOST1.RETREATHEALTHCARE.ORG/BRT1-VHOST1.RETREATHEALTHCARE.ORG-20170127.log"
    start_position => "beginning"
  }
}

filter {

if [type] == "ex_msg_trk" {

    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:date-time})?,(%{IPORHOST:client-ip})?,(%{IPORHOST:client-hostname})?,(%{IPORHOST:server-ip})?,(%{IPORHOST:server-hostname})?,(%{GREEDYDATA:source-context})?,(%{GREEDYDATA:connector-id})?,(%{WORD:source})?,(%{WORD:event-id})?,(%{NUMBER:internal-message-id})?,(%{GREEDYDATA:message-id})?,(%{GREEDYDATA:recipient-address})?,(%{GREEDYDATA:recipient-status})?,(%{NUMBER:total-bytes})?,(%{NUMBER:recipient-count})?,(%{GREEDYDATA:related-recipient-address})?,(%{GREEDYDATA:reference})?,(%{GREEDYDATA:message-subject})?,(%{GREEDYDATA:sender-address})?,(%{GREEDYDATA:return-path})?,(%{GREEDYDATA:message-info})?,(%{WORD:directionality})?,(%{GREEDYDATA:tenant-id})?,(%{IPORHOST:original-client-ip})?,(%{IPORHOST:original-server-ip})?,(%{GREEDYDATA:custom-data})?" }
    }

    mutate {
        convert => [ "total-bytes", "integer" ]
        convert => [ "recipient-count", "integer" ]
        split => [ "recipient-address", ";" ]
        split => [ "source-context", ";" ]
        split => [ "custom-data", ";" ]
    }

}
if [type] == "logstash" {

        # Extract fields from each of the detailed message types.
        # The patterns provided below are included in the core of Logstash 1.4.2.
        grok {
                match => [
                        "message", "%{CISCOFW106001}",
                        "message", "%{CISCOFW106006_106007_106010}",
                        "message", "%{CISCOFW106014}",
                        "message", "%{CISCOFW106015}",
                        "message", "%{CISCOFW106021}",
                        "message", "%{CISCOFW106023}",
                        "message", "%{CISCOFW106100}",
                        "message", "%{CISCOFW110002}",
                        "message", "%{CISCOFW302010}",
                        "message", "%{CISCOFW302013_302014_302015_302016}",
                        "message", "%{CISCOFW302020_302021}",
                        "message", "%{CISCOFW305011}",
                        "message", "%{CISCOFW313001_313004_313008}",
                        "message", "%{CISCOFW313005}",
                        "message", "%{CISCOFW402117}",
                        "message", "%{CISCOFW402119}",
                        "message", "%{CISCOFW419001}",
                        "message", "%{CISCOFW419002}",
                        "message", "%{CISCOFW500004}",
                        "message", "%{CISCOFW602303_602304}",
                        "message", "%{CISCOFW710001_710002_710003_710005_710006}",
                        "message", "%{CISCOFW713172}",
                        "message", "%{CISCOFW733100}"
                ]
        }

        # Parse the syslog severity and facility
        syslog_pri { }

        # Do a DNS lookup for the sending host;
        # otherwise the host field will contain an
        # IP address instead of a hostname
        dns {
            reverse => [ "host" ]
            action => "replace"
        }

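        # GeoIP city lookup on the source IP; the two add_field calls below
        # build [geoip][coordinates] as a [longitude, latitude] array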
        geoip {
            source => "src_ip"
            target => "geoip"
            database => "/opt/logstash/databases/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }

        # Do a GeoIP lookup for the ASN/ISP information
        geoip {
            database => "/opt/logstash/databases/GeoIPASNum.dat"
            source => "src_ip"
        }
}
if [path] =~ "VHOST" {
   mutate { replace => {"type" => "esxi_host"}}
}
}

output {
  if [type] == "logstash" {
      elasticsearch {
         hosts => ["127.0.0.1:9200"]
         index => "%{type}-%{+YYYY.MM.dd}"
         manage_template => false
         document_type => "logstash"
      }
  }

  if [type] == "ex_msg_trk" {
      elasticsearch {
         hosts => ["127.0.0.1:9200"]
         index => "logstash_exch-%{+YYYY.MM.dd}"
      }
  }

  if [type] == "esxi_host" {
      elasticsearch {
         hosts => ["127.0.0.1:9200"]
         index => "%{type}-%{+YYYY.MM.dd}"
      }

  } else {
      elasticsearch {
         hosts => ["localhost:9200"]
         sniffing => true
         manage_template => false
         index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
         document_type => "%{[@metadata][type]}"
      }
  }
}

It'll help if you can edit that and wrap the config in code formatting - </> button.

Done, thanks for the tip

Debug tells me this at the end of the 400 error:

"type"=>"parse_exception", "reason"=>"geo_point expected"

You shouldn't need that, it does it by default.

Removed the indicated lines.

Is it a mapping issue? Not sure how to fix that...
{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"parse_exception", "reason"=>"geo_point expected"}}}}, :level=>:warn, :file=>"logstash/outputs/elasticsearch/common.rb", :line=>"119", :method=>"submit"}

Which index is this going into? Does it have a matching template?
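
One way to check, assuming the default host/port (compare the index name against each template's pattern):

curl -XGET 'localhost:9200/_cat/indices?v'
curl -XGET 'localhost:9200/_template?pretty'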

Based on the output it goes to logstashit-*, correct?

elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstashit-%{+YYYY.MM.dd}"
    manage_template => true
    template_name => "logstash*"
#   document_type => "logstash"
}

I am not sure how to create the associated template. I could send it to logstash-*

Unless your index name matches a template's pattern, it won't pick up the default logstash-* template. So you may want to copy the existing one and give it that pattern.
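
A rough sketch of that copy (assuming the default template is installed under the name logstash; note the GET response wraps the body in an outer object keyed by the template name, which needs to be stripped before the PUT):

curl -XGET 'localhost:9200/_template/logstash?pretty' > my-template.json
# edit my-template.json: remove the outer wrapper and change
# "template": "logstash-*" to a pattern matching your index name
curl -XPUT 'localhost:9200/_template/logstashit' -d @my-template.json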

Also, that index naming pattern can be easily misread :stuck_out_tongue:

So if I send it to an index matching logstash-*, it would match, correct?

I have this now and am getting the error shown below the code.

output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        manage_template => true
        template_name => "logstash*"
#       document_type => "logstash"
    }
}

Error Text:
"error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [geoip.latitude] of different type, current_type [double], merged_type [float]"}}}, :level=>:warn, :file=>"logstash/outputs/elasticsearch/common.rb", :line=>"119", :method=>"submit"}

WooHoo! It's working.

Created a new index and template.
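
For anyone following along: the error above was a mapping conflict (geoip.latitude was already mapped as double in the existing index while the incoming mapping declared float), and a live mapping can't be changed in place, which is why a fresh index and template resolved it. A way to inspect the live field mapping, assuming the default host/port:

curl -XGET 'localhost:9200/logstash-*/_mapping/field/geoip.latitude?pretty'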

Thanks for leading me down the path. It was a good educational process.
