Geo_point expected issue

Hi all,

I'm hitting my head against a wall with a geo_point issue; I am receiving errors like the following from Logstash:

[2017-05-03T14:43:22,014][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ossec-2017.05.03", :_type=>"ossec", :_routing=>nil}, 2017-05-03T13:43:18.538Z x-1 %{message}], :response=>{"index"=>{"_index"=>"ossec-2017.05.03", "_type"=>"ossec", "_id"=>"AVvOjQf_pg9iTpVc1m9v", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"parse_exception", "reason"=>"geo_point expected"}}}}}

The software versions I am running at the moment are:

logstash-5.3.2-1 (following upgrade)
elasticsearch-5.3.2-1 (following upgrade)

An example document being sent to Elasticsearch is the following:

    {
              "srcip" => "x.x.x.x",
             "offset" => 14808041,
              "count" => 1,
         "input_type" => "log",
               "rule" => {
             "firedtimes" => 538,
                "PCI_DSS" => [
                [0] "6.5",
                [1] "11.4"
            ],
                 "groups" => [
                [0] "web",
                [1] "accesslog",
                [2] "attack"
            ],
            "description" => "Web server 400 error code.",
             "AlertLevel" => 5,
                  "sidid" => 31101
        },
            "decoder" => {
            "name" => "web-accesslog"
        },
             "source" => "/var/ossec/logs/alerts/alerts.json",
               "type" => "ossec",
                "url" => "/rss/catalog/notifystock/",
           "full_log" => "xxxxx",
               "tags" => [
            [0] "ossec",
            [1] "xxx",
            [2] "beats_input_codec_json_applied"
        ],
         "@timestamp" => 2017-05-03T13:31:29.000Z,
            "AgentIP" => "x.x.x.x",
           "@version" => "1",
               "beat" => {
            "hostname" => "x-x-01",
                "name" => "x-x-01"
        },
               "host" => "x-x-01",
           "location" => "/var/log/nginx/access.log",
            "AgentID" => "014",
                 "id" => "401",
        "GeoLocation" => {
                  "timezone" => "Europe/Paris",
                        "ip" => "x.x.x.x",
                  "latitude" => 48.9394,
               "coordinates" => [
                [0] 2.2367,
                [1] 48.9394
            ],
            "continent_code" => "EU",
                 "city_name" => "Argenteuil",
             "country_code2" => "FR",
              "country_name" => "France",
             "country_code3" => "FR",
               "region_name" => "Val d'Oise",
               "postal_code" => "95100",
                 "longitude" => 2.2367,
               "region_code" => "95"
        },
          "AgentName" => "x1",
             "fields" => nil
    }

And the relevant segment of the mapping for the destination index looks as follows (I can provide the full template / mapping if needed for diagnosis):

  "GeoLocation": {
    "properties": {
      "area_code": {
        "type": "long"
      },
      "city_name": {
        "type": "keyword"
      },
      "continent_code": {
        "type": "text"
      },
      "coordinates": {
        "type": "geo_point"
      },
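
For reference, a geo_point field accepts several representations: an object with lat / lon keys, a "lat,lon" string, a geohash, or a longitude-first array. Any of these should index into the coordinates field above:

  { "GeoLocation": { "coordinates": { "lat": 48.9394, "lon": 2.2367 } } }
  { "GeoLocation": { "coordinates": "48.9394,2.2367" } }
  { "GeoLocation": { "coordinates": [ 2.2367, 48.9394 ] } }

So the [lon, lat] array in the rubydebug output above looks correct on its face.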

A few things I have already done to try to rectify this:

  1. Re-indexed the data with a new field name, removing a conflict on the original name at the same time.

  2. Removed any additional test / unneeded templates which may have overlapped with the template for this index.

  3. Examined the rubydebug / raw JSON output for the data being shipped.

  4. Manually indexed the document via the ES API, which produces the same error; against a test template, however, it does not (see the sketch after this list).
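
For step 4, the manual test looks roughly like this (a sketch; in reality I send the full document shown above, not just the GeoLocation part):

  curl -XPOST 'localhost:9200/ossec-2017.05.03/ossec?pretty' -H 'Content-Type: application/json' -d '
  {
    "GeoLocation": {
      "coordinates": [ 2.2367, 48.9394 ]
    }
  }'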

This is some of the geoip configuration within Logstash:

  if "" in [srcip] {
    geoip {
      source => "srcip"
      target => "GeoLocation"
      database => "/etc/logstash2/GeoLite2-City.mmdb"
      tag_on_failure => [""]
    }
  }
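
As an aside, the if "" in [srcip] condition is just a field-presence check (the empty string is a substring of any string value); if [srcip] would be an equivalent and arguably clearer way to write it:

  if [srcip] {
    geoip {
      source => "srcip"
      target => "GeoLocation"
      database => "/etc/logstash2/GeoLite2-City.mmdb"
      tag_on_failure => [""]
    }
  }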

And I also have the following:

  rename => [ "[GeoLocation][location]", "[GeoLocation][coordinates]" ]
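
That rename lives inside a mutate filter; as a minimal sketch (assuming no other options on the mutate), it looks like:

  mutate {
    # Move the geoip filter's location field onto the field mapped as geo_point
    rename => [ "[GeoLocation][location]", "[GeoLocation][coordinates]" ]
  }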

Any advice on this would be really helpful, as I'm unsure where to look next for a resolution.

Also, if I've missed any config or data that would help, let me know and I'll try to provide it.

Cheers.

David

Is that the actual output? It doesn't look valid with those [0] and [1] values in the array?

Please upgrade ES; there was a bug with GC in that version that was fixed in later releases.

Hi Mark,

No, this is a copy of the rubydebug log, so the [0] / [1] are just their positions in the array, and are not included in the document.

I will schedule an upgrade of the cluster.

Thanks,

David

Ahh yes, good point.

Bringing up the post a bit :slight_smile:

Hi Mark,

I've now gone ahead and upgraded the Elasticsearch cluster across the board to the latest stable release,

elasticsearch-5.3.2-1

Unfortunately, I am still receiving errors such as the following:

[2017-05-04T09:35:09,057][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ossec-2017.05.04", :_type=>"ossec", :_routing=>nil}, 2017-05-04T08:35:06.037Z X-1 %{message}], :response=>{"index"=>{"_index"=>"ossec-2017.05.04", "_type"=>"ossec", "_id"=>"AVvSmTfgfbn35aQWuyek", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"parse_exception", "reason"=>"geo_point expected"}}}}}

Bringing up the post.

I have resolved this issue; it appears there was a problem with the template here:

{
  "ossec-*": {
    "order": 0,
    "template": "ossec-*",
    "settings": {
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "wazuh": {

The mapping was defined under the type "wazuh"; however, the documents were being sent to a type named "ossec".

I've tested using the "_default_" mapping type, so the mapping applies regardless of the document type, and this has now fixed my issue. :smiley:
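
For reference, a minimal sketch of the corrected template, keeping only the relevant parts and assuming the _default_ type so the mapping applies to any document type (ossec included):

  {
    "order": 0,
    "template": "ossec-*",
    "settings": {
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "properties": {
          "GeoLocation": {
            "properties": {
              "coordinates": {
                "type": "geo_point"
              }
            }
          }
        }
      }
    }
  }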

