Logstash mapping issue

Hello,

I hope I'm in the right category. I'm new to the site and new to ELK. I am using pfSense + Elasticsearch 2.2.0, Logstash 2.2.1, and Kibana 4.4.1. I'm having a couple of issues, and I'm not sure whether this first one is actually a problem, but when I do:
curl -XGET localhost:9200/logstash-*/_mapping/?pretty
I get the following, which is my mapping twice:

Link to output

Again, I don't know if it's an issue or how to fix it. The other issue I'm having is that I don't get any geoip fields in my Kibana Discover view, even though I do see them under Settings > Indices. I don't know if the two are related, so I'm posting it here as well. Thanks for any insight into my issue.

You're not seeing the same mapping twice; there are two different indices listed in the mappings output (logstash-2016.03.16 and logstash-2016.03.15).

Ok, is there a way to delete one of them?
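For reference, indices can be listed and deleted through the REST API. A sketch, assuming Elasticsearch is on localhost:9200 as in the curl command above; note that Logstash creates one index per day by default, so deleting one only removes that day's events:

```shell
# List all indices; Logstash creates one per day (logstash-YYYY.MM.DD) by default
curl -XGET 'localhost:9200/_cat/indices?v'

# Delete a single daily index (this permanently removes that day's data)
curl -XDELETE 'localhost:9200/logstash-2016.03.15'
```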

You have an index mapping problem. What you can do is update Logstash's default mapping template by running:

PUT _template/logstash
{
  "template": "logstash-*",
  "mappings": {
    "syslog": {
      "dynamic_templates": [
        {
          "message_field": {
            "mapping": {
              "index": "analyzed",
              "omit_norms": true,
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "message"
          }
        },
        {
          "string_fields": {
            "mapping": {
              "index": "not_analyzed",
              "omit_norms": true,
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "_all": {
        "omit_norms": true,
        "enabled": false
      },
      "properties": {
        "geoip": {
          "dynamic": true,
          "type": "object",
          "properties": {
            "city_name": {
              "index": "not_analyzed",
              "type": "string"
            },
            "timezone": {
              "index": "not_analyzed",
              "type": "string"
            },
            "country_code2": {
              "index": "not_analyzed",
              "type": "string"
            },
            "country_name": {
              "index": "not_analyzed",
              "type": "string"
            },
            "continent_code": {
              "index": "not_analyzed",
              "type": "string"
            },
            "location": {
              "type": "geo_point",
              "doc_values": true
            },
            "region_name": {
              "index": "not_analyzed",
              "type": "string"
            },
            "real_region_name": {
              "index": "not_analyzed",
              "type": "string"
            },
            "postal_code": {
              "index": "not_analyzed",
              "type": "string"
            }
          }
        }
      }
    }
  }
}
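One caveat worth noting (standard Elasticsearch behavior): index templates are only applied when an index is created, so the updated template won't change the mappings of the existing daily indices. A sketch of applying it with curl, assuming the template body above is saved to a local file named template.json (the filename is just an example):

```shell
# Install/overwrite the logstash index template from a local file
curl -XPUT 'localhost:9200/_template/logstash' -d @template.json

# Templates only apply at index creation, so drop the existing daily
# indices (this deletes their data) and let Logstash recreate them:
curl -XDELETE 'localhost:9200/logstash-2016.03.15,logstash-2016.03.16'
```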

Install the elasticsearch-kopf plugin on any of your ES nodes to manage index templates more easily. Either Kopf or Sense (in Kibana) will help you here.

The geoip.location field must be of the geo_point data type, and you should not have all your string fields analyzed unless necessary.

Thanks, I'll have to look that over to get a feel for what's going on. What I currently have is from this site.

This is what it looks like; it isn't right, but I'm trying to work through it.

March 16th 2016, 00:16:58.000 message:58,16777216,,1000002620,em1,match,block,in,4,0x0,,1,7124,0,none,17,udp,201,192.168.1.141,239.255.255.250,52480,1900,181 @version:1 @timestamp:March 16th 2016, 00:16:58.000 type:syslog host:192.168.1.167 tags:PFSense, firewall evtid:134 prog:filterlog rule:58 sub_rule:16777216 tracker:1000002620 iface:em1 reason:match action:block direction:in ip_ver:4 tos:0x0 ttl:1 id:7124 offset:0 flags:none proto_id:17 proto:udp length:201 src_ip:192.168.1.141 dest_ip:239.255.255.250 src_port:52480 dest_port:1900 data_length:181 _id:AVN-tr3Pws0CvuU37h4w _type:syslog _index:logstash-2016.03.16 _score:

Ok, I got further. I'm getting my information from pfSense and Snort, but Logstash isn't breaking up the TCP string, and that's why I'm not getting separate IPs. Here is what it looks like: message:[119:2:1] (http_inspect) DOUBLE DECODING ATTACK [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 192.168.1.166:57811 -> 63.251.98.12:80 @version:1 @timestamp:March 17th 2016, 13:22:52.000 type:syslog host:192.168.1.167 tags:PFSense evtid:33 prog:snort[79032] _id:AVOFmfae16oo5OzDbW9d _type:syslog _index:logstash-2016.03.17 _score:

What I can't figure out is how to separate the bold part (the {TCP} 192.168.1.166:57811 -> 63.251.98.12:80 portion). Any help? Thanks again!

Are you using grok to parse the message?

I am. It is as follows: Grok File
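For reference, the Snort alert line has a fairly regular shape, so a grok filter along these lines might split it. This is only a sketch; the pattern and field names are assumptions, not taken from the linked grok file:

```
filter {
  grok {
    # Pull the gid:sid:rev triple, the classification/priority blocks,
    # and the {PROTO} src:port -> dst:port tail out of the snort message
    match => { "message" => "\[%{INT:gid}:%{INT:sid}:%{INT:rev}\] %{DATA:alert} \[Classification: %{DATA:classification}\] \[Priority: %{INT:priority}\] \{%{WORD:proto}\} %{IP:src_ip}:%{INT:src_port} -> %{IP:dest_ip}:%{INT:dest_port}" }
  }
}
```

The Grok Debugger (or Kibana's Dev Tools in later versions) is useful for testing a pattern like this against a sample message before putting it in the pipeline.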

Any ideas?

I'm not familiar with grok, but I think you can use a regex to extract the data you need from the message.
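Along those lines, the proto/src/dst tail of the Snort line can indeed be pulled apart with a plain regex. A minimal shell sketch (sed here is only to illustrate the expression; in Logstash the same regex would live inside a grok filter):

```shell
msg='[119:2:1] (http_inspect) DOUBLE DECODING ATTACK [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 192.168.1.166:57811 -> 63.251.98.12:80'

# Capture protocol, source ip:port and destination ip:port from the tail of the alert
echo "$msg" | sed -E 's/.*\{([A-Z]+)\} ([0-9.]+):([0-9]+) -> ([0-9.]+):([0-9]+)$/proto=\1 src_ip=\2 src_port=\3 dest_ip=\4 dest_port=\5/'
# -> proto=TCP src_ip=192.168.1.166 src_port=57811 dest_ip=63.251.98.12 dest_port=80
```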