Some logs not being ingested from LS to ES

Some logs from the same LS instance are not being imported into ES. When I send just the problem logs to ES, I see the following notification: "the indices which match this index pattern don't contain any time fields". LS appears to be parsing the logs just fine; all logs are single-line JSON. The "@timestamp" handling seems to work for every log except the problem ones, even though the LS debug output looks correct for all of them.

My date filter:

date {
  match        => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
  timezone     => "UTC"
  target       => "@timestamp"
  remove_field => ["timestamp"]
}

The "timestamp" field before LS:

"timestamp": "2017-09-11 21:52:45"

The "@timestamp" field after the date filter is applied:

"@timestamp"=>2017-09-11T21:52:45.000Z

Not sure what I'm screwing up or if this really is the source of the problem.

Raw JSON log:
{"tcp_flags": 16, "icmp_code": null, "tcp_win": 1024, "tcp_ack": 2915778850, "tcp_urp": 0, "icmp_type": null, "ip_len": 52, "src_ip": "10.10.10.10", "src_port": 39686, "ip_hlen": 5, "ip_off": 0, "tcp_csum": 43620, "ip_ttl": 64, "sid": 4, "ip_tos": 0, "ip_csum": 32849, "tcp_res": 0, "status": 0, "t_url": "https://192.168.1.3/alert.php?&testing=5", "timestamp": "2017-09-11 21:52:45", "ip_ver": 4, "ip_flags": 2, "tcp_off": 8, "ip_id": 57472, "cid": 5493, "DataType": "ids", "tcp_seq": 1593577001, "dst_port": 443, "signature": "ALLERT Raised on traffic", "ip_proto": 6, "dst_ip": "2.3.4.5", "CustID": "Test"}

LS processed log:
{"event"=>{"icmp_type"=>nil, "ip_proto"=>6, "ip_tos"=>0, "signature"=>"ALLERT Raised on traffic", "tcp_off"=>8, "ip_id"=>57472, "tcp_urp"=>0, "dst_ip"=>"2.3.4.5", "sid"=>4, "ip_ver"=>4, "src_ip"=>"10.10.10.10", "ip_ttl"=>64, "ip_off"=>0, "tcp_flags"=>16, "ip_csum"=>32849, "tcp_seq"=>1593577001, "@version"=>"1", "host"=>"test", "DataType"=>"ids", "src_geoip"=>{}, "icmp_code"=>nil, "tcp_res"=>0, "tcp_win"=>1024, "ip_flags"=>2, "src_port"=>39686, "tcp_ack"=>2915778850, "@timestamp"=>2017-09-11T21:52:45.000Z, "CustID"=>"Test", "dst_port"=>443, "t_url"=>"https://192.168.1.3/alert.php?&testing=5", "ip_hlen"=>5, "ip_len"=>52, "tcp_csum"=>43620, "cid"=>5493}}

What does the mapping for the index look like?

I've removed my index template mappings for "logstash-*" and tried just using the default ES dynamic mappings; I see the same behavior with or without my mappings.

Unfortunately my template mapping is huge.

Here's the "@timestamp" portion.

"properties" : {
    "@timestamp": { "type": "date" },

What version of the stack are you on?

ES 5.5.2

Testing Elastic's cloud offering.

Just upgraded to 5.6 and seeing the same issue.

What about the actual mapping though, not the template?
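You can pull the live mapping straight out of ES and compare it against the template. Something like this (host and index name are examples; adjust to yours):

curl -XGET 'http://localhost:9200/logstash-2017.09.11/_mapping?pretty'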

I am seeing a similar issue. I've narrowed it down to the Logstash json filter. I've removed all other filters and processing on the incoming messages and wrote the output to a log file instead of Elasticsearch. For the same 100 messages I post to Logstash, I see a variable number of messages in my output file. I have another thread describing the same issue. Perhaps try writing a sample of the messages to a log file; that way you can narrow it down too.
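For example, something like this as the only output while debugging (the path is just an example):

output {
  file {
    # one JSON document per line, easy to count and diff against the input
    path  => "/tmp/ls-debug.log"
    codec => json_lines
  }
}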

I've also noticed that X-Pack shows 100 ingested events and 100 emitted events in Kibana, but when Logstash persists to a file, a few are missing... a random number every single time. I suspect a threading bug, but I hope not, as we are blocked from moving to production because of this issue.

So in my case I've narrowed it down to the "geoip" filters. If I remove these filters in my LS pipeline my logs come in fine. I have two filters:

geoip {
    source => "src_ip"
    target => "src_geoip"
}

geoip {
    source => "dst_ip"
    target => "dst_geoip"
}

Not sure yet why they are causing this issue.

What is odd is that the logs that do work also have those filters applied, and I get the GeoIP data on them in ES. But removing those two filters fixes the problem for me so far.
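One way to confirm whether ES is rejecting those events is to take one of the problem documents and index it directly with curl; the error response will name the offending field. Something like this, where problem-event.json is a file holding one of the raw events, and the host, index, and type names are just examples:

curl -XPOST 'http://localhost:9200/logstash-2017.09.11/ids?pretty' \
  -H 'Content-Type: application/json' \
  -d @problem-event.json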

Here's the actual mappings for src_geoip and dst_geoip:

      "dst_geoip": {
        "properties": {
          "city_name": {
            "type": "keyword"
          },
          "continent_code": {
            "type": "keyword"
          },
          "country_code2": {
            "type": "keyword"
          },
          "country_code3": {
            "type": "keyword"
          },
          "country_name": {
            "type": "keyword"
          },
          "ip": {
            "type": "ip"
          },
          "latitude": {
            "type": "half_float"
          },
          "location": {
            "type": "geo_point"
          },
          "longitude": {
            "type": "half_float"
          },
          "postal_code": {
            "type": "keyword"
          },
          "region_code": {
            "type": "short"
          },
          "region_name": {
            "type": "keyword"
          },
          "timezone": {
            "type": "keyword"
          }
        }
      },


      "src_geoip": {
        "properties": {
          "city_name": {
            "type": "keyword"
          },
          "continent_code": {
            "type": "keyword"
          },
          "country_code2": {
            "type": "keyword"
          },
          "country_code3": {
            "type": "keyword"
          },
          "country_name": {
            "type": "keyword"
          },
          "ip": {
            "type": "ip"
          },
          "latitude": {
            "type": "half_float"
          },
          "location": {
            "type": "geo_point"
          },
          "longitude": {
            "type": "half_float"
          },
          "postal_code": {
            "type": "keyword"
          },
          "region_code": {
            "type": "short"
          },
          "region_name": {
            "type": "keyword"
          },
          "timezone": {
            "type": "keyword"
          }
        }
      }

Yep, it's official, I'm blind. I had region_code defined as a short type. I don't remember why I did that, but I changed it to keyword and all is good. My apologies.
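For anyone who lands here later: the geoip filter emits region_code as a string (e.g. "CA"), and some regions happen to have purely numeric codes, which presumably coerced into the short mapping while the alphabetic ones were rejected with a mapper_parsing_exception; that would explain why only some logs disappeared. A minimal reproduction against a throwaway index (index and type names are examples):

curl -XPUT 'http://localhost:9200/geoip-test?pretty' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "src_geoip": {
          "properties": {
            "region_code": { "type": "short" }
          }
        }
      }
    }
  }
}'

curl -XPOST 'http://localhost:9200/geoip-test/doc?pretty' -H 'Content-Type: application/json' -d '
{ "src_geoip": { "region_code": "CA" } }'

The second call fails with a 400 mapper_parsing_exception (failed to parse [src_geoip.region_code]); change the type to keyword and the same document indexes fine.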

All good, thanks for clarifying the root problem!
