Using custom target for geoip filter, how to update Elasticsearch template?

I am enriching my firewall logs with GeoIP information for the source and destination IPs. Because of that, I have to use custom target field names: src_geoip and dst_geoip.

This is the relevant part for my filter:

      geoip { source => "source" target => "src_geoip" }
      geoip { source => "destination" target => "dst_geoip" }

But in my index, these fields aren't mapped with the correct type for Kibana to display them on a map using the lat/lon values.

The error message is:

No Compatible Fields: The "myindex-*" index pattern does not contain any of the following field types: geo_point

How do I update the index so that the fields have the right type? The index is named myindex-%{+YYYY.MM.dd}, so it creates a new index daily. Do I have to update every index every day, or is there a way to set this for all future indices instead?

You need to create an index template that applies to that index pattern and contains the correct mappings for those fields.
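For example, something along these lines (a sketch, not a drop-in config: it assumes Elasticsearch 6.x and Logstash's default `doc` document type; the geoip filter writes the coordinates to `src_geoip.location` / `dst_geoip.location` by default):

    PUT _template/myindex
    {
      "index_patterns": ["myindex-*"],
      "mappings": {
        "doc": {
          "properties": {
            "src_geoip": {
              "properties": { "location": { "type": "geo_point" } }
            },
            "dst_geoip": {
              "properties": { "location": { "type": "geo_point" } }
            }
          }
        }
      }
    }

Note that a template only applies to indices created after it exists, so today's index will need to be reindexed or deleted (or just wait for tomorrow's rollover) before the new mapping takes effect.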

Thank you. I have done it by creating the template for myindex-*, then deleting the existing myindex-* indices. Now an empty index for myindex-<today's date> gets created, I'm guessing by Logstash, but it never fills: it's always sitting at 0 documents even though I know events are being sent. I can't see any error message in the Logstash or ES consoles. Why isn't it filling anymore?

Look in the Elasticsearch and Logstash logs, preferably with debug mode enabled. If you have used a type in your index template that is different from the one Logstash is sending, all indexing will fail, as having two different types in one index is no longer allowed in Elasticsearch 6.x. Any kind of mapping conflict could also cause indexing to fail.
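One quick sanity check (the index name below is just an example) is to compare the type name declared in your template with the one in the index Logstash actually created:

    GET _template/myindex
    GET myindex-2018.01.01/_mapping

If the two type names differ (e.g. `doc` vs `logs`), every bulk request from Logstash will be rejected.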

Would that be in Logstash's logs or ES's? I have turned on debug mode for Logstash and I don't see any errors.

I don't know how to turn it on for ES; I don't know which logging facilities there are or which one I want to look at.

It should be in the Logstash logs.

I don't see anything suspicious in there, only that it sees the events coming in and processes them. No failures there. The document count in that index is still sitting at 0, though, despite Logstash receiving several events per second.

--edit: I found the issue by looking into the dead_letter_queue logs. There was a type mismatch in some fields, i.e. I was trying to put a value larger than a short into a field mapped as type short. I changed the myindex-* mapping and deleted the index. It's now filling with documents. Thank you for your help!
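In case it helps anyone else: the dead letter queue is disabled by default, so it first has to be turned on in logstash.yml (the path below is an assumption, adjust it to your install):

    # logstash.yml -- the DLQ is off by default
    dead_letter_queue.enable: true
    path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"

The rejected events, including the reason Elasticsearch refused them, can then be dumped with a small pipeline using the dead_letter_queue input plugin:

    # dlq-inspect.conf -- print each dead-lettered event plus its failure metadata
    input {
      dead_letter_queue {
        path => "/var/lib/logstash/dead_letter_queue"
        commit_offsets => false   # re-read from the start on every run
      }
    }
    output {
      stdout { codec => rubydebug { metadata => true } }
    }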
