I am having the same problem as listed at this link
I first attempted to update the Logstash template as described, but that wiped out everything except the location field and caused writes from Logstash to Elasticsearch to fail.
I deleted my entire elasticsearch data directory after being unable to recover it.
I then read the last post more closely, which points out that the geoip.location field is already mapped as a geo_point type in the default Logstash template.
So I updated Logstash to send to the logstash-netflow-date and logstash-syslog-date indices, and updated my geoip filters so that the location field falls under a parent field named geoip:
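The geoip change above amounts to something like the following (a minimal sketch; the source field name is a placeholder, and I'm assuming "date" in the index names stands for a daily date suffix):

```
filter {
  geoip {
    # "src_ip" is a placeholder for whatever field holds the IP address
    source => "src_ip"
    # nesting the results under "geoip" produces geoip.location,
    # which the default Logstash template maps as geo_point
    target => "geoip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # assumed daily date suffix on the index name
    index => "logstash-netflow-%{+YYYY.MM.dd}"
  }
}
```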
Okay. My takeaway from that thread is that there are differences between the default template I'm using and the template that the people in the initial thread I linked were using.
But which differences matter?
Attempting this (with the obvious change to the index name) wiped the template again.
Is it supposed to be adding it to the existing template? Or should I be copying my existing template, editing this part in, and then posting it?
Sorry, I didn't save my edit. See the second link in my first post. That shows how to add a second template that contains the mapping you want whilst retaining the default template.
So I understand that if I perform a PUT on a template, that template is overwritten entirely. Is that correct?
I also have the option of creating a second template and structuring my index names so that they match one or more templates, which will be merged for that index.
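That second-template approach might look roughly like this (a sketch, not a tested config: the template name and mapping shape are my own, and the example uses the Elasticsearch 6.x format, where mappings still require a type level; 7.x drops it):

```
# Hypothetical second template named "logstash-geoip". It matches the
# same indices as the default logstash template and is merged with it
# at index creation because its order is higher.
curl -XPUT 'http://localhost:9200/_template/logstash-geoip' \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "mappings": {
    "doc": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'
```

Since a PUT replaces the named template wholesale, keeping the geo_point mapping in its own template means the default logstash template is never touched.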
@jmoffitt, I don't want to deter you from trying to build this all yourself, however you might want to consider looking at ElastiFlow (https://github.com/robcowart/elastiflow). It handles the lookups you are trying to get working, and A LOT more. It will at least give you examples of almost anything you would ever want to try to do with Logstash.
I believe I initially used that solution, and it was great at pointing me in the right direction. However, I recall that it used the Netflow module rather than a Netflow pipeline, which prevented me from using that instance of Logstash for syslogs or anything else.
In any case this is as much an exercise in learning Logstash and Elasticsearch as anything else.
ElastiFlow definitely doesn't use the Netflow module. The Netflow module was based on v1.0.0 of ElastiFlow and is actually quite dated at this point. The current ElastiFlow release is 3.2.1.
Since you are in a learning phase, you might also want to take a look at...
And to see a good example of using grok patterns to handle a bunch of the weird variety of syslog headers that vendors can throw at you...