How do I create field types such as geo_point in a Logstash conf file?

I have location data and have been testing various formats related to geo_point, trying to get the machine to say "this is a geo_point", but it doesn't. I said to myself, "Logstash handles parsing and should take the burden of field type definitions off of Elasticsearch."

So I started looking deeper.

I was using mutate to convert some fields through the convert option, but from what I can tell, it doesn't take too keenly to date or geo_point.

Since mutate only seems to define base types, how do we handle the proper parsing of dates and geo_points in the configuration file?
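For context, this is roughly the kind of filter I was attempting (field names are my own). mutate's convert only supports basic scalar types, and the date filter parses datetime strings but does not produce a typed mapping:

```
filter {
  # convert only supports scalar types (integer, float, string,
  # boolean); there is no geo_point or date target
  mutate {
    convert => {
      "lat" => "float"
      "lon" => "float"
    }
  }
  # the date filter can parse a datetime string, but it writes the
  # result into @timestamp (or a "target" field), not a typed mapping
  date {
    match => [ "date", "ISO8601" ]
  }
}
```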

date is a simple datetime string.
location I have tried as a `%{lat},%{lon}` string and as an array of size 2. I have also tried an alternative: a location-map field, which is a hash with lat and lon keys. This fails too, as it assigns lat and lon as text fields, and location-map is still not a geo coordinate.
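As it turns out, each of those formats is one Elasticsearch can accept for a geo_point once the field is actually mapped as one (values below are made up):

```json
{ "location": "41.12,-71.34" }
{ "location": [ -71.34, 41.12 ] }
{ "location": { "lat": 41.12, "lon": -71.34 } }
```

Note that the array form is [lon, lat] (GeoJSON order), the opposite of the string form.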

Is this possible without sending PUT requests to the Elasticsearch instance? I have 200 servers right now, some of which create new indices daily or under particular conditions, so I'm not too keen on doing this manually. It just seems like there should be a simple solution contained entirely within Logstash, whose purpose is heavy lifting and parsing.

I'm not sure why this is an issue. I am just trying to load the map tool in Kibana, but my index doesn't show up, and it seems to be because there is no geo_point data in the index for it to use. How do we implement this in Logstash?

Basically, you don't. Logstash has no conception of a geo_point. You need an index template that tells Elasticsearch to treat a pair of numbers as a latitude and longitude. The default Logstash template maps [geoip][location] as a geo_point.
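As a sketch, a minimal template that maps a top-level location field as geo_point might look like this (the field name and pattern here are assumptions, and this is the legacy _template format; older Elasticsearch versions nest mappings under a type name, newer ones use composable templates):

```json
{
  "index_patterns": "logstash-*",
  "mappings": {
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}
```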

Since Logstash is a staging layer, it accepts data, augments and preps it, and ships it to Elasticsearch; Elasticsearch shouldn't have to format or handle anything additional, as it just verifies and adds documents. I would think that Logstash SHOULD know all the data types of Elasticsearch and be able to pass type metadata accordingly.

So, how does dynamically creating indices work? I have daily indices being created, but by your logic I would have to run a cron job on a worker machine every day to roll over all new indices and create this template. That seems like a lot for something that should be handled automatically.

FOR EXAMPLE: It looks like when you define an elasticsearch output, there is a manage_template option you can turn on, as well as template and template_name properties you can assign. This gives me some insight as a consumer that you can define a TEMPLATE which will be passed to Elasticsearch.

That way you don't have Elasticsearch defining incorrect data types, strings where they should be ints or floats, etc. During Logstash's initial push that creates an index, you can feed this metadata into Elasticsearch.
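A sketch of that output configuration, assuming a hypothetical template file path:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    # install the template from this file at startup
    manage_template    => true
    template           => "/etc/logstash/templates/geo.json"  # hypothetical path
    template_name      => "geo"
    template_overwrite => true
  }
}
```

With manage_template enabled, Logstash installs the template once at startup, and Elasticsearch then applies it automatically to every new index whose name matches the template's pattern, so daily indices work without any cron job.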

I haven't really looked into it personally and am trying to find better documentation on it, but it seems there is in fact a way, though I'm unsure of the format or where the default file should live.

A template includes a set of wildcard patterns that determine which indices it is applied to. For example, the default template's pattern

"index_patterns" : "logstash-*",

applies to all index names that start with `logstash-`.

Elasticsearch does not provide a way to specify a field's data type when indexing a document, although its dynamic mapping often guesses types correctly. It has to be done through an index template.
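If you would rather install the template yourself than have Logstash manage it, a one-time request along these lines does it (legacy _template endpoint; the template name and field are assumptions):

```
curl -X PUT "localhost:9200/_template/logstash-geo" \
  -H "Content-Type: application/json" \
  -d '{
        "index_patterns": "logstash-*",
        "mappings": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }'
```

Keep in mind that templates are applied at index creation time: existing indices keep their old mappings, and only indices created after the template is installed pick it up.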

With that knowledge, I noticed that index management from within Kibana seems to be read-only. Is there a mechanism in place to make this change from within Kibana? If not, that is OK.

I will just have to create a set of configuration scripts that create a template for the location field and make sure the index pattern matches the various dynamically created indices (there is a naming format I can generally follow).
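Note that index_patterns accepts an array, so one template can cover several dynamic naming schemes (the patterns below are hypothetical):

```json
{
  "index_patterns": [ "logstash-*", "metrics-*" ],
  "mappings": {
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}
```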

This has been resolved. What ended up happening is that I saved an index template, but the index did not appropriately apply it. I did a refresh on the index and it picked up the index template. This took me a long time to get working, because the index was supposed to pick up the template when it was created but wasn't. I guess the template wasn't applied immediately when it was saved.