I am attempting to set up geoip on Logstash 7.3.0. I have seen an old post on how to set it up, but I want to make sure I'm doing it right. In the Logstash config file I add the code below. Is this correct?
The default template for an elasticsearch output defines the location field of an object called geoip as a geo_point. That matches the default target of the geoip filter. If you use a different target, you need a different template.
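For illustration, a minimal filter that keeps the default target might look like the sketch below. The `source` field name is an assumption; use whichever field actually holds the IP in your events.

```
filter {
  geoip {
    # the field containing the IP to look up (assumed here)
    source => "[source][ip]"
    # no "target" is set, so results land under [geoip],
    # and [geoip][location] matches the template's geo_point mapping
  }
}
```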
When I try to implement that in the Logstash config file I get a syntax error and the main pipeline stops. I can't seem to find how to do this on 7.3.0. What am I doing wrong?
If source.ip is a nested field, you should address it as [source][ip].
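In Logstash's field-reference syntax, each level of nesting gets its own bracket pair, so the two notations mean different things. A short sketch:

```
# [source.ip]   -- a single flat field literally named "source.ip"
# [source][ip]  -- the field "ip" nested inside the object "source"
filter {
  if [source][ip] {
    geoip {
      source => "[source][ip]"
    }
  }
}
```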
You should change your if structure. If the geoip filter tries to resolve a local address, it fails. In that case none of the [geo/geoip][...] fields will exist.
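One way to structure the conditionals, as a sketch: the geoip filter tags the event when a lookup fails (by default with `_geoip_lookup_failure`), so you can gate any use of the [geoip][...] fields on that tag. The field names here are assumptions based on Packetbeat-style events.

```
filter {
  geoip {
    source => "[source][ip]"
  }
  # only touch [geoip][...] fields when the lookup actually succeeded;
  # for local/private addresses the lookup fails and the fields never exist
  if "_geoip_lookup_failure" not in [tags] {
    mutate {
      replace => { "[city]" => "%{[geoip][city_name]}" }
    }
  }
}
```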
Thank you for all your help. Does this look right? I'm new to this and learning as I go; I figure a lot out by trial and error. Another dumb question: how do I know if a field is nested? I'm using Packetbeat for the IP, and one of the categories is source.ip.
Replace does not create fields, and the fields do not exist until they are created. The geoip filter creates these fields if it succeeds, and you can use mutate add_field to create them yourself.
How would I go about creating them the first time around? Would I use create instead of replace? Once they are created, can I change the create back to replace? I'm not sure how to address that issue.
Well, you misunderstand Elasticsearch.
You do not create fields for an Elasticsearch "table". You have to create the fields in each document. You can store very small documents containing only a few fields in the same Elasticsearch index together with documents containing hundreds of fields. But if a field is not inside a document, you have to create it (e.g. with mutate add_field) before you can use it.
I'm trying to understand elasticsearch a little bit every day. Is there a way to see which fields are already created? Is there a better way to create the fields before I implement the Logstash filter?
elasticsearch stores documents.
Every event in Logstash is one document.
A document may have several fields, and Elasticsearch does not need to know in advance which ones. You can create any field inside a Logstash event, and Elasticsearch will store it. If every event should have a field called "test_data", you have to create it in every event separately. You cannot create it "one time for all documents" - not in Logstash, and not in Elasticsearch.
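As a sketch of creating such a field per event in Logstash (the field name and value are just examples, not anything from your pipeline):

```
filter {
  mutate {
    # this runs for every event passing through the pipeline, so every
    # resulting document gets the field; documents indexed without this
    # filter simply will not have it
    add_field => { "test_data" => "example value" }
  }
}
```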