I'm mining apache2 logs using Filebeat. The data is being put into 'apache2-*' indices via Logstash.
If I look at the data using the Discover function I can see 'geoip.location.lat/lon' and 'location.lat/lon', but attempts to use this data in Kibana are fruitless. Creating a map visualization gives me the error message: "No Compatible Fields: The "apache2*" index pattern does not contain any of the following field types: geo_point".
What do I need to do with my data to use it this way? Have I made a configuration error somewhere along the line?
That mapping is incorrect. location would be a property of geoip. Not only that, but as @warkolm said, that needs to be collapsed into a geo_point type.
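For reference, here's roughly what that part of the default Logstash index template looks like (a sketch from memory; double check it against your own GET output below):

"geoip": {
  "dynamic": true,
  "properties": {
    "ip": { "type": "ip" },
    "location": { "type": "geo_point" },
    "latitude": { "type": "half_float" },
    "longitude": { "type": "half_float" }
  }
}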
If you're using the geoip filter in Logstash, this will work. The easiest thing to do may be to copy the Logstash index template and create a new template that matches your index pattern, i.e., using the Dev console:
GET _template/logstash
copy the JSON output and edit the index_patterns setting to be something like "apache2-*" to match your new index, then
PUT _template/logstash-apache
paste the copied JSON with the new index pattern directly below the PUT and run it.
Now, when a new apache2-* index is created, it should use this mapping and be ready to go, assuming you're using the geoip filter in Logstash.
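Put together, the sequence looks something like this. In practice you paste the full JSON returned by the GET; the version below is trimmed to just the geo_point piece that matters here, and assumes 7.x-style single-type mappings (older versions have an extra type level inside mappings):

GET _template/logstash

PUT _template/logstash-apache
{
  "index_patterns": ["apache2-*"],
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}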
Ok! I followed your (great) instructions and created a new template. (BTW, I looked high and low in the online documentation and didn't find this information... hint.) Once it was created, I took the further step of deleting the existing apache2 logs from Elasticsearch and Kibana.
A new log entry for apache2 was promptly created, but it doesn't seem to have the appropriate data type:
What is that JSON from, the index mapping or the template? If it's the mapping, maybe you have multiple templates and one of them is incorrect? Or the original logstash template you copied is not correct? The geoip.location field in your original logstash template should look like what I pasted, unless you had previously modified it. Double check the templates.
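You can check both from the Dev console, e.g. (the index pattern in the second request is an assumption based on your index names):

GET _template

GET apache2-*/_mapping/field/geoip.location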
Are you using the Apache module in Filebeat, or just pointing Filebeat at the Apache logs and then parsing with Logstash? Unless there is additional event processing that is only available in Logstash, it's easier to just use the Apache module in Filebeat. The work is already done for you via the pre-built ingest pipelines. Did you double check the template to make sure it's correct?
Interesting... well, the Apache module in Filebeat should already be taking care of the geoip information for you; it's built into the ingest pipeline. You need to make sure the ingest-geoip and ingest-user-agent plugins are installed on your Elasticsearch instances. Just use the default index names at first to see if you can get the data in correctly. It's pointless to send the data to a Logstash instance unless you need additional event processing that's only available via Logstash. The ingest node in Elasticsearch has quite a few processors already, and if you're using one of the built-in modules, all of the work is done for you! Set your Filebeat output to point to an Elasticsearch host instead, using the default index names, and let's see if it works.
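As a rough sketch of that setup (the host is a placeholder, and apache2 is the module name in the Filebeat releases of this era), on each Elasticsearch node, then restart:

bin/elasticsearch-plugin install ingest-geoip
bin/elasticsearch-plugin install ingest-user-agent

On the Filebeat host, enable the module and load its pipelines and templates:

filebeat modules enable apache2
filebeat setup

And in filebeat.yml, point the output straight at Elasticsearch:

output.elasticsearch:
  hosts: ["localhost:9200"]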
I updated the Logstash config to include '"template_name" => "apache2"' in the output section. Looking at the data confirms that I'm getting geoip points now.
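For anyone who finds this later, the output section now looks roughly like this (the hosts and index values are placeholders for my actual settings):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache2-%{+YYYY.MM.dd}"
    template_name => "apache2"
  }
}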
Thank you for sticking with me on this. Your help was invaluable.