Can someone help me with the configuration of a Logstash 5 conf file and/or Kibana 5?
I'm running Logstash, Elasticsearch and Kibana locally on a Mac (OS X 10.10.x).
The topic is loading CSV files that include geo locations (latitude, longitude).
Use case
importing a given CSV file using Logstash
the CSV is structured in lines/columns and includes latitude/longitude geo locations
running the conf file pasted below (a skeleton follows this list)
exploring the geo data in a Kibana 5 tile map
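As a skeleton of what that conf does (placeholder path and column names; the full file is pasted below):

input {
  file {
    path => "/path/to/vouchers.csv"       # placeholder path
    start_position => "beginning"
    sincedb_path => "/dev/null"           # re-read the file on every test run
  }
}
filter {
  csv {
    separator => ","
    columns => [ "voucher_date", "latitude", "longitude" ]   # placeholder columns
  }
}
output {
  elasticsearch { hosts => [ "localhost:9200" ] }
  stdout { codec => rubydebug }
}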
Results
data is piped into Elasticsearch and shows up in Kibana
the lat/lon are shown as a Geohash in the tile map configuration section
in the subsequent dropdown, geoip.longitude appears
once I pick the geoip field, Kibana prompts "no results found"
Question
could you please double-check my conf file example?
do I need to reconfigure my Logstash load file?
is there anything I need to configure/set in Kibana to get my "location" working?
I'm afraid my data conversion isn't quite right; I'm still trying to figure out that particular item.
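As far as I can tell, the tile map's Geohash aggregation only offers fields that are mapped as geo_point, and with the stock logstash-* index template that is [geoip][location]. A sketch of the conversion I'm attempting, reusing that field so no custom mapping is needed (column names assumed):

filter {
  mutate {
    convert => { "latitude" => "float"  "longitude" => "float" }
  }
  mutate {
    # the stock logstash-* template maps [geoip][location] as geo_point;
    # renaming the columns into it avoids defining a custom mapping
    rename => {
      "latitude"  => "[geoip][location][lat]"
      "longitude" => "[geoip][location][lon]"
    }
  }
}

Renaming (rather than add_field) also drops the original string columns, so the coordinates aren't indexed twice.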
Nevertheless, I came across another issue. While configuring the *.conf file I was dealing with
an input CSV of about 100 rows.
Now, when scaling the file to its full size of about 500,000 rows, the following happens:
Logstash processes until "Successfully started Logstash API endpoint {:port=>9600}"
then nothing else happens
a few lines above that, Logstash prints "Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}"
if I exclude the latitude/longitude columns, the file is processed (or sometimes not)
I never had that kind of experience with the 4.x stack.
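To narrow this down, I'm feeding a slice of the file through a stdin input with a rubydebug output, so I can see immediately whether any events come out of the csv filter at all (file name is a placeholder):

input { stdin { } }
filter {
  csv {
    separator => ","
    columns => [ "voucher_date", "latitude", "longitude" ]   # placeholder columns
  }
}
output { stdout { codec => rubydebug } }

Run, for example, as: head -n 1000 vouchers.csv | bin/logstash -f csv-debug.conf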
Matching dates also seems to be tricky. I just want to index the "voucher_date" field (YYYY-MM-DD in the source). So far it doesn't work with

date {
  match => [ "voucher_date", "YYYY MM dd" ]
}
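From the Joda-time notes in the date filter docs I gather the pattern has to match the literal dashes, and that yyyy (calendar year) is safer than YYYY (week year), so presumably it should read:

date {
  match => [ "voucher_date", "yyyy-MM-dd" ]
  target => "voucher_date"   # replace the string with the parsed date
}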
Does anyone have an idea how to design a *.conf using csv, geo (latitude, longitude), a voucher_date string-to-date conversion, and more than 100 rows in the source file?
Thanks for keeping up with the above. It took a little time to figure out.
Once I converted the *.csv from UTF-8 "Legacy Mac OS (CR)" to "Unix (LF)" line endings, it started working; with CR-only endings the file input apparently never saw its default \n line delimiter, so no events were emitted.
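For the record, the file input also has a delimiter setting, so the alternative to converting the file would presumably have been (path is a placeholder):

input {
  file {
    path => "/path/to/vouchers.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    delimiter => "\r"   # classic Mac OS line endings
  }
}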
Yep, GH = GitHub, isn't it? I may need a little time, as I'm a bit busy at the moment.
Now I'm struggling with the geo data again: it doesn't appear as geo_point/geohash in Kibana.
Regarding the import issue, I'm afraid I'm somehow lost; I want to review and adjust my complete
workflow end-to-end.
Is there a paper/blog/website that provides an "ES 5 stack CSV-geo" breakdown, in principle
or from a bird's-eye perspective, based on an example or something similar? I'm looking for
a kind of recipe or a handy checklist.
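Until someone points me to such a recipe, this is the end-to-end conf I'm converging on, combining the pieces above (a sketch with placeholder path and column names; I'd load into a fresh index, since an existing field mapping can't be changed in place):

input {
  file {
    path => "/path/to/vouchers.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => [ "voucher_date", "latitude", "longitude" ]
  }
  date {
    match => [ "voucher_date", "yyyy-MM-dd" ]
    target => "voucher_date"
  }
  mutate {
    convert => { "latitude" => "float"  "longitude" => "float" }
  }
  mutate {
    rename => {
      "latitude"  => "[geoip][location][lat]"
      "longitude" => "[geoip][location][lon]"
    }
  }
}
output {
  # default logstash-%{+YYYY.MM.dd} index, so the stock template applies
  elasticsearch { hosts => [ "localhost:9200" ] }
  stdout { codec => rubydebug }
}

After indexing, refreshing the index pattern's field list in Kibana should make geoip.location selectable for the tile map's Geohash aggregation.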