And the raw log line is:

    - - - [11/Jan/2017:16:30:41 +0530] "GET /report_generator_hartron/index.php HTTP/1.1" 200 596 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/55.0.2883.87 Chrome/55.0.2883.87 Safari/537.36"
One more thing: when I added this geoip filter to my Logstash configuration, a "_geoip_lookup_failure" error appeared under the tags field.
Perhaps you can use the translate filter to map your 10.0.0.0/8 addresses to geographic locations? If you don't want to list all possible addresses then the cidr filter should be helpful. A combination of the two might be the best option.
You obviously need to obtain and maintain the mapping between your 10.0.0.0/8 addresses and geolocations. Nobody can help you with that.
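Something along these lines, assuming the client address is in a clientip field (the subnets, site names, and coordinates below are placeholders, not anything from your setup):

    filter {
      # Tag each event with a site name based on the subnet its
      # client IP falls into.
      cidr {
        address   => [ "%{clientip}" ]
        network   => [ "10.1.0.0/16" ]
        add_field => { "site" => "office-a" }
      }
      cidr {
        address   => [ "%{clientip}" ]
        network   => [ "10.2.0.0/16" ]
        add_field => { "site" => "office-b" }
      }
      # Translate the site name into coordinates. The "lat,lon"
      # strings are placeholders; use whatever location data you
      # maintain for your sites.
      translate {
        field       => "site"
        destination => "[geoip][location]"
        dictionary  => [
          "office-a", "28.61,77.21",
          "office-b", "19.08,72.88"
        ]
      }
    }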
I don't know what's up in your case. You're not getting a _geoip_lookup_failure tag so it looks like it's succeeding. Also, it works fine for me with Logstash 2.4.0:
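For reference, a minimal test pipeline along those lines might look like this (clientip is an assumed field name and 8.8.8.8 is just a public address with a known geolocation):

    input { stdin {} }

    filter {
      grok {
        # Pull the first IP address out of the line into clientip.
        match => { "message" => "%{IP:clientip}" }
      }
      geoip {
        source => "clientip"
      }
    }

    output { stdout { codec => rubydebug } }

Feeding it a public address, e.g. echo "8.8.8.8" | bin/logstash -f test.conf, should print an event with a populated geoip field, while a 10.x address will instead get the _geoip_lookup_failure tag since RFC 1918 ranges aren't in the GeoLite database.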
Then there's something else in your configuration that removes the geoip field. Start by commenting out your elasticsearch output and use a simple stdout { codec => rubydebug } output. Does that make a difference?
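For example, a temporary output section for debugging might look like this, with your existing elasticsearch settings going back in once you're done:

    output {
      # elasticsearch { ... }   # disabled while debugging
      stdout { codec => rubydebug }
    }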
Clearly your Logstash is capable of looking up the IP address in question so it shouldn't be hard to narrow down the problem.
Is it possible to create my own file of internal IP addresses for the geoip filter to use?
The geoip filter supports custom GeoIP databases, so I suppose you should be able to create your own such database with your internal addresses. I don't know the details.
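The filter's database option points it at an alternative database file, so a sketch might look like the following; the path is hypothetical, and building a database covering your internal addresses is left to you:

    filter {
      geoip {
        source => "clientip"
        # Hypothetical path to a self-built database for your
        # 10.0.0.0/8 addresses; it must be in a format the filter's
        # MaxMind reader understands.
        database => "/etc/logstash/internal-geoip.dat"
      }
    }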