So I'm trying out ELK to see if it can compare to an existing product we're using internally, and I've hit a problem I can't get around.
I have a bunch of IP addresses that I pulled from a CSV file and stripped down until it's just about 50 or so IP addresses, one per line, in an otherwise empty .txt file.
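For reference, the file just looks like this, one address per line (these are made-up documentation addresses, not the real ones):

203.0.113.10
198.51.100.24
192.0.2.7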
I want Logstash to process this file and do a reverse lookup on each IP address to get an FQDN if possible. Then a filter should take the IP and do a geo lookup on it so I can make a nice fancy dashboard out of it.
Finally, it should push everything into the index I specify, as sketched below.
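To show the shape I'm after, a single indexed document would ideally look something like this (the field names come from my config below; the hostname and geo values are invented):

{
  "source_ip": "203.0.113.10",
  "field_with_fdqn": "fw-edge-01.example.com",
  "Source_Ip_geo": {
    "country_name": "Netherlands",
    "location": { "lat": 52.37, "lon": 4.9 }
  }
}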
Problem is... it's not doing any of this.
My .conf is:
input {
  file {
    mode => "tail"
    path => "/opt/Data/fw_syslog_derived.txt"
    id => "ip_from_ext_firewall"
  }
}

filter {
  dns {
    reverse => ["source_ip"]
    resolve => ["field_with_fdqn"]
    action => "replace"
  }
  geoip {
    source => "Source_Ip"
    target => "Source_Ip_geo"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "ipsfromextfirewall-%{+YYYY.MM.dd}"
  }
}
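Side note for anyone wanting to poke at this: the config file can be syntax-checked with bin/logstash --config.test_and_exit -f <path-to-conf>, and a quick way to see what Logstash actually emits would be to temporarily swap the elasticsearch output for a plain stdout one, e.g.:

output {
  stdout { codec => rubydebug }
}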
and my Logstash log output is:
[2019-09-06T16:40:51,985][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-09-06T16:40:54,121][INFO ][org.reflections.Reflections] Reflections took 61 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-06T16:40:55,502][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-09-06T16:40:55,757][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-09-06T16:40:55,822][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-06T16:40:55,826][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-06T16:40:55,865][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-09-06T16:40:55,901][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-09-06T16:40:55,992][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-06T16:40:56,088][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-06T16:40:56,092][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x33edcb9b run>"}
[2019-09-06T16:40:56,202][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-06T16:40:56,813][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_cfa455b0826a830f3142e5198003c38c", :path=>["/opt/Data/fw_syslog_derived.txt"]}
[2019-09-06T16:40:56,878][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-06T16:40:57,052][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-06T16:40:57,093][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-06T16:40:57,934][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
So when I curl -XGET 'localhost:9200/_cat/indices?v' I only ever see three indices: two created by the system and a filebeat-7.3.0 one.
So far I have only ever had an ipsfromextfirewall- index created once, earlier today, and I can't get it to create again. The only thing I now see on the Kibana Discover page is basic logging from the system.
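For what it's worth, the check I'd expect to show documents once this works is just a count against the index pattern, i.e.:

curl -XGET 'localhost:9200/ipsfromextfirewall-*/_count?pretty'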
I've been at this all day and I'm now at the end of my rope, so I'm turning to the community to see if anyone can spot what I'm doing wrong here... I know I must be doing something wrong, but I'm too close to the problem to see it.
Oh, and it's running on an Ubuntu 18.04 server with plenty of juice in the tank.
Thanks