Problem with logstash DNS resolver plugin

So I'm trying out ELK to see how it compares to an existing product we use internally, and I've hit a problem I can't get around.

I have a bunch of IP addresses that I've pulled from a CSV file, stripping out everything else until it's just about 50 or so IP addresses, one per line in an otherwise empty .txt file.

I want Logstash to process this and reverse-lookup each IP address to get an FQDN if possible. Then a filter should take the IP and do a geo lookup on it so I can make a nice fancy dashboard out of it.
Finally it should push everything into the index I specify.

Problem is... it's not doing any of this.

My .conf is:

    input {
      file {
        mode => "tail"
        path => "/opt/Data/fw_syslog_derived.txt"
        id => "ip_from_ext_firewall"
      }
    }
    filter {
      dns {
        reverse => ["source_ip"]
        resolve => ["field_with_fdqn"]
        action  => "replace"
      }
      geoip {
        source => "Source_Ip"
        target => "Source_Ip_geo"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "ipsfromextfirewall-%{+YYYY.MM.dd}"
      }
    }

And my Logstash log output is:

[2019-09-06T16:40:51,985][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}

[2019-09-06T16:40:54,121][INFO ][org.reflections.Reflections] Reflections took 61 ms to scan 1 urls, producing 19 keys and 39 values 

[2019-09-06T16:40:55,502][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}

[2019-09-06T16:40:55,757][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}

[2019-09-06T16:40:55,822][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}

[2019-09-06T16:40:55,826][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}

[2019-09-06T16:40:55,865][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}

[2019-09-06T16:40:55,901][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-09-06T16:40:55,992][INFO ][logstash.outputs.elasticsearch] Using default mapping template

[2019-09-06T16:40:56,088][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.

[2019-09-06T16:40:56,092][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x33edcb9b run>"}

[2019-09-06T16:40:56,202][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}

[2019-09-06T16:40:56,813][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_cfa455b0826a830f3142e5198003c38c", :path=>["/opt/Data/fw_syslog_derived.txt"]}

[2019-09-06T16:40:56,878][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}

[2019-09-06T16:40:57,052][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

[2019-09-06T16:40:57,093][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-06T16:40:57,934][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

So when I curl -XGET 'localhost:9200/_cat/indices?v' I only ever see three indices... two created by the system and a filebeat-7.3.0.

So far I have only ever had an ipsfromextfirewall- index created once, earlier today, but I cannot get it to create again. The only thing I now see in the Kibana Discover page is just basic logging from the system.

I've been at this all day and I'm now at the end of my rope, so I'm turning to the community to see if anyone can spot what I'm doing wrong here... I know I must be, but I'm too close to the problem to see it.

Oh, and it's running on an Ubuntu 18.04 server with plenty of juice in the tank.

Thanks

What does a line of your input file look like?

Just a list of IP addresses, like this:

194.45.45.45
195.45.45.45
196.45.45.45

These are just made-up ones, but the source is gathered as plain text derived from a syslog file, for testing purposes before I let it loose on a full syslog file and filter out what I need.

Nothing creates either the source_ip or Source_Ip fields, so both of these should be replaced by [message].
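
For example, the filter block with that substitution might look something like this (an untested sketch; the comments about what happens to [message] after a successful lookup are my own reading of how action => "replace" would interact with geoip here, not something from the docs):

    filter {
      dns {
        # reverse-lookup the raw IP that the file input puts in [message];
        # with action => "replace", a successful lookup overwrites the field
        # with the FQDN
        reverse => ["message"]
        action  => "replace"
      }
      geoip {
        # geo lookup on the same field; if the dns lookup above succeeded,
        # [message] now holds a hostname rather than an IP, so copying the
        # IP to a second field first is another option
        source => "message"
        target => "Source_Ip_geo"
      }
    }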

So what you're saying, for example, is that reverse => ["source_ip"] needs to be reverse => ["message"]??

I'm a little confused then, as https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html doesn't mention using this... unless I'm reading it the wrong way?

Yes.

OK, tried that, still not getting anything....
My log output is thus:

[2019-09-09T13:43:47,163][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-09-09T13:43:48,836][INFO ][org.reflections.Reflections] Reflections took 49 ms to scan 1 urls, producing 19 keys and 39 values 
[2019-09-09T13:43:49,884][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-09-09T13:43:50,131][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-09-09T13:43:50,176][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-09T13:43:50,179][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-09T13:43:50,207][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-09-09T13:43:50,430][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-09-09T13:43:50,437][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x547b8bb6 run>"}
[2019-09-09T13:43:50,958][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-09T13:43:51,056][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-09T13:43:51,067][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-09T13:43:51,656][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

It looks like it's trying to do something, it looks like it's started, but I'm not seeing anything in Kibana: no new indices created, nothing.

The file input will be waiting for lines to be appended to the file.
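
If you want it to pick up the contents the file already has, rather than wait for new lines to arrive, one option looks roughly like this (a sketch only, not tested against your setup):

    input {
      file {
        path => "/opt/Data/fw_syslog_derived.txt"
        # read the whole existing file once instead of tailing it for new lines
        mode => "read"
        # don't persist the read position, so every run re-reads the file
        sincedb_path => "/dev/null"
        # alternatively, stay in tail mode and add:
        # start_position => "beginning"
      }
    }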

But I set the sincedb to null and tell the input to start at the beginning. I just ran it again from the CLI with logstash -f pathtoconf/ip_dns.conf and it did exactly the same....

The pipeline has started and it's got two workers assigned to it, it's just not able to read the file.

I'm going to add to the file and see if that does anything... maybe it's waiting for extra input.

OK, so I removed the file containing the IP addresses, recreated it, and put the IPs back in there... now I see it's doing something and throwing errors about not resolving, and then it removes the data file!!! OK, not a biggie, I can put it back in there... as I'm not using sincedb it's kind of expected.
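
(Looking at the file input docs, I suspect the deletion is actually the read-mode default of file_completed_action => "delete" kicking in. If I've understood it right, something like this should keep the file; the .done.log path below is just an example I made up:)

    input {
      file {
        path => "/opt/Data/fw_syslog_derived.txt"
        mode => "read"
        sincedb_path => "/dev/null"
        # in read mode the default file_completed_action is "delete",
        # which is what removes the data file; log completions instead
        file_completed_action => "log"
        file_completed_log_path => "/opt/Data/fw_syslog_derived.done.log"
      }
    }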

OK, got it working... so in the end it was something to do with the file that had the IP addresses in it.

But now it's only reading one line, adding it to the index, and that's it.

I set the start position at the beginning and it reads the first line; I set it at the end and it reads the last line. But I can't get it to read all the lines, which is what I want it to do.

Maybe the wrong input? It's set to file, which I assumed would be the correct one.
