Hi
First of all, I apologise for bugging the forum, but I really want this to work. I have set up the stack using this link: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-16-04
and it works perfectly. They also have another tutorial for an Apache and Nginx geoip filter, but in my case I am collecting logs from my DNS server instead of a web server.
I already have the client IP, server IP, and so on in Kibana, but what I am still struggling with is converting those IP addresses to country codes and the like. I would really appreciate it if anyone has some kind of working example. I hope this makes sense, and thanks in advance.
Please show an example event. Have you tried using the geoip filter?
No Magnus ....
Here is what I have in my filter:
geoip {
  source => "ip"
  target => "geoip"
  database => "/etc/logstash/GeoLiteCity.dat"
  add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
  add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
  convert => [ "[geoip][coordinates]", "float"]
}
but nothing happens. Logstash operates normally, but the coordinates are not shown. I am not sure whether this is correct or not.
No Compatible Fields: The "packetbeat-*" index pattern does not contain any of the following field types: geo_point
This is the error I get when I try to create a tile map.
Again, please show an example event. Either use a stdout { codec => rubydebug } output or copy/paste a JSON snippet from the JSON tab in Kibana.
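For reference, a temporary debugging output could look something like this, added alongside your existing elasticsearch output while you troubleshoot:

output {
  # Print every event to the Logstash console in a readable form.
  stdout { codec => rubydebug }
}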
And you've configured Logstash to store the events in packetbeat-* indexes? What do the mappings for such an index look like? Use Elasticsearch's get mapping API to find out.
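For example, something like this from a shell (assuming Elasticsearch is listening on localhost:9200, as in the tutorial setup):

$ curl -XGET 'http://localhost:9200/packetbeat-*/_mapping?pretty'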
transport: udp
method: QUERY
server:
client_ip: 202.134.26.36
client_port: 40,251
client_proc:
status: OK
bytes_out: 184
responsetime: 0
query: class IN, type A, www.google.com
count: 1
ip: 202.134.24.110
@timestamp: February 8th 2017, 20:26:19.184
type: dns
direction: in
bytes_in: 32
beat.hostname: ns2.kalianet.to
beat.name: ns2.kalianet.to
port: 53
dns.additionals: { "class": "IN", "data": "216.239.32.10", "name": "ns1.google.com", "ttl": 121534, "type": "A" }, { "class": "IN", "data": "216.239.34.10", "name": "ns2.google.com", "ttl": 121534, "type": "A" }, { "class": "IN", "data": "216.239.36.10", "name": "ns3.google.com", "ttl": 121534, "type": "A" }, { "class": "IN", "data": "216.239.38.10", "name": "ns4.google.com", "ttl": 121534, "type": "A" }
dns.additionals_count: 4
dns.answers: { "class": "IN", "data": "172.217.25.36", "name": "www.google.com", "ttl": 268, "type": "A" }
dns.answers_count: 1
dns.authorities: { "class": "IN", "data": "ns3.google.com", "name": "google.com", "ttl": 118299, "type": "NS" }, { "class": "IN", "data": "ns4.google.com", "name": "google.com", "ttl": 118299, "type": "NS" }, { "class": "IN", "data": "ns1.google.com", "name": "google.com", "ttl": 118299, "type": "NS" }, { "class": "IN", "data": "ns2.google.com", "name": "google.com", "ttl": 118299, "type": "NS" }
dns.authorities_count: 4
dns.flags.authoritative: false
dns.flags.recursion_allowed: true
dns.flags.recursion_desired: true
dns.flags.truncated_response: false
dns.id: 29,482
dns.op_code: QUERY
dns.question.class: IN
dns.question.name: www.google.com
dns.question.type: A
dns.response_code: NOERROR
resource: www.google.com
client_server:
proc:
@version: 1
host: ns2.kalianet.to
tags: beats_input_raw_event
_id: AVocpq2K1VZH1ycwIW15
_type: dns
_index: packetbeat-2017.02.08
_score:
Is this what you meant?
No, but I think it's good enough in this case. It looks like the geoip filter isn't able to look up 202.134.24.110. Is there anything about this in the logs? What happens if you use the default geoip database (i.e. comment out the database option)? With the default database I'm certainly able to look up the address:
$ cat test.config
input { stdin { codec => plain } }
output { stdout { codec => rubydebug } }
filter {
  geoip {
    source => "message"
  }
}
$ echo 202.134.24.110 | /opt/logstash/bin/logstash -f test.config
Settings: Default pipeline workers: 8
Pipeline main started
{
        "message" => "202.134.24.110",
       "@version" => "1",
     "@timestamp" => "2017-02-08T08:36:23.215Z",
           "host" => "lnxolofon",
          "geoip" => {
                     "ip" => "202.134.24.110",
          "country_code2" => "TO",
          "country_code3" => "TON",
           "country_name" => "Tonga",
         "continent_code" => "OC",
               "latitude" => -20.0,
              "longitude" => -175.0,
               "timezone" => "Pacific/Tongatapu",
               "location" => [
            [0] -175.0,
            [1] -20.0
        ]
    }
}
Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
Sorry Magnus, but I think I am not following you. I have removed the database option. Here is how it looks now in my filter:
geoip {
  source => "ip"
  target => "geoip"
  database => "/etc/logstash/GeoLiteCity.dat"
  add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
  add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
  convert => [ "[geoip][coordinates]", "float"]
}
Is this what you suggested? Should I add any extra code?
You should look in your Logstash log to see if there are clues about why the geoip filter is failing. Please also post all of your configuration. Format it as preformatted text using the </> toolbar button.
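A quick way to check the log from a shell; the path below assumes a standard package install of Logstash on Ubuntu, so adjust it if your install puts logs elsewhere:

$ tail -n 50 /var/log/logstash/logstash.log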
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    geoip {
      source => "ip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
That is my whole Logstash config.
I asked you to "format it as preformatted text using the </> toolbar button", yet you didn't. Why?
The problem is that you're only applying the geoip filter to events with the syslog type, but your Packetbeat events have another type.
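A minimal sketch of the kind of change I mean, assuming your Packetbeat events arrive with type dns and carry the address in the ip field (both of which are visible in the event you posted):

filter {
  # Apply geoip to the Packetbeat DNS events rather than only to syslog.
  if [type] == "dns" {
    geoip {
      source => "ip"
      target => "geoip"
    }
  }
}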
Sorry Magnus, but maybe I am not following you. Apologies for my misunderstanding. Could you please let me know how to format it the way you mentioned?
Don't you have a toolbar just above the text area where you're typing your text?
I don't think so. I am using an older version, I guess, as I could not install the latest version. I think I am using Kibana version 4.
I am talking about https://discuss.elastic.co where you just typed a couple of sentences.
Sorry about the toolbar, I did not realise you meant my copy and paste here.
Anyway, is there any way I can find out which type my Packetbeat events have?
Just look at the event's type field in Kibana.
Sorry about that, now I can see it. As for the Packetbeat event type:
it says dns.