Logstash 2.0 HTTP Stability Issues


(Kevin Ross) #1

Hi,

I updated to Logstash 2.0, Elasticsearch 2.0 & Kibana 4.2 yesterday and I am having issues with the stability of Logstash, but first just a quick note about my environment:

  • I am running CentOS 7. I have been running ELK on this system since May without issue; it is just a single-node "cluster" which I use to feed in some of my Bro logs and things. SELinux and the like are disabled, and my hardware and performance were completely fine for my logging rate before.
  • I am using the same ES configuration as I used in the 1.7.x version tree. It is bound to the loopback interface and Logstash sends logs to it internally. On the outside (for access to things like bigdesk/kopf) Apache is wrapped around it with an HTTPS password-protected interface for the named virtualhost.
  • I am using the same base Logstash configuration as before. There is a lot more to it, but this is the gist of the kind of things in it. I don't get any errors in my config and Logstash runs (this is just an example and I know it would never actually work exactly like this, but it shows the general flow of what I have):

input {
  syslog {
    port => 5514
    type => "bro_http"
  }
}
filter {
  if [type] == "bro_http" {
    grok {
      match => { "message" => "%{BRO_HTTP}" }
      add_tag => ["geoip_dst"]
    }
  }
  if ("geoip_dst" in [tags]) {
    geoip {
      source => "dst_ip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  if [type] != "asa" {
    elasticsearch {
      hosts => "localhost:9200"
      workers => 3
    }
  }
  if [type] == "asa" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-asa-%{+YYYY.MM.dd}"
      workers => 2
    }
  }
}

However, it runs for about an hour, then starts producing these kinds of errors and stops logging into the database:

Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}},

As an alternative I wanted to try a node configuration so I can see Logstash join as an actual node and see if that helps, but I could never get it to join with this kind of config (I have installed the elasticsearch_java output plugin for this):

if [type] != "cisco_asa" {
  elasticsearch_java {
    network_host => "127.0.0.1"
    protocol => "node"
    cluster => "cluster_name"
    workers => 3
  }
}

Listeners (an Nmap scan of the loopback shows 9200 listening and not 9300, although I am not sure if this is normal):

  • # lsof -n -i4TCP:9300 | grep LISTEN
    java 28959 elasticsearch 353u IPv6 228409124 0t0 TCP 127.0.0.1:vrace (LISTEN)

However, as I said, it never connects as a node, and the only way I get it logging is using HTTP, even though that is unstable for me and generates errors. I am not sure how to determine what is causing this: if it wasn't connecting or sending in logs at all I could perhaps debug it, but I am not sure what to make of the "working for an hour and then dies" behaviour. Does anyone have any ideas on how I can resolve this or what I may need to do? Thanks for any help.


(Kevin Ross) #2

Sorry, the input is actually this (I had it in another config when I was messing around initially trying to figure out what was wrong):

input {
  udp {
    port => 5514
    type => "bro_http"
  }
}


(Kevin Ross) #3

Just a quick update. After searching on the error I found there may be issues with the geoip plugin (from my original config). Now that I have commented out the geoip enrichment it seems to have not crashed after the usual 5 minutes to 1 hour, so fingers crossed. Is there something different that should be done now?

Example of what I had (commented out now):
#filter {
# if ("geoip_dst" in [tags]) {
# geoip {
# source => "dst_ip"
# target => "geoip"
# database => "/etc/logstash/GeoLiteCity.dat"
# add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
# add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
# }
# mutate {
# convert => [ "[geoip][coordinates]", "float"]
# }
# }
# }


(Aaron Mildenstein) #4

For starters, the GeoIP filter has had a "location" field for some time now. Your workaround to put lon/lat coordinates in is superfluous, including the "float" conversion.

See the docs in the code and the code that adds location for reference.

Aside from this, I can't see a reason why GeoIP should be causing you grief. You shouldn't need the "target =>" option, as that is already the default target, too. Have you looked to see if there's a mapping error in your Elasticsearch log file?
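
In other words, something like this should be enough (a sketch based on the field names in the posts above; as I understand it, the filter's built-in `location` field is already a `[lon, lat]` array suited to a `geo_point` mapping):

```
filter {
  if ("geoip_dst" in [tags]) {
    # No target or coordinates workaround needed; the default
    # target is "geoip" and the filter adds [geoip][location]
    geoip { source => "dst_ip" }
  }
}
```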


(Erik Stephens) #5

I can confirm that theuntergeek's advice to check your mapping is sound - it happened to me too. You should be able to see more specifics in the Elasticsearch logs.


(Kevin Ross) #6

Great, thanks. Does anyone have an example of a geoip filter which works fine with Elasticsearch 2.0? Obviously there was something in my original one that it did not like, if it kept falling over after a few minutes or sometimes hours of running.

Thanks.


(Erik Stephens) #7

Did you double-check your mapping? That was my issue. You should see details in the Elasticsearch logs about why it was unable to index.


(Aaron Mildenstein) #8

I'm using GeoIP in my Logstash 2.0 and Elasticsearch 2.0 setup (with the template that is currently in master at the logstash-output-elasticsearch repository). It works fine with just this in my config:

geoip { source => "clientip" }
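
For reference, the part of that template which makes this work is the mapping of `geoip.location` as a `geo_point`, roughly like the fragment below (a sketch; check the actual `elasticsearch-template.json` in that repository for the exact contents):

```
"geoip" : {
  "dynamic" : true,
  "properties" : {
    "ip" : { "type" : "ip" },
    "location" : { "type" : "geo_point" }
  }
}
```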

(Kevin Ross) #9

Thanks. I am still a bit lost actually (sorry, I am relatively new and hadn't had to touch this since I followed the original guide: https://www.digitalocean.com/community/tutorials/how-to-map-user-location-with-geoip-and-elk-elasticsearch-logstash-and-kibana). So here are the geoip filters I am working from (I change them up with the if statements and the source depending on the log, but aside from that they are the same):

filter {
  if ("geoip_dst" in [tags]) {
    geoip {
      source => "dst_ip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

Would it just be as simple as this?

filter {
  if ("geoip_dst" in [tags]) {
    geoip { source => "dst_ip" }
  }
}

I have set it to this now to try it, although I am wondering if it is something to do with the field names, for instance "geoip.country_name", as I had issues on other logs with "." characters being indexed.

Thanks for the help with this; I use geoip enrichment a lot given this is for some of my security logs, so I make use of it in queries, visualisations and general hunting.


(Kevin Ross) #10

Sorry, looking over this I think the Elasticsearch logs may have better info, so I am running again until it fails and will post the logs (if any).


(Kevin Ross) #11

Hi,

There doesn't appear to be anything useful in the Elasticsearch logs to indicate an error; however, this seems very similar to this:

I am just running a single host with an Apache reverse proxy to the port, although everything is internal and this only happens when the GeoIP plugin is configured; if I comment out any geoip configuration it runs fine.


(Kevin Ross) #12

Sorry for the flurry of posts. I have found others experiencing this issue: https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/273. I would just disable geoip, but given I use this for security data enrichment it is very useful, so I am still looking for what actually causes this and how to get around it, as I imagine others are using the geoip plugin completely fine.


(Aaron Mildenstein) #13

Yes, indeed.


(Aaron Mildenstein) #14

The current release does not use dot notation, but nested fields. Kibana 4.2 displays nested documents with dot notation, but the actual document is nested.

This is an example get from my Logstash 2.0/Elasticsearch 2.0/Kibana 4.2 setup:

curl -XGET blackbox:9200/logstash-2015.11.09/nginx_json/AVDtIdb6BwrQoCMje3mf?pretty
{
  "_index" : "logstash-2015.11.09",
  "_type" : "nginx_json",
  "_id" : "AVDtIdb6BwrQoCMje3mf",
  "_version" : 1,
  "found" : true,
  "_source":{"@timestamp":"2015-11-09T16:42:28.000Z","@version":"1","clientip":"207.46.13.148","bytes":6424,"duration":0.076,"status":200,"request":"/?cat=18","method":"GET","useragent":"Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)","file":"/var/log/nginx/REDACTED.access.log","host":"REDACTED.com","offset":"1651048","type":"nginx_json","geoip":{"ip":"REDACTED.148","country_code2":"US","country_code3":"USA","country_name":"United States","continent_code":"NA","region_name":"CA","city_name":"Beverly Hills","postal_code":"90210","latitude":34.099500000000006,"longitude":-118.4143,"dma_code":803,"area_code":310,"timezone":"America/Los_Angeles","real_region_name":"California","location":[-118.4143,34.099500000000006]},"name":"bingbot","os":"Other","os_name":"Other","device":"Spider","major":"2","minor":"0"}
}
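
Queries still address that nested field with dot notation; for example, a query DSL fragment like this (a sketch against the index above) would match only documents that actually received a geoip location:

```
{
  "query" : {
    "exists" : { "field" : "geoip.location" }
  }
}
```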

(system) #15