Hi,
I updated to Logstash 2.0, Elasticsearch 2.0 & Kibana 4.2 yesterday and I am having issues with the stability of Logstash. First, a quick note about my environment:
- I am running CentOS 7. I have been running ELK on this system since May without issue, and it is just a single-node "cluster" which I use to feed in some of my Bro logs and similar. SELinux is disabled, and my hardware and performance were completely fine for my previous logging rate.
- I am using the same ES configuration as I used on the 1.7.x tree. It is bound to the loopback interface and Logstash sends logs to it internally. On the outside (for access to things like bigdesk/kopf), Apache is wrapped around it with an HTTPS password-protected interface on a named virtualhost.
- I am using the same base Logstash configuration as before. There is a lot more to it, but this is the gist of the kinds of things in it. I don't get any errors in my config and Logstash runs (this is just an example; I know it would never actually work exactly like this, it just shows the general flow):
input {
  syslog {
    port => 5514
    type => "bro_http"
  }
}
filter {
  if [type] == "bro_http" {
    grok {
      match => { "message" => "%{BRO_HTTP}" }
      add_tag => ["geoip_dst"]
    }
  }
  if "geoip_dst" in [tags] {
    geoip {
      source => "dst_ip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  if [type] != "asa" {
    elasticsearch {
      hosts => "localhost:9200"
      workers => 3
    }
  }
  if [type] == "asa" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-asa-%{+YYYY.MM.dd}"
      workers => 2
    }
  }
}
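One thing I noticed while upgrading, in case it matters when comparing against 1.x configs: as far as I understand, in Logstash 2.0 the elasticsearch output speaks HTTP only (the old protocol option is gone, and the node/transport protocols moved to the separate logstash-output-elasticsearch_java plugin), and hosts prefers an array, so the plain output reduces to:

```conf
# Logstash 2.0 elasticsearch output: HTTP-only, `hosts` takes an array
elasticsearch {
  hosts => ["localhost:9200"]
}
```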
However, it runs for about an hour, then starts producing these kinds of errors and stops logging into the database:
Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}},
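The :socket_timeout=>0, :request_timeout=>0 part of that message makes me wonder whether timeouts play a role. A sketch of what I was thinking of testing, assuming the 2.x plugin actually supports a timeout option (I have not verified that it does; flush_size I know exists):

```conf
elasticsearch {
  hosts => ["localhost:9200"]
  timeout => 60      # assumed option: per-request timeout in seconds
  flush_size => 500  # smaller bulk batches, in case large bulk requests stall
}
```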
As an alternative, I wanted to try a node configuration so I can see Logstash join as an actual node and see if that helps, but I could never get it to join with this kind of config (I have installed the logstash-output-elasticsearch_java plugin for this):
if [type] != "cisco_asa" {
  elasticsearch_java {
    network_host => "127.0.0.1"
    protocol => "node"
    cluster => "cluster_name"
    workers => 3
  }
}
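My understanding is that for the node protocol, Logstash has to be able to reach the transport port (9300, not 9200) and the cluster name has to match exactly. The relevant elasticsearch.yml settings would look roughly like this (values are from my setup; cluster_name is a placeholder):

```yaml
cluster.name: cluster_name   # must match the `cluster` option in the logstash output
network.host: 127.0.0.1      # binds both HTTP (9200) and transport (9300) to loopback
transport.tcp.port: 9300     # default transport port used by the node/transport protocols
```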
- Listeners: an nmap scan of the loopback shows 9200 listening but not 9300, although I am not sure if that is normal; lsof does show Elasticsearch listening on 9300 (vrace is just the /etc/services name for port 9300):
# lsof -n -i4TCP:9300 | grep LISTEN
java 28959 elasticsearch 353u IPv6 228409124 0t0 TCP 127.0.0.1:vrace (LISTEN)
However, as I said, it never connects as a node, and the only way I get it logging is over HTTP, even though that is unstable for me and generates the errors above. I am not sure how to determine what is causing this: if it wasn't connecting or sending in logs at all I could perhaps debug it, but I don't know what to make of "works for an hour and then dies". Does anyone have any ideas on how I can resolve this, or what I may need to do? Thanks for any help.