Hello,
I've downloaded the binary tar of logstash 1.5.0 and I'm trying to get it working with a 3 node elasticsearch cluster running on ES 1.5.1. I have kibana 3 running under nginx.
But when I load the logstash dashboard in my web browser I get the following error:
No results: There were no results because no indices were found that match your selected time span
I have this in my logstash config file:
input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
  }
}

output {
  elasticsearch {
    host => "3.3.86.252"
    embedded => false
    cluster => "optl_elasticsearch"
  }
  stdout { codec => rubydebug }
}
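To check whether events are getting into the pipeline at all, my plan is to hand-feed a test message to port 5000. This is just a sketch: the RFC5424 field values (testhost, testapp, etc.) are made up, and nc flags vary by distro (-u targets the udp input; some versions need -q1 to exit after sending):

echo '<34>1 2015-05-25T20:30:00Z testhost testapp 1234 ID47 - hello from nc' | nc localhost 5000
echo '<34>1 2015-05-25T20:30:00Z testhost testapp 1234 ID47 - hello from nc' | nc -u localhost 5000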
I enabled stdout { codec => rubydebug } and then started up logstash. This is what I saw in the output:
May 25, 2015 8:26:50 PM org.elasticsearch.node.internal.InternalNode
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] version[1.5.1], pid[638], build[5e38401/2015-04-09T13:41:35Z]
May 25, 2015 8:26:50 PM org.elasticsearch.node.internal.InternalNode
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] initializing ...
May 25, 2015 8:26:50 PM org.elasticsearch.plugins.PluginsService
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] loaded [], sites []
May 25, 2015 8:26:52 PM org.elasticsearch.node.internal.InternalNode
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] initialized
May 25, 2015 8:26:52 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] starting ...
May 25, 2015 8:26:53 PM org.elasticsearch.transport.TransportService doStart
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/3.3.86.253:9301]}
May 25, 2015 8:26:53 PM org.elasticsearch.discovery.DiscoveryService doStart
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] optl_elasticsearch/6Nk_EgHNSRuVGqCK3-G0cA
May 25, 2015 8:26:56 PM org.elasticsearch.cluster.service.InternalClusterService$UpdateTask run
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] detected_master [NODE_1][cvCVyz8pQbSPIztlIKgB5A][aoadbld00032la.stg-tfayd.com][inet[/3.3.86.252:9300]]{master=true}, added {[NODE_2][Kf8ntDnpRNej-NziHsWxLg][aoadbld00032lb.stg-tfayd.com][inet[/3.3.86.253:9300]],[NODE_3][zkonnfDGTKKntUyDcFpSqA][aoadbld00032lc.stg-tfayd.com][inet[/3.3.86.254:9300]],[NODE_1][cvCVyz8pQbSPIztlIKgB5A][aoadbld00032la.stg-tfayd.com][inet[/3.3.86.252:9300]]{master=true},}, reason: zen-disco-receive(from master [[NODE_1][cvCVyz8pQbSPIztlIKgB5A][aoadbld00032la.stg-tfayd.com][inet[/3.3.86.252:9300]]{master=true}])
May 25, 2015 8:26:56 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-aoadbld00032lb.stg-tfayd.com-638-7952] started
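My understanding is that if events were reaching the inputs, this startup banner would be followed by a rubydebug hash per event, so I'm keeping logstash in the foreground while sending the test messages above (the config path here is just illustrative):

bin/logstash -f /etc/logstash/logstash.conf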
I can see in the startup output above that Logstash is already aware of the elasticsearch cluster we have up and running.
I'm viewing the elasticsearch cluster with the kopf plugin, and there I see the 3 elasticsearch data nodes reporting in; when I start up logstash, an additional node appears, named logstash-aoadbld00032lb.stg-tfayd.com-638-7952.
So I would think it's likely that LS is communicating with ES. However, when I curl the indices on the ES cluster I don't see that any logstash indices have been created:
[root@aoadbld00032lb ~]# curl http://localhost:9200/_aliases?pretty
{
  "login" : {
    "aliases" : { }
  },
  "security" : {
    "aliases" : { }
  }
}
The only indices there were created by a programmer who isn't involved in the logstash work.
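For a quicker overview I've also been listing indices through the _cat API, which as far as I know is available on ES 1.5 (run on the same box, so localhost works):

[root@aoadbld00032lb ~]# curl 'http://localhost:9200/_cat/indices?v'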
The master node of the ES cluster sits on a neighboring machine, not on localhost; I've only ever gotten LS working when ES was on the same machine.
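Two things I figure are worth trying here. First, confirm from the LS box that the master's HTTP port is even reachable:

curl http://3.3.86.252:9200/

Second, as I read the 1.5 docs, the elasticsearch output also supports protocol => "http", which would take node-level cluster discovery out of the equation entirely. A minimal variant of my output block (9200 being the standard ES HTTP port):

output {
  elasticsearch {
    protocol => "http"
    host     => "3.3.86.252"
    port     => 9200
  }
}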
Does anyone have tips on how I can troubleshoot this and get it working? It'll be a beautiful moment to see info from the logs flowing into kibana, assuming that can happen with this setup!