Index not visible in Kibana


(Erik Heskes) #1

Hi there,

I've noticed that my indexes aren't being added to Kibana anymore. I have changed my Logstash conf files in the meantime, but even then I would only expect grok parse failures, not my index to disappear.
This is the config:

input {
  udp {
    port => 5514
    type => syslog
  }

  tcp {
    port => 5514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] index => "logstash-syslog" }
  stdout { codec => rubydebug }
}


(Chris Roberson) #2

Hi @heskez,

Can you verify that the indices exist in Elasticsearch? Do you have appropriate index patterns set up in Kibana that match those Elasticsearch indices? If the data is in Elasticsearch, Kibana should be able to see it; if the data is not in Elasticsearch, the problem is likely on the Logstash side.
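If you're on a 6.x stack, where Kibana keeps its saved objects in the .kibana index, one quick (unofficial) way to list the index patterns Kibana knows about is:

GET .kibana/_search?q=type:index-pattern

The index-pattern.title of each hit must match your Elasticsearch index names (wildcards included).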


(Erik Heskes) #3

Hi @chrisronline, how can I verify that the indices exist in Elasticsearch?


(Chris Roberson) #4

In Kibana, go to the Dev Tools page and run this query:

GET _cat/indices

You should see the indices there if they exist in Elasticsearch.
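Appending ?v adds column headers and s=index sorts the output, which makes longer lists easier to scan:

GET _cat/indices?v&s=index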


(Erik Heskes) #5

Great, thanks. I think they're not there:

green open .monitoring-es-6-2018.04.11 GWH5uCuOQjGjdSkf22ZoUw 1 0 1839 12 1mb 1mb
yellow open syslog-2018.12.22 ULJz3m2jSxS7kFrcJPsQXQ 5 1 1 0 13.9kb 13.9kb
yellow open logstash-2018.12.22 glSsUzStTrOcL3u8U0TxtA 5 1 1 0 12.2kb 12.2kb
green open .watches n0HQZ2pKT4GTvfMHrOeo2g 1 0 6 0 32.9kb 32.9kb
green open .monitoring-alerts-6 whBL9bysR7au2_1bBSMtPQ 1 0 1 0 6.1kb 6.1kb
yellow open logstash-2018.12.23 gJgTS6i5SSeYa8mlK7eO7g 5 1 3 0 36.3kb 36.3kb
green open .security-6 ZXbp_DODSouFZamobe3Wdg 1 0 3 0 9.8kb 9.8kb
green open .kibana FtIuYpWUSV-pZ78VskTTnw 1 0 1 0 3.7kb 3.7kb
yellow open syslog-2018.04.11 OyXhG8wjT9i3kPjGCU24Lw 5 1 5 0 25.4kb 25.4kb
close .watcher-history-7-2018.04.11 01Z34Tk8SoCWrrQj_oFlYA
green open .triggered_watches dm4mpxy_Q4GTwLJPVjQ7ng 1 0 0 0 15.5kb 15.5kb
yellow open syslog-2018.12.23 PtgRZgsQTC-z3BIBYPTPqg 5 1 3 0 28.8kb 28.8kb
yellow open logstash-2018.04.11 yXMKHnulSHOimPlNwsgUXw 5 1 5 0 22.7kb 22.7kb


(Chris Roberson) #6

Ah, yeah. It sounds like an issue elsewhere, then. Verify that the data is coming into Logstash, and then verify that it's properly reaching your Elasticsearch cluster.
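Two quick checks, with example paths and index names you'd need to adjust to your setup: Logstash can validate a pipeline file without starting up,

bin/logstash -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit

and on the Elasticsearch side a count query shows whether any documents reached the target index at all:

GET logstash-syslog/_count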


(Erik Heskes) #7

Yes, how? 🙂

[TCPDUMP shows incoming data on the correct port]
[Logstash/Elasticsearch/Kibana instances are running]
[Output debug logs also look good]
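(For the tcpdump check, something along the lines of tcpdump -i any -n port 5514 shows whether packets are actually arriving on the Logstash port.)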


(Chris Roberson) #8

Try posting in https://discuss.elastic.co/c/logstash as they'll be able to help more.


(Erik Heskes) #9

This one has been solved; it was a local firewall issue.
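For anyone hitting the same thing: on a host running firewalld, opening the Logstash port looks roughly like this (adjust port and protocol to your input config; ufw or iptables setups need the equivalent rules):

sudo firewall-cmd --permanent --add-port=5514/tcp
sudo firewall-cmd --permanent --add-port=5514/udp
sudo firewall-cmd --reload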


(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.