No results found in Kibana

Hi,

I just installed the latest Kibana 4.5, Elasticsearch, and Logstash on an Ubuntu server.

I'm sending logs from syslog to the server, but Kibana still says it can't find any results.

Here is the conf file :slight_smile:

```
input {
  lumberjack {
    port => 514
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost"] index => "logstash-%{+YYYY.MM.dd}" }
  stdout { codec => rubydebug }
}
```

Could you help me troubleshoot this?

How can I first check whether the logs arrive at the server, whether Logstash receives them, and whether it does something with them?

Thx

For starters, you could take Kibana out of the equation and test whether the data is in Elasticsearch. To do this, check whether the logstash- indices you expect are even there in Elasticsearch by running:

```
curl http://localhost:9200/_cat/indices
```

If the indices you expect to see show up, then do a search on them to see if there's the expected data in them:

```
curl http://localhost:9200/logstash-*/_search
```

Apparently the indices I'm waiting for are not there :slight_smile:

```
administrator@ELKibana4:~$ curl http://localhost:9200/_cat/indices
yellow open .kibana 1 1 104 1 101kb 101kb
yellow open blog    5 1   1 0 3.6kb 3.6kb
```

Okay, so then something's not correct on the shipping side as the documents don't even seem to be getting to Elasticsearch. I'm moving this to the Logstash category as that is more appropriate at this point.

Multiple problems:

  • You can't use the lumberjack input to receive data over the syslog protocol. Use a syslog, udp, or tcp input (depending on how the sender sends the data).
  • Unless you run Logstash as root or take other special measures you won't be able to listen on a port below 1024.
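For example, a minimal replacement for the lumberjack input could look like this (just a sketch, assuming your device can be pointed at an unprivileged port like 1514; the syslog input listens on both TCP and UDP):

```
input {
  syslog {
    port => 1514   # unprivileged port, so Logstash doesn't need root
  }
}
```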

Could you explain how I can receive logs on a port below 1024 without running Logstash as root?

I've set this up, but I still don't get any indices either:

```
input {
  tcp {
    port => 1514
    tags => ["IPO"]
    type => "cdr"
  }
}
```

Could you explain how I can receive logs on a port below 1024 without running Logstash as root?

You can use iptables to reroute the port and you should be able to adjust the process's capabilities (but I recall that being problematic with the JVM). I don't have any details as I've never done it myself.
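For reference, the variant I've seen suggested is a NAT redirect from the privileged port to the one Logstash actually listens on, something along these lines (untested sketch; run as root, and adjust ports and protocol to your setup):

```shell
# Redirect inbound syslog traffic from privileged port 514 to Logstash's
# unprivileged listener on 1514 (hypothetical ports; untested sketch).
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514
iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 1514
```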

I've set this up but i didn't have any indice too:

Be systematic. Forget about Kibana and ES for now. Comment out the elasticsearch output and focus on the stdout output. Does that make a difference? Does anything happen if you send stuff to the listening port with telnet or netcat?
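For example, with a tcp input on port 1514 you could send a single test line like this (netcat shown; telnet works the same way) and then watch Logstash's console:

```shell
# Send one test line to the tcp input; if Logstash receives it, the event
# should be printed by the stdout { codec => rubydebug } output.
echo "test event" | nc localhost 1514
```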

I just tried to connect with telnet on port 5514 (the port I've set up in the conf file), but I can't connect.

Do I just have to telnet to the server IP on this port?

Do I just have to telnet to the server IP on this port?

Yes, that should work. Make sure Logstash starts up correctly and that there's no firewall blocking access.
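Two quick things to check on the server itself (assuming port 1514 and a stock Ubuntu setup):

```shell
# Is Logstash actually listening on the port?
sudo netstat -tlnp | grep 1514

# Is a firewall rule blocking it?
sudo iptables -L -n
sudo ufw status
```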

OK, it looks like it's receiving logs now; I've recreated the conf file.

But now that I've added the filter for type "cdr", I don't get any logs any more, even though configtest says OK.

Here is the file:

```
input {
  tcp {
    port => 1514
    type => "cdr"
  }
  udp {
    port => 1514
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "cdr" {
    csv {
      separator => ","
      columns => [
        "Call_Start", "Connected_Time", "Ring_Time", "Caller", "Direction",
        "Called_Number", "Dialed_Number", "Account", "Is_Internal", "Call_ID",
        "Continuation", "Party1device", "Party1Name", "Party2Device", "Party2Name",
        "Hold_Time", "Park_Time", "AuthValid", "AuthCode", "User_Charged",
        "Call_Charge", "Currency", "Amount_at_last_User_Change", "Call_Units",
        "Units_at_Last_User_Change", "Cost_per_Units", "Mark_up",
        "External_Targeting_Cause", "External_targeter_ID", "External_Targeted_Number"
      ]
    }
    ruby {
      code => "event['Duration'] = event['Connected_Time'] ? event['Connected_Time'].split(':').inject(0){|a, m| a = a * 60 + m.to_i} : 0"
    }
    ruby {
      code => "event['Sonnerie'] = event['Ring_Time']"
    }
    mutate {
      convert => { "Sonnerie" => "integer" }
      convert => { "Duration" => "integer" }
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```
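(For reference on the first ruby filter: it folds a Connected_Time of the form HH:MM:SS into a total number of seconds. A standalone Ruby sketch of that inject logic, with a hypothetical helper name:)

```ruby
# Standalone version of the Duration computation in the ruby filter above:
# each ":"-separated part multiplies the running total by 60 and adds itself,
# turning "HH:MM:SS" (or "MM:SS") into seconds; nil becomes 0.
def duration_seconds(connected_time)
  return 0 unless connected_time
  connected_time.split(':').inject(0) { |total, part| total * 60 + part.to_i }
end

puts duration_seconds('00:01:30')  # 90
puts duration_seconds('01:00:00')  # 3600
puts duration_seconds(nil)         # 0
```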

Never mind my previous message, it looks like it works :slight_smile:

I think I just wasn't waiting long enough.

Yesterday I was receiving logs, but today, since 00:12:33, I haven't received any logs from TCP port 1514.

The logs on TCP 1514 are SMDR; the same device can also send logs over UDP on port 1514 :s

I need to be able to receive both.

I haven't changed anything in the config since yesterday.

Any idea why I can receive logs over UDP but not over TCP?

Is there something in Ubuntu that can block traffic from a source if it sends too much data?

Looks like it was an issue with the device :slight_smile:
I changed the port, saved, set the port back, saved, and now I receive the SMDR again...