Unable to see logs in Kibana after 15 minutes

Hi there,
I am new to ELK, and I have set up the ELK stack for the first time on my local machine. After restarting Filebeat on the client I am able to see logs on the Kibana dashboard, but after 15 minutes I am unable to see any logs and get "No results found". I tried very hard to figure out what is happening, but failed.

I also tried recreating the index, but I get the same issue.

I am also looking to expand the time range but don't know how to do that.

Please help me resolve this:

Kibana version: 4.5.4
Elasticsearch version: 2.x
Logstash version: 2.2

In Kibana 4.5 you can set the time range in the upper right corner. On the "Quick" tab you can find the time range "Today". Click on it; perhaps you selected a narrow time range while clicking on a control in a dashboard.

Do you see any messages on the "Discover" page?

Yes.
With Quick -> Today I can find the logs that were previously available in Discover.

But for Quick -> Last 15 minutes I am getting "No results found".

Is Filebeat still running and producing logs? Might there just not be any logs from the past 15 minutes?

If you are 100% sure there are, it might still be an issue with timezones: Filebeat could be delivering the event times in a different timezone than the one your Kibana/browser is set up with.
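
A quick way to check is to compare the client's clock with the newest @timestamp stored in Elasticsearch. This is just a sketch and assumes the default Beats index naming (filebeat-YYYY.MM.DD); adjust the index pattern if yours differs:

# On the client where Filebeat runs: print the current time in UTC
date -u

# On the Elasticsearch host: fetch the single newest event, sorted by @timestamp
curl 'localhost:9200/filebeat-*/_search?size=1&sort=@timestamp:desc&pretty'

If the newest @timestamp lags hours behind "date -u", it is a timezone or date-parsing problem; if it keeps up with the clock, the data is arriving and only the Kibana time range is off.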

Filebeat is in the active (running) state.

Regarding the timezone: the client machine (where Filebeat is running) is set to UTC, and the browser I am accessing Kibana from shows the log timestamps in the local timezone.

Try going to Settings > Advanced in Kibana and switch the dateFormat:tz setting to UTC too.
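
If the UI path looks different in your version, Kibana 4.x keeps its advanced settings in the .kibana index, where the config document id matches the Kibana version. A sketch, assuming version 4.5.4 from your list above (not an official procedure):

# Update the dateFormat:tz advanced setting directly in the .kibana index
curl -XPOST 'localhost:9200/.kibana/config/4.5.4/_update' -d '
{
  "doc": { "dateFormat:tz": "UTC" }
}'

Then reload Kibana in the browser so it picks up the new setting.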


I made the changes but am still unable to see the latest logs.

The UTC timezone setting is reflected in Discover (in the log messages).

Is Logstash running? Have you set any filters that prevent messages from being sent to Kibana, or that send them to another index?

Look into the Logstash logs; perhaps Logstash cannot connect to your ES.
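
For example (a sketch assuming the default package log locations; your paths may differ):

# Check the Logstash logs for errors connecting to Elasticsearch
tail -n 100 /var/log/logstash/logstash.log /var/log/logstash/logstash.err

# Verify from the Logstash host that Elasticsearch is reachable
curl 'localhost:9200/_cluster/health?pretty'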

Logstash is in the running state, and looking into its logs (logstash.log, logstash.err) I found no errors.

I haven't set any filters.

I guess it is not fetching real-time logs unless I restart the Filebeat service on the client node.

But Elasticsearch and Logstash are working fine, and Filebeat too. Still, I am not able to fetch the recent logs.
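
If restarting Filebeat temporarily fixes it, it is worth checking whether Filebeat is still shipping or is stuck on its registry. A rough sketch, assuming the registry path used by the Filebeat 1.x deb/rpm packages (yours may be set differently in filebeat.yml):

# See which file offsets Filebeat thinks it has already shipped
cat /var/lib/filebeat/registry

# Run Filebeat in the foreground with publish debugging to watch events go out
service filebeat stop
filebeat -e -d "publish" -c /etc/filebeat/filebeat.yml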

I do not believe you did, but it could be a potential source of error: did you create a scripted field to calculate something (whose value is not always present)?

I haven't created any scripted fields.

I tried it in another environment as well, but I am facing the same issue.

I followed the document below for installation and configuration.

"https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04"

I see you have a filter in the Logstash config where you look for syslog messages, etc.
I don't know, but perhaps your filter is misconfigured? Check that first.

Do you use your ELK stack in production? If not, you can configure another index for your data, to see whether you are getting data from your system at all.
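
For example, the elasticsearch output could write to a separate test index. The index name logstash-test-* below is just an illustration, not something from your setup:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Write into a dedicated test index so you can see whether new events arrive at all
    index => "logstash-test-%{+YYYY.MM.dd}"
  }
}

Then create a matching index pattern (logstash-test-*) in Kibana under Settings > Indices.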

Thank you for the guidelines.

I updated the Logstash config file with a new filter, and also updated filebeat.yml with the same name.
But I am still facing the same issue.

filter {
  if [type] == "DataLogs" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:DataLogs_timestamp} %{SYSLOGHOST:DataLogs_hostname} %{DATA:DataLogs_program}(?:\[%{POSINT:DataLogs_pid}\])?: %{GREEDYDATA:DataLogs_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "DataLogs_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Can you please tell me how to configure Logstash without applying a filter?

Or is there a default configuration for Logstash?

If you want no filter, you don't need to write anything in the filter section of the Logstash config. You just need the following lines:

Filter section:
filter {
}

and that's it; now you have no filter in your config.

But please bear in mind that the grok match is a pattern that makes your data structured and queryable; without it, your events are sent to ES unparsed.
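
To illustrate with a made-up syslog line (not from your data): without grok, the whole line stays in a single message field, while with the syslog filter it is split into queryable fields:

# Without grok
{ "message": "Jun  1 12:00:00 client1 sshd[123]: Accepted password for user", "type": "syslog" }

# With the grok filter
{ "syslog_timestamp": "Jun  1 12:00:00", "syslog_hostname": "client1", "syslog_program": "sshd", "syslog_pid": "123", "syslog_message": "Accepted password for user" }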

After setting no filter, I am facing the same issue: Kibana is not showing the latest logs.
I think the problem is not with the Logstash filter.

Okay... can you send me the indices you have created/configured in Kibana?
It would also be nice if you could send me your complete Logstash config, so I can see what Logstash is doing with your data.


The image above shows the indices with their fields.

And the three Logstash configuration files are below:

root@ubuntu-xenial:/etc/logstash/conf.d# cat 02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

root@ubuntu-xenial:/etc/logstash/conf.d# cat 10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

root@ubuntu-xenial:/etc/logstash/conf.d# cat 30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Can you run this on the server where ES is running?
curl 'localhost:9200/_cat/indices?v'
or
curl -XGET 'http://localhost:9200/_cat/indices?v'

This will return all indices in ES. Please post the output here.