Logs appear in ES with a delay

I have a setup with Elasticsearch, Logstash, Kibana, and Filebeat, with versions as shown below.

  1. Elasticsearch: 6.2.4
  2. Logstash: 6.2.4
  3. Kibana: 6.2.4

My Logstash pipeline is as below:

sudo vim /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => "5044"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  #If log line contains tab character followed by 'at' then we will tag that entry as stacktrace
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  #Grokking Spring Boot's default log format
  grok {
    match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)" }
  }

  #grok {
  #  match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)" }
  #}

  grok {
    match => { "source" => "/var/log/containers/%{DATA:pod_name}_%{DATA:namespace}_%{GREEDYDATA:container_name}-%{DATA:container_id}.log" }
    remove_field => ["source"]
  }
}

sudo vim /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Elasticsearch, Kibana, and Logstash are running on one VM, whereas the application and Filebeat are running on a different VM.

Currently there is a date/time discrepancy between these two VMs.

The latest log on Kibana Discover, with its timestamp, is as below.
message:
{"log":"TACACS+: No port assigned for host, \"XX.XX.XX.XX\". Using default port 49 instead.\n","stream":"stdout","time":"2018-05-17T00:58:09.401752809Z"}
@timestamp:
May 16th 2018, 17:58:09.408

The latest log at the application, with its timestamp, is as below.
{"log":"TACACS+: No port assigned for host, \"XX.XX.XX.XX\". Using default port 49 instead.\n","stream":"stdout","time":"2018-05-17T06:06:44.365607578Z"}

If you compare the above two logs, it is clear that Kibana is showing the logs with some delay; in this case, a delay of around 5 hours. I also see that the delay keeps increasing. I do see all the logs in Kibana eventually; the issue is the delay.

Could someone help me understand this behavior? Is this because of the time discrepancy between the two VMs? Both are in the PDT time zone. The log volume should be small enough that I don't expect any throttling to kick in.

Please let me know if you need any other details on this.

@Bargs?

Thanks,
Bhavya

Could you query Elasticsearch directly and confirm the latest documents are in ES? That will help us at least narrow down where the problem lies.

Also, which field did you select as your time field when creating the index pattern in Kibana, time or @timestamp? Most likely you'll want time; with Filebeat, @timestamp will by default be the time Filebeat read the line.
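
If you'd rather keep @timestamp as your time field, another option is to overwrite it from the container's own write time. A minimal sketch, not your exact config: it assumes the raw docker JSON line is in message, parses it with a json filter so that time exists as a field, then copies it into @timestamp with a date filter:

filter {
  # Assumption: "message" holds the raw docker JSON line, so parse it first.
  json {
    source => "message"
  }
  # Then use the container's ISO8601 "time" value as the event timestamp.
  date {
    match => [ "time", "ISO8601" ]
    target => "@timestamp"
  }
}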

Sorry for the delay in updating.

When I create the index pattern, it shows me only two options:

  1. @timestamp
  2. I don't want to use the time filter.

I don't see any other field to select. What am I missing here?

Another problem I observe is that Kibana reports no logs for certain time ranges. For example, it doesn't show any logs for 3 to 4 hours and then starts showing them again.

Also, please let me know if you need any more details on this.

Any help here in moving forward on this? I am still stuck on this particular thing.

Hey @Kiran_Kumar, you don't have to choose any fields besides which one is the time filter field.

As previously requested by @Bargs, if you query Elasticsearch directly, are you seeing entries for the documents that aren't showing up in Kibana?

If you’re not seeing time as an option at index pattern creation time, then it’s likely not mapped as a date field in the Elasticsearch index, and you’ll want to fix that.
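
One quick way to check is the field mapping API. A sketch, assuming your indices match filebeat-* (adjust the index pattern to whatever yours actually match):

curl -XGET "localhost:9200/filebeat-*/_mapping/field/time?pretty"

If the response is empty, the time field doesn't exist in the documents at all (for example, because the JSON log line was never parsed into separate fields); if it comes back as text or keyword rather than date, it won't be offered as a time filter field.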

Thanks for the reply, I really appreciate it. Sorry for coming back late; my ELK VM had some issues and it took some time to bring it back up and get the logs.

At a given point of time, I have the below application logs.
==============================================
{"log":"TACACS+: Connected to server at XX.X.XX.Xx:49\n","stream":"stdout","time":"2018-05-23T04:25:49.635647336Z"}
{"log":"2018-05-23 04:25:49.638 ERROR 1 --- [nio-8080-exec-3] c.i.x.p.a.s.s.r.UserRepositoryImpl : User (user) not authenticated.\n","stream":"stdout","time":"2018-05-23T04:25:49.638485876Z"}

At this time the latest log in Kibana is as below.
======================================
@version: 1
message: {"log":"TACACS+: Connected to server at XX.XX.XX.XX:49\n","stream":"stdout","time":"2018-05-22T23:15:05.623242706Z"}
input_type: log
fields: -
host: vm1.com
@timestamp: May 22nd 2018, 16:15:05.952
count: 1
tags: beats_input_codec_plain_applied, _grokparsefailure
offset: 537,440,752
type: log
_id: qT0biWMBnw-iQXGI-JNV
container_name: auth-server
_type: doc
_index: logstash

My query to Elasticsearch is as below
================================
curl -X GET "localhost:9200/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "container_name": "auth-server"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lt": "now/d"
            }
          }
        }
      ]
    }
  }
}'

The output is as below.

First Entry returned
================
{
  "took" : 38,
  "timed_out" : false,
  "_shards" : {
    "total" : 81,
    "successful" : 81,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 517753,
    "max_score" : 5.4782195,
    "hits" : [
      {
        "_index" : "logstash-2018.05.21",
        "_type" : "doc",
        "_id" : "zC3rfmMBnw-iQXGIdxTw",
        "_score" : 5.4782195,
        "_source" : {
          "@version" : "1",
          "message" : "{\"log\":\"TACACS+: No port assigned for host, \\\"XX.XX.XX.XX\\\". Using default port 49 instead.\\n\",\"stream\":\"stdout\",\"time\":\"2018-05-21T00:00:15.128475028Z\"}",
          "input_type" : "log",
          "fields" : null,
          "@timestamp" : "2018-05-21T00:00:15.671Z",
          "count" : 1,
          "tags" : [
            "beats_input_codec_plain_applied",
            "_grokparsefailure"
          ],
          "container_name" : "auth-server",
          "type" : "log",
          "beat" : {
            "name" : "vm1.com",
            "hostname" : "vm1.com"
          }
        }
      },

Last Entry returned
================
      {
        "_index" : "logstash-2018.05.21",
        "_type" : "doc",
        "_id" : "xy3rfmMBnw-iQXGItxWg",
        "_score" : 5.4782195,
        "_source" : {
          "@version" : "1",
          "message" : "{\"log\":\"2018-05-21 00:00:31.486 ERROR 1 --- [nio-8080-exec-5] c.i.x.p.a.s.s.r.UserRepositoryImpl : User (user) not authenticated.\\n\",\"stream\":\"stdout\",\"time\":\"2018-05-21T00:00:31.486797208Z\"}",
          "fields" : null,
          "input_type" : "log",
          "host" : "vm1.com",
          "@timestamp" : "2018-05-21T00:00:31.681Z",
          "count" : 1,
          "tags" : [
            "beats_input_codec_plain_applied",
            "_grokparsefailure"
          ],
          "offset" : 514704883,
          "container_name" : "auth-server",
          "type" : "log",
          "beat" : {
            "name" : "vm1.com",
            "hostname" : "vm1.com"
          }
        }
      }
    ]
  }
}
At this moment, I am not quite sure whether Elasticsearch has the recent logs, as they are not showing up in my query results. However, I suspect I am doing something wrong with the query, so I have included the query I am using above. If the query is fine, then the logs must be missing from Elasticsearch. Either way, if you have any idea what is happening or how this can be debugged further, could you please let me know?

I rebooted the VM, adjusted the time zones, etc., but none of that fixed this behavior.

How do I map this as a date field in the Elasticsearch index? Currently I have the below in my grok filters.

#Parsing out timestamps which are in timestamp field thanks to previous grok section
date {
  match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
}

Is this enough, or am I missing anything?

Try removing the time range query and sorting by time to get the most recent docs in ES.

I don't actually see a timestamp field that your date filter would match, so that seems like a red flag to me. If you re-try the query and find the docs are not in ES, I'll move this over to the Logstash forum where you'll get better help with your config.
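
For example, something along these lines (a sketch; adjust the index pattern to match your indices):

curl -XGET "localhost:9200/logstash-*/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "container_name": "auth-server" } },
  "size": 1,
  "sort": [ { "@timestamp": { "order": "desc" } } ]
}'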

Thanks for helping me out on this. Yes, the documents are not in ES either; the delay is present between the actual logs and the logs stored in ES as well. However, the logs in Elasticsearch were delayed by up to 1 hour, whereas in Kibana they show up much later: almost a 3.5-hour delay between the actual logs and the logs in Kibana.

You are also right that the timestamp field is not present in the ES documents; only @timestamp is present. Could you please point me to the right forum and people for this?

On the same note, I used the below query, and the output of this command is pretty much the same. Does that mean logs are not actually getting stored/updated in ES for a long time?

sudo curl -X GET "localhost:9200/filebeat-2018.05.24/search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "container_name": "auth-server"
    }
  }
}'

Thanks for checking. I moved this thread to the Logstash forum so folks more experienced with Logstash config can take a look.

@Bargs

I moved this back to Kibana. I fixed one error and also got the right command to search for the latest logs in Elasticsearch. I see that Elasticsearch has the latest documents.

I restarted all the components after fixing the error, and Kibana still shows the older documents, i.e. logs with a certain amount of delay. I am still creating the index pattern with @timestamp as the time filter.

Could you please let me know if you need any other details, and how to move forward on this.

To be precise, this is what I meant: both the application and ES logs show the same timestamp, whereas Kibana is lagging behind with some delay. Hope this helps.

1) Latest log from Application
{"log":"2018-05-25 03:40:27.624 ERROR 1 --- [nio-8080-exec-9] c.i.x.p.a.s.s.r.UserRepositoryImpl : User (user) not authenticated.\n","stream":"stdout","time":"2018-05-25T03:40:27.625047623Z"}

2) Latest log from ES
[admin@vm2 etc]$ curl -XGET "localhost:9200/filebeat-2018.05.25/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match_all": {}
  },
  "size": 1,
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ]
}'

{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 84348,
    "max_score" : null,
    "hits" : [
      {
        "_index" : "filebeat-2018.05.25",
        "_type" : "log",
        "_id" : "llRblGMBNkfFqwxIWGN6",
        "_score" : null,
        "_source" : {
          "type" : "log",
          "input_type" : "log",
          "class" : "UserRepositoryImpl",
          "offset" : 23978635,
          "@timestamp" : "2018-05-25T10:40:27.624Z",
          "container_name" : "auth-server",
          "@version" : "1",
          "message" : "{\"log\":\"2018-05-25 03:40:27.624 ERROR 1 --- [nio-8080-exec-9] c.i.x.p.a.s.s.r.UserRepositoryImpl : User (user) not authenticated.\\n\",\"stream\":\"stdout\",\"time\":\"2018-05-25T03:40:27.625047623Z\"}",
          "tags" : [
            "beats_input_codec_plain_applied"
          ],
          "count" : 1,
          "logmessage" : "User (user) not authenticated.\\n\",\"stream\":\"stdout\",\"time\":\"2018-05-25T03:40:27.625047623Z\"}",
          "level" : "ERROR",
          "timestamp" : "2018-05-25 03:40:27.624",
          "pid" : "1",
          "thread" : "nio-8080-exec-9",
          "fields" : null
        },
        "sort" : [
          1527244827624
        ]
      }
    ]
  }
}

3) Latest log from Kibana
{
  "_index": "filebeat-2018.05.24",
  "_type": "log",
  "_id": "eFFWk2MBNkfFqwxI-xQA",
  "_version": 1,
  "_score": null,
  "_source": {
    "fields": null,
    "offset": 21709261,
    "type": "log",
    "count": 1,
    "tags": [
      "beats_input_codec_plain_applied",
      "_grokparsefailure"
    ],
    "@version": "1",
    "input_type": "log",
    "@timestamp": "2018-05-24T22:59:31.090Z",
    "message": "{\"log\":\"2018-05-24 22:59:30.900 ERROR 1 --- [nio-8080-exec-7] c.i.x.p.a.s.s.r.UserRepositoryImpl : User (user) not authenticated.\\n\",\"stream\":\"stdout\",\"time\":\"2018-05-24T22:59:30.902344786Z\"}"
  },
  "fields": {
    "@timestamp": [
      "2018-05-24T22:59:31.090Z"
    ]
  },
  "sort": [
    1527202771090
  ]
}

Sorry for the delay @Kiran_Kumar, I was out on vacation.

If you open up your browser's dev tools when querying in Kibana, you should be able to see an msearch request in the network tab. This contains the actual request Kibana is making to ES. I would suggest copying this query, pasting it into Kibana's Dev Tools app, and playing around with it to see why it's not pulling back the latest docs.
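
For reference, the replayed request will look roughly like this in Dev Tools (a simplified sketch; your actual msearch body will also carry Kibana's own filters and the index pattern you chose):

GET filebeat-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-24h", "lte": "now" } } }
      ]
    }
  },
  "sort": [ { "@timestamp": { "order": "desc" } } ]
}

If widening or removing the range filter suddenly returns the newest docs, the time range (or a time zone conversion) is likely the culprit.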

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.