Logstash parsing two days behind


(Kenneth Mroz) #1

Hello, I'm forwarding logs from a remote server to my Docker container running the ELK stack, and after looking at the logs it seems they are two days behind. Any reason why this is? Is there a way to fix it?
Thanks


(Magnus Bäck) #2

You're not giving us enough details for a useful answer. What's the origin of the logs, on-disk files? Are those logs actually being updated? Is Logstash monitoring the right files? What's in the sincedb files? How about the Logstash and Elasticsearch logs? I'd also make sure that the timestamp of the logs is correct. With an incorrect date filter you could have fresh logs being inserted in a two-day old index.


(Kenneth Mroz) #3

The log file is one that is constantly being updated. Logstash must be pointing at it, since the events are showing up, just a couple of days behind. %{HTTPDATE:date} is the grok pattern I am using in Logstash. There are no grok parse failures. Elasticsearch is connected and running fine.


(Magnus Bäck) #4

I'd compare Logstash's current position according to sincedb against the actual sizes and inode numbers of the files.

%{HTTPDATE:date} is the filter i am using in logstash.

Okay, but what does the date filter look like? That's what determines what ends up in @timestamp.
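For reference, a date filter that parses a field extracted with %{HTTPDATE:date} typically looks roughly like this (field name taken from the pattern above; the match string is the standard Joda-style format for HTTP log timestamps):

```
filter {
  date {
    # parse the "date" field captured by %{HTTPDATE:date} into @timestamp
    match => [ "date", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```

Without a date filter like this, @timestamp defaults to the time the event was processed, not the time in the log line.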


(Kenneth Mroz) #5

@timestamp is today's date. The date within the logs is two days prior. When I go onto the remote server and check the logs, the date matches what is in @timestamp.


(Magnus Bäck) #6

Okay. And what if you compare the sincedb files with the log files being read? Is Logstash behind there too? Is it catching up in any way?
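A minimal way to do that comparison (the log path is taken from this thread; the sincedb location is an assumption and depends on your file input settings):

```shell
# Path from this thread; adjust to your setup.
LOG=/var/log/logfile.log

# inode number (first column) and current size of the file being tailed
ls -li "$LOG" 2>/dev/null

# byte offsets Logstash has recorded, keyed by inode
cat "$HOME"/.sincedb_* 2>/dev/null
```

If the recorded offset is far behind the file size, Logstash is lagging; if the inodes differ, it may be reading a rotated-away file.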


(Kenneth Mroz) #7

Which sincedb files?


(Magnus Bäck) #8

The files where Logstash records the current position in each input file. See the file input documentation for details.
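Each sincedb entry is a plain-text line per tracked file, roughly of the form inode, major device number, minor device number, and byte offset (values below are made up for illustration):

```
# inode  major_dev  minor_dev  byte_offset
271512 0 51713 1708873
```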


(Kenneth Mroz) #9

I'm using the lumberjack input, not file, though.


(Magnus Bäck) #10

But you use the file input on the remote server where you have the HTTP logs, right?


(Kenneth Mroz) #11

```
{
  "network": {
    "servers": [ "IP" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/logfile.log" ],
      "fields": { "type": "logs1" }
    }
  ]
}
```
This is how my forwarder config looks on the remote server. I don't have anything else from the ELK stack running on here.


(Magnus Bäck) #12

Oh, right, you use LSF. Well, it has its own state file named .logstash-forwarder (IIRC) that works more or less the same as sincedb (maybe even the same format).


(Kenneth Mroz) #13

Do you recommend switching to something other than the forwarder?


(Magnus Bäck) #14

I'd debug this for a bit longer, but you should use whatever works for you. LSF is being deprecated in favor of Filebeat. Log Courier (a fork of LSF) is another option.
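For reference, a roughly equivalent Filebeat (1.x-era) configuration for the forwarder config shown above might look like this (the port is a placeholder; paths, fields, and certificate are carried over from the thread):

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/logfile.log
      fields:
        type: logs1
output:
  logstash:
    hosts: ["IP:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```

On the Logstash side this pairs with the beats input rather than the lumberjack input.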

