Messages always take 2.33 minutes since I added Redis

Hello,
I changed my log architecture because I wanted to handle more messages per second: I placed Redis on one server, and another server takes the messages from there.

Now I have a very high latency problem. Every message takes 2.33 minutes from the moment I send it until it appears in Kibana.

What should I do to increase the speed?

I have tried with 2, 4, and 8 workers, but I get the same results.

How are you tracking this?

Every message gets its own timestamp when the user sends it, and in the second Logstash instance I add a second timestamp in another field. The difference between the two is always exactly 2.3333 minutes (140 seconds).
In my previous version (without Redis, with the whole ELK stack on a single machine) the difference was about 10 ms, which was fine, but I used to lose messages when the data rate was high.
I have tried several worker configurations and I get the same results.
Here is a graph of the difference between the two timestamps.
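For reference, a second timestamp like this can be recorded on the indexer with a ruby filter along these lines. This is only a sketch, not necessarily the exact filter used here: it assumes a 1.x/2.x-style event API, and the indexer_received_at / pipeline_lag_seconds field names are just illustrative.

filter {
    ruby {
        # Sketch: record the indexer's local clock and its gap versus the
        # @timestamp assigned upstream, so the lag can be graphed in Kibana.
        code => "event['indexer_received_at'] = Time.now.utc.strftime('%Y-%m-%dT%H:%M:%S.%3NZ'); event['pipeline_lag_seconds'] = Time.now.to_f - event['@timestamp'].to_f"
    }
}

If the clocks of the two machines are not in sync, pipeline_lag_seconds will show a constant offset like this even when the pipeline itself is fast.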

What does your config look like?

It seems odd to have such a consistent lag. Is the server time synchronised between the servers assigning timestamps?


This is the config file on the first machine:

input {
    udp {
        type => "udp"
        port => 30000
    }
    gelf {
        type => "log4j"
        port => 4560
        host => "0.0.0.0"
    }
}

output {
    redis {
        host => "127.0.0.1"
        data_type => "list"
        key => "logstash"
        codec => json
    }
}

And this is the config on the second machine:

input {
    udp {
        type => "udp"
        port => 30000
    }
    redis {
        host => "HOST IP"
        data_type => "list"
        type => "redis"
        key => "logstash"
        codec => json
    }
}

filter {
    if [type] == "udp" {
        mutate {
            rename => ["@host", "host"]
        }
        dns {
            reverse => ["host"]
            action => "replace"
            nameserver => "127.0.0.1"
        }
        grok {
            patterns_dir => "/etc/logstash/patterns"
            match => [
                "message", "%{MESSAGE_1}",
                "message", "%{MESSAGE_2}",
                "message", "%{MESSAGE_3}"
            ]
        }
    }
}


output {
    elasticsearch {
        cluster => "logstash"
        host => "127.0.0.1"
        index => "logstash-syslog-%{+YYYY.MM.dd}"
    }
}

I am using a DNS cache; that is why the nameserver is set to localhost (127.0.0.1).
I am using a pattern file for the grok match:

DATER %{YEAR}[/]%{MONTHNUM}[/]%{MONTHDAY}[ ]%{TIME}
APPNAME [ ()a-zA-Z0-9._:-]+
SOURCEDATA [ ()a-zA-Z0-9._:-]+
LOGLEVEL (INFO|WARNING|ERROR)

MESSAGE_1 (?m)%{DATER:dater} \[%{APPNAME:app_name}\] \[%{LOGLEVEL:log_level}\] \<%{BASE10NUM:error_code}\> -> %{GREEDYDATA:Comment}
 
MESSAGE_2.... 
...
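A related option (not used in the config above): since MESSAGE_1 already extracts the application's own timestamp into dater, a date filter could map it onto @timestamp, so the indexed event time does not depend on either server's clock. A rough sketch, assuming timestamps look like 2016/01/05 12:34:56,123:

filter {
    if [type] == "udp" {
        date {
            # Parse the dater field extracted by grok; the formats here are
            # assumptions and should be adjusted to the real log layout.
            match => ["dater", "yyyy/MM/dd HH:mm:ss,SSS", "yyyy/MM/dd HH:mm:ss"]
            timezone => "UTC"
        }
    }
}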

I don't think it's a synchronization problem; I checked by sending a single message and timing how long it took, and the result was the same.

Is it a time issue, as Christian mentioned?


Yes... the time was not synchronized... and that was the problem.
Thank you so much!
@Christian_Dahlqvist @warkolm You are great!
Have a nice day!