Logstash/Kibana: Filter AWS ELB logs based on the old timestamp

Recently I set up the ELK stack to filter the logs of my load balancer, but the timestamp showing up in Kibana is not the one from the log entries; it is the time at which the file was downloaded to the server. I am not sure what mistake I am making.

We are using AWS and our server logs are stored in S3. Below is a sample line from a log file:

2015-10-30T13:49:31.936560Z <Load-Balancer-Name> <IP.ADDRESS>:<PORT> <SERVER.IP>:<PORT> 0.000713 0.000022 0.000013 - - 20643 439 "- - - " "-" - 

As per the AWS doc Monitor Your Load Balancer Using Elastic Load Balancing Access Logs, the logging format is:

timestamp elb client:port backend:port request_processing_time backend_processing_time response_processing_time elb_status_code backend_status_code received_bytes sent_bytes "request" "user_agent" ssl_cipher ssl_protocol

where timestamp is in ISO 8601 format.

Below is the content of my logstash.conf file:

input {
    file {
        path => "/home/ubuntu/data_dir/**/*.log"
        type => "elb"
        start_position => "beginning"
        sincedb_path => "log_sincedb"
    }
}
filter {
    if [type] == "elb" {
        grok {
            match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}" ]
        }
        date {
            match => [ "timestamp", "ISO8601" ]
        }
    }
}
output {
    elasticsearch { embedded => true }
}

Your configuration looks correct and I'm not able to reproduce what you describe:

$ cat test.config
input { stdin { } }
output { stdout { codec => rubydebug } }
filter {
  date {
    match => ["message", "ISO8601"]
  }
}
$ echo '2015-10-30T13:49:31.936560Z' | /opt/logstash/bin/logstash -f test.config
Logstash startup completed
{
       "message" => "2015-10-30T13:49:31.936560Z",
      "@version" => "1",
    "@timestamp" => "2015-10-30T13:49:31.936Z",
          "host" => "hallonet"
}
Logstash shutdown completed

Have you looked in your logs to see if Logstash is complaining about not being able to parse the timestamp?

(Note that the embedded ES mode is strongly discouraged and is completely removed as of Logstash 2.0.)
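As a minimal sketch of a non-embedded output (assuming a standalone Elasticsearch node reachable on localhost; in Logstash 1.x the option is host, renamed to hosts in 2.x):

```
output {
  # send events to an external Elasticsearch node instead of the embedded one
  elasticsearch {
    host => "localhost"
  }
}
```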

The logs I am processing are historic logs. As I understand it, Logstash and Kibana should not care about the age of the logs and should index events based on the timestamp contained in each log line. But that is not what happens in my case. What could be the reason for this?

If you answer my question about the contents of the logs, maybe we can move forward with debugging this.

Below is a chunk of output from verbose mode. Is it of any help?

output received {:event=>#<LogStash::Event:0x68133151 @cancelled=false, @data={"message"=>"2015-08-10T10:58:11.250293Z <LOAD-Balanacer-Name> <IP>:<PORT> <IP>:<PORT> 0.017633 0.000013 0.000028 - - 266 41 \"- - - \" \"-\" - -", "@version"=>"1", "@timestamp"=>"2015-10-30T20:43:58.107Z", "type"=>"elb", "host"=>"ip-<IP>", "path"=>"/home/ubuntu/data_dir/AWSLogs/299594820733/elasticloadbalancing/us-west-2/2015/08/10/299594820733_elasticloadbalancing_us-west-2_External-LB-Gateway_20150810T1100Z_IP76ap8y.log", "tags"=>["_grokparsefailure"]}>, :level=>:info}

The command I am using to run Logstash in debug mode is:

java -jar logstash-1.3.3-flatjar.jar agent -f logstash.conf -- web --log /tmp/logstash-shipper.log -w 1 -v

But it is crashing with an error. I think something is wrong with the debug parameters I am passing.

Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.3/plugin-milestones {:level=>:warn}
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (InvalidOption) invalid option: --log
	at RUBY.complete(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1542)
	at org.jruby.RubyKernel.catch(org/jruby/RubyKernel.java:1284)
	at RUBY.complete(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1540)
	at RUBY.parse_in_order(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1354)
	at org.jruby.RubyKernel.catch(org/jruby/RubyKernel.java:1284)
	at RUBY.parse_in_order(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1347)
	at RUBY.order!(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1341)
	at RUBY.permute!(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1432)
	at RUBY.parse!(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1453)
	at RUBY.parse(jar:file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/META-INF/jruby.home/lib/ruby/1.9/optparse.rb:1443)
	at RUBY.run(file:/home/ubuntu/logstash-1.3.3-flatjar.jar!/logstash/kibana.rb:83)
	at logstash.runner.run(logstash/runner.rb:113)
	at org.jruby.RubyProc.call(org/jruby/RubyProc.java:271)
	at logstash.runner.run(logstash/runner.rb:207)
	at logstash.runner.main(logstash/runner.rb:79)
	at logstash.runner.(root)(logstash/runner.rb:237)

Can you help me with the parameters to pass to generate the log file?
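Judging from the stack trace (the InvalidOption exception is raised from kibana.rb), everything after the -- web separator is handed to the Kibana web runner, which does not understand --log. An untested guess: moving the agent flags before the separator should let the agent parse them:

```
java -jar logstash-1.3.3-flatjar.jar agent -f logstash.conf --log /tmp/logstash-shipper.log -w 1 -v -- web
```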

http://justpaste.it/oofr

output received {:event=>#<LogStash::Event:0x68133151 @cancelled=false, @data={"message"=>"2015-08-10T10:58:11.250293Z <LOAD-Balanacer-Name> <IP>:<PORT> <IP>:<PORT> 0.017633 0.000013 0.000028 - - 266 41 \"- - - \" \"-\" - -", "@version"=>"1", "@timestamp"=>"2015-10-30T20:43:58.107Z", "type"=>"elb", "host"=>"ip-<IP>", "path"=>"/home/ubuntu/data_dir/AWSLogs/299594820733/elasticloadbalancing/us-west-2/2015/08/10/299594820733_elasticloadbalancing_us-west-2_External-LB-Gateway_20150810T1100Z_IP76ap8y.log", "tags"=>["_grokparsefailure"]}>, :level=>:info}

Here we see that the _grokparsefailure tag is set, which indicates that your grok expression doesn't match the input string. If grok isn't successful the timestamp field isn't populated and then the date filter won't populate @timestamp either. So, fix your grok expression. To save time, don't use an elasticsearch output until you've verified that you get the desired output. Use stdout { codec => rubydebug } instead.


Sorry to bother you again, but I am still unable to resolve the issue. For the line:

2015-08-22T20:59:30.322751Z External-LB-Gateway 106.216.129.73:25217 10.131.6.233:9088 0.01728 0.000015 0.000023 - - 266 41 "- - - " "-" - -

Is the below grok expression incorrect?

%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}

Yes, it is incorrect. It looks okay up to this point:

- - 266 41 "- - - " "-" - -
%{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}

Clearly, a hyphen isn't a number, so %{NUMBER:elb_status_code:int} and %{NUMBER:backend_status_code:int} fail to match and the whole expression is rejected. You can look at the patterns that ship with Logstash to see how to match either a number or a hyphen; the Apache patterns are a good example.
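For instance, one way to tolerate the hyphens (a sketch, not tested against your full log set) is to wrap each status-code pattern in an alternation that also accepts a literal "-":

```
filter {
  grok {
    # (?:%{NUMBER:...:int}|-) matches either an integer or a lone hyphen;
    # when the hyphen matches, the field is simply left unset
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} (?:%{NUMBER:elb_status_code:int}|-) (?:%{NUMBER:backend_status_code:int}|-) %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}" ]
  }
}
```

With this change the sample line above should parse, since only the two status-code positions contain hyphens; the trailing "-" fields after the request are simply left unmatched, which grok allows.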