The logs seem to be fine, yet the date filter is not working. If I remove the date filter, the timestamp still comes through as a string, so the grok pattern itself is fine.
Input format:
2017-04-03 11:40:59
2017-04-04 12:40:59
2017-04-05 13:40:59
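For reference, when the date filter works, @timestamp should be taken from log_timestamp rather than the ingest time. Below is a sketch of what one of the events above would be expected to look like in rubydebug output; the field names come from the config below, and the exact rendering is only illustrative:

{
        "message" => "2017-04-03 11:40:59 ...",
  "log_timestamp" => "2017-04-03 11:40:59",
     "@timestamp" => 2017-04-03T11:40:59.000Z
}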
Config file:
filter {
  if [message] =~ "^#" {
    drop {}
  }
  else {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}" }
    }
    date {
      match => ["log_timestamp", "YYYY-MM-dd HH:mm:ss"]
      timezone => "Etc/UTC"
    }
  }
}
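If the date filter cannot parse log_timestamp, Logstash does not raise an error; it only tags the event with _dateparsefailure, so the failure is easy to miss. A minimal way to check is a temporary stdout output with the rubydebug codec (a sketch only, to be run alongside the existing elasticsearch output):

output {
  # Temporary debug output: prints each event so the extracted
  # log_timestamp, the resulting @timestamp, and any _dateparsefailure
  # tag can be inspected directly.
  stdout { codec => rubydebug }
}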
Logstash Logs:
[2017-05-25T10:55:42,070][WARN ][logstash.runner ] SIGTERM received. Shutting down the agent.
[2017-05-25T10:55:42,081][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
[2017-05-25T10:55:47,116][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>28, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.12-java/lib/logstash/inputs/beats.rb:213:in `run'"}, {"thread_id"=>25, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:136:in `synchronize'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["log_timestamp"], "id"=>"1980d2e3a0a9ae26700c4a8ac71d34c4e8e536b8-8"}]=>[{"thread_id"=>23, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:136:in `synchronize'"}, {"thread_id"=>24, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:118:in `synchronize'"}, {"thread_id"=>26, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:118:in `synchronize'"}]}}
[2017-05-25T10:55:47,121][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2017-05-25T10:56:03,360][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-05-25T10:56:03,365][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-25T10:56:03,507][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x4bca867 URL:http://localhost:9200/>}
[2017-05-25T10:56:03,510][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>[#<URI::Generic:0x4caeb5ce URL://localhost:9200>]}
[2017-05-25T10:56:03,610][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-25T10:56:04,278][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-05-25T10:56:04,327][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-25T10:56:04,382][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-25T10:56:08,598][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2017-05-25T10:56:08,598][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-05-25T10:56:08,604][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x77264f91 URL:http://127.0.0.1:9200/>}