Hello Everyone,
I have been parsing TACACS logs for a few months without error, but on 2/1/2020 the logs suddenly stopped appearing in Kibana. I mention the date because my current index is logstash-2020.02.01-000004 and the last TACACS log in Kibana is from Feb 1st. I have two pipelines writing to the same index; both use the same date parsers, and the other pipeline (which parses syslog messages) is still working perfectly.
This led me to look into my Logstash logs, where I found the following:
[WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x5ebdddeb>], :response=>{"index"=>{"_index"=>"logstash-2020.02.01-000004", "_type"=>"_doc", "_id"=>"_xZiZHABI1TdtHTEPPdJ", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id '_xZiZHABI1TdtHTEPPdJ'. Preview of field's value: '2020-02-20 14:57:20 -0600'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2020-02-20 14:57:20 -0600] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
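Reading the error closely, the mapping for the [timestamp] field only accepts strict_date_optional_time||epoch_millis, and "2020-02-20 14:57:20 -0600" is not strict ISO 8601 (no "T" separator and a space before the UTC offset). To illustrate the mismatch (Python purely for demonstration; the parsers below are my rough stand-ins for the Elasticsearch and Logstash ones, not the real implementations):

```python
from datetime import datetime

raw = "2020-02-20 14:57:20 -0600"  # the value from the error preview

# datetime.fromisoformat is a rough stand-in for Elasticsearch's
# strict_date_optional_time parser: it rejects the space-separated
# offset, much like the mapper_parsing_exception above.
try:
    datetime.fromisoformat(raw)
    iso_ok = True
except ValueError:
    iso_ok = False

# The Logstash date filter pattern "YYYY-MM-dd HH:mm:ss Z" corresponds
# roughly to strptime's "%Y-%m-%d %H:%M:%S %z", which parses it fine.
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S %z")

print(iso_ok)              # not valid strict ISO 8601
print(parsed.isoformat())  # the shape a default date mapping would accept
```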
This error seemed counter-intuitive to me, since the date format of my TACACS logs has literally never changed. My next thought was to review my pipeline configuration, which also hasn't been changed in a while. Here it is, with the IPs changed for privacy:
input {
  beats {
    # Listen on port 5040 for Accounting Logs from TACACS Servers
    port => "5040"
  }
}

filter {
  grok {
    # TACACS Command Logs - Cisco and Ruckus/Brocade Switches
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:remote_device}%{SPACE}%{USERNAME:user}%{SPACE}%{WORD:conn_type}%{SPACE}%{IP:connected_via}%{SPACE}.*cmd=%{DATA:command}\<cr\>" }
    # TACACS Command Logs - HP/Aruba Switches
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:remote_device}%{SPACE}%{USERNAME:user}%{SPACE}%{IP:connected_via}%{SPACE}.*cmd=%{DATA:command}\Z" }
    # TACACS Authentication - Success and Fails
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:remote_device}%{SPACE}%{USERNAME:user}%{SPACE}%{WORD:conn_type}%{SPACE}%{IP:connected_via}%{SPACE}.*login %{DATA:response}\Z" }
  }
  date {
    match => ["timestamp", "YYYY-MM-dd HH:mm:ss Z", "ISO8601"]
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => [ "1.1.1.1:9200" ]
  }
}
Here are some example logs being parsed, which seem to clearly follow the defined format:
2020-02-20 15:03:25 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=exit <cr>
2020-02-20 15:03:30 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=vlan 20 <cr>
2020-02-20 15:03:45 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=tagged ethernet 1/1/2 to 1/1/48 ethernet 2/1/2 to 2/1/48 <cr>
2020-02-20 15:03:53 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=exit <cr>
2020-02-20 15:03:55 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=exit <cr>
2020-02-20 15:03:58 -0600 1.1.1.1 username tty16 2.2.2.2 stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 cmd=show vlan <cr>
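As a sanity check that the first grok pattern still matches these lines, here is a hand-expanded approximation of it as a plain regex (Python only for illustration; the named groups stand in for the grok field captures, and I've folded the optional UTC offset into the timestamp group, since the offset does show up in the field preview in the error above):

```python
import re

# Rough hand-expansion of:
# %{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{IP:remote_device}%{SPACE}
# %{USERNAME:user}%{SPACE}%{WORD:conn_type}%{SPACE}%{IP:connected_via}
# %{SPACE}.*cmd=%{DATA:command}\<cr\>
pattern = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(?: [+-]\d{4})?)\s+"
    r"(?P<remote_device>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<user>[a-zA-Z0-9._-]+)\s+"
    r"(?P<conn_type>\w+)\s+"
    r"(?P<connected_via>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r".*cmd=(?P<command>.*?)\s*<cr>"
)

line = ("2020-02-20 15:03:58 -0600 1.1.1.1 username tty16 2.2.2.2 "
        "stop task_id=1 timezone=GMT+00 service=shell priv-lvl=0 "
        "cmd=show vlan <cr>")

m = pattern.search(line)
print(m.group("timestamp"), "|", m.group("command"))
```

The captured timestamp comes out as "2020-02-20 15:03:58 -0600", i.e. the exact space-separated form that the index rejected.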
Does anyone have any ideas about what could cause this and how I can correct it?