I had an event where the indices were set to read-only because of low disk space. After freeing space and setting them back to read/write, the grok for syslog auth appears to be parsing system.auth.user wrong, or the index was somehow created with an object type instead of a string. I verified that my grok is exactly the same as the one specified in https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html#parsing-system, and in the previous day's entries I can see that system.auth.user is a single string. I see multiple errors such as:
[2018-08-23T17:41:55,158][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.3.2-2018.08.23", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x4f6ae416], :response=>{"index"=>{"_index"=>"filebeat-6.3.2-2018.08.23", "_type"=>"doc", "_id"=>"-1q8aGUB2ciOGo1Tme43", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [system.auth.user] tried to parse field [user] as object, but found a concrete value"}}}}
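That mapper_parsing_exception means the current daily index's mapping already has system.auth.user registered as an object, so a concrete string like "git" is rejected regardless of what the grok emits. One way to confirm is to fetch GET filebeat-6.3.2-2018.08.23/_mapping and walk down to the field. Below is a sketch of that walk in Python; the sample mapping is invented for illustration, not taken from my cluster:

```python
# Sketch: walk an Elasticsearch _mapping "properties" tree and report
# how a dotted field is mapped. The sample mapping below is invented
# for illustration; in practice you would fetch the real one with
#   GET filebeat-6.3.2-2018.08.23/_mapping
def field_mapping(properties, dotted_field):
    """Follow the 'properties' tree one path segment at a time and
    return the sub-mapping for the final segment, or None if absent."""
    node = {"properties": properties}
    for part in dotted_field.split("."):
        node = node.get("properties", {}).get(part)
        if node is None:
            return None
    return node

# Illustrative slice of a mapping where the field became an object.
sample = {
    "system": {"properties": {
        "auth": {"properties": {
            "user": {"properties": {"name": {"type": "keyword"}}}
        }}
    }}
}

m = field_mapping(sample, "system.auth.user")
# An object field carries nested "properties" and no "type" -- that is
# exactly the mapping that rejects a plain string value like "git".
print("object" if "properties" in m else m.get("type"))  # → object
```

If the output is "object" for the broken index but a string type (e.g. "keyword") for the previous day's index, the mapping, not the grok, is the problem.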
grokdebugger.herokuapp.com shows this result for my sample data:

{ ...
  "[system][auth][user]": [
    [
      "git"
    ]
  ],
... }
stdout { codec => rubydebug } shows this for the same sample data:

{ ...
    "system" => {
        "auth" => {
            "user" => "git",
... }
I suspect that my index's mapping somehow got fixed incorrectly and that the problem may clear up when the next day's index is created, but I don't know how that would have happened.
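One explanation that doesn't require corruption: Elasticsearch's dynamic mapping fixes a field's type when the first document touching it is indexed into the new daily index. If one early event (a differently-shaped log line, or a stray document from another pipeline) arrived with user as an object, every later string value is rejected until a fresh index starts with an empty mapping. A deliberately simplified toy model of that first-writer-wins behavior, not the real mapper:

```python
# Toy model of Elasticsearch dynamic mapping: the first document to
# touch a field fixes its type for the whole index, and later documents
# with an incompatible shape are rejected. This is a simplification
# for illustration only, not the real mapping logic.
class Index:
    def __init__(self):
        self.types = {}  # field path -> "object" or "concrete"

    def index_doc(self, field, value):
        shape = "object" if isinstance(value, dict) else "concrete"
        fixed = self.types.setdefault(field, shape)  # first writer wins
        if fixed != shape:
            raise ValueError(
                f"mapper_parsing_exception: [{field}] mapped as "
                f"{fixed}, got {shape} value")

idx = Index()
# Suppose one malformed or differently-parsed event arrived first:
idx.index_doc("system.auth.user", {"name": "git"})  # field becomes object
try:
    idx.index_doc("system.auth.user", "git")        # string now fails
except ValueError as e:
    print(e)

# A fresh daily index starts with an empty mapping, which is why the
# problem can disappear when the next day's index is created.
```

If this is what happened, the broken daily index keeps the object mapping until it is reindexed or deleted; the next day's index, created from the template, should map the field as a string again.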