As of about a day ago, my Elasticsearch stack stopped accepting parsed nginx logs.
Logstash and Elasticsearch are both version 6.5.4.
Here are the filters for these logs:
grok {
  match => { "message" => "%{IP:remote_ip}?,?\s?%{IP:proxy_ip}?,?\s?%{IP:extra_ip}?,?\s?%{COMBINEDAPACHELOG} %{NUMBER:nginx.request_time} (-|%{NUMBER:nginx.response_time}) %{DATA:pipe} %{HOSTNAME:host} %{DATA:cookie_tsid} %{GREEDYDATA:cookie_tid}" }
}
geoip {
  source => "remote_ip"
}
Here's a log that got parsed and indexed into Elasticsearch yesterday:
192.168.243.72, 52.4.133.57, 52.4.133.57 - - [29/Jan/2019:15:59:59 -0800] "POST /application/a68a9e0e-5835-46b3-a921-0710bf4d90a6/gq HTTP/1.1" 200 173 "https://application.mycompany.com/application/a68a9e0e-5835-46b3-a921-0710bf4d90a6" "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/65.0.225212226 Mobile/15E148 Safari/605.1" 0.052 0.052 . application.mycompany.com x_09a03a71-a0a7-4a08-a501-c00d24cf0bc3 x_bacb6ed3-bf9f-4208-9ab7-b4b2cfc9996e
Here's a log from today that Logstash parsed but Elasticsearch rejected:
192.168.2.57, 52.4.133.57, 52.4.133.57 - - [30/Jan/2019:13:16:49 -0800] "GET /assets/app/modules/mycompany.application.ChildFrame-a9dcfcb288468a4b834c11f8a7cf043119ec8c18887ee960a91f35978ec27491.js HTTP/1.1" 200 17577 "https://application.mycompany.com/application/67a3f430-411f-467a-b3da-11da6490209a" "Mozilla/5.0 (X11; CrOS x86_64 10176.76.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.190 Safari/537.36" 0.000 - . application.mycompany.com x_0ae995b8-edec-4733-b699-7f96e4d39309 x_55bad359-845e-4643-86f0-4ee5eec2134c
The rejection results in the following error in logstash-plain.log:
Jan 30 13:57:43 logstash-1 logstash[22214]: [2019-01-30T13:16:50,666][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"index_nginx_2019.01.30", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x7ea8a465>], :response=>{"index"=>{"_index"=>"index_nginx_2019.01.30", "_type"=>"doc", "_id"=>"Tq_EoGgB0wNIr0bWkq1e", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"30/Jan/2019:13:16:49 -0800\" is malformed at \"Jan/2019:13:16:49 -0800\""}}}}}
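For reference, the rejected value `30/Jan/2019:13:16:49 -0800` is the standard HTTPDATE layout that COMBINEDAPACHELOG captures into the `timestamp` field. I am not currently running a date filter on that field; if it matters, a minimal sketch of one would look like this (the field name and format string below are just the COMBINEDAPACHELOG defaults, not something from my running config):

```
date {
  # "timestamp" is the field COMBINEDAPACHELOG extracts;
  # this pattern matches e.g. 30/Jan/2019:13:16:49 -0800
  match  => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  target => "@timestamp"
}
```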
I have been staring at these two log lines trying to work out what the difference between them is, and why one was parsed and accepted while the other was rejected. Does anyone have any ideas what could be causing this?