It is different from a grok pattern; there is no issue with double quotes. Here is one piece of data from my Kibana:
{
  "_index": "zixun-nginx-access-2017.07.17",
  "_type": "zixun-nginx-access",
  "_id": "AV1P7cOGrY9BXe0OYOPM",
  "_version": 1,
  "_score": null,
  "_source": {
    "request": "GET /v1/top_news?uid=01a0351f072840c397f94ddc3960cd07 HTTP/1.0",
    "referer": "-",
    "offset": 92976439,
    "input_type": "log",
    "source": "/usr/local/nginx/logs/zixun.oupeng.com.access.log",
    "type": "zixun-nginx-access",
    "http_host": "zixun.oupeng.com",
    "url": "/v1/top_news",
    "http_user_agent": "-",
    "tags": [
      "beats_input_codec_json_applied"
    ],
    "remote_user": "-",
    "upstreamhost": "192.168.10.110:80",
    "@timestamp": "2017-07-17T09:42:47.918Z",
    "size": 623,
    "clientip": "183.165.108.89",
    "domain": "zixun.oupeng.com",
    "host": "117.119.33.239",
    "@version": "1",
    "beat": {
      "hostname": "uy05-12",
      "name": "uy05-12",
      "version": "5.5.0"
    },
    "responsetime": 0.006,
    "xff": "-",
    "upstreamtime": "0.006",
    "status": "200"
  },
  "fields": {
    "@timestamp": [
      1500284567918
    ]
  },
  "sort": [
    1500284567918
  ]
}
So you mean I still need to use a filter plugin with the date filter to process the logs?
But I have already saved the time in nginx's log_format with the corresponding field "@timestamp":"$time_iso8601",
so I think it should be read correctly at any time, even if the Logstash pipeline breaks for a while.
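If the date filter route is what's being suggested, a minimal sketch might look like the following. This assumes the ISO8601 string written by nginx's $time_iso8601 arrives in the @timestamp field (as in the sample event above); the exact field names should be adjusted to the real pipeline.

```
filter {
  # Sketch only: parse the ISO8601 string produced by nginx's
  # $time_iso8601 into the event's canonical @timestamp, so the event
  # is indexed by the time it was logged rather than the time it was
  # processed. This matters if the Logstash pipeline falls behind.
  date {
    match  => ["@timestamp", "ISO8601"]
    target => "@timestamp"
  }
}
```

Without such a filter, any stage that regenerates @timestamp at processing time would stamp delayed events with the wrong time after a pipeline outage.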