Hi Jon,
Thank you for the reply.
Here is what I did. I deleted the index using the command:
DELETE /filebeat-*
Update: I initially deleted the index in Kibana Management by clicking the dustbin icon. I use the above command to clear out the index when testing.
I then disabled all modules in Filebeat except for my custom one to avoid any noise. To rule out any potential issues with my custom module, I copied it to a new module with a different name. Apart from the name, everything else is the same.
I then generated some logs, and in Kibana Management I was able to add a new index pattern (I just typed in filebeat-* and Kibana was happy and created the index pattern).
Then, I ran this command to query the log entries I just generated:
GET filebeat-*/_search?q=fileset.module:leigh&size=10
Then I got this response (the error is in the thelogs.themessage field):
{
  "took": 22,
  "timed_out": false,
  "_shards": {
    "total": 6,
    "successful": 6,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 4,
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": "filebeat-6.2.4-2018.05.11",
        "_type": "doc",
        "_id": "BgzbT2MBllcb0uO0HxtK",
        "_score": 0.2876821,
        "_source": {
          "offset": 7697,
          "beat": {
            "hostname": "mbd-datasciappd",
            "name": "mbd-datasciappd",
            "version": "6.2.4"
          },
          "prospector": {
            "type": "log"
          },
          "read_timestamp": "2018-05-11T15:39:09.420Z",
          "source": """E:\Python_Logs\django_logs.txt""",
          "fileset": {
            "module": "leigh",
            "name": "thelogs"
          },
          "thelogs": {
            "themessage": """Invalid format: "2018-05-11 17:39:05,923" is too short""",
            "level": "WARNING",
            "where": "engine_firebase.py\t",
            "time": "2018-05-11 17:39:05,923"
          }
        }
      },
So in my fields.yml I have this:
- key: leigh
  title: "leigh"
  description: >
    leigh custom logs.
  short_config: true
  fields:
    - name: thelogs
      type: group
      fields:
        - name: level
          type: keyword
          description: >
            the log level.
        - name: where
          type: keyword
          description: >
            in which file the error occurred.
        - name: themessage
          type: keyword
          description: >
            the actual error message.
In my default.json I have this (the date processor is the part in question):
{
  "description": "Pipeline for parsing custom thelogs logs.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns": [
        "%{TIMESTAMP_ISO8601:thelogs.time} %{WORD:thelogs.level} WHERE: %{DATA:thelogs.where}MESSAGE: Problem is: %{GREEDYDATA:thelogs.themessage}"
      ],
      "ignore_missing": true
    }
  }, {
    "remove": {
      "field": "message"
    }
  }, {
    "rename": {
      "field": "@timestamp",
      "target_field": "read_timestamp"
    }
  }, {
    "date": {
      "field": "thelogs.time",
      "target_field": "@timestamp",
      "formats": ["YYYY-MM-dd HH:mm:ss,SSS"]
    }
  }, {
    "remove": {
      "field": "thelogs.time"
    }
  }],
  "on_failure": [{
    "set": {
      "field": "thelogs.themessage",
      "value": "{{ _ingest.on_failure_message }}"
    }
  }]
}
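I think the date processor can be tested on its own with the _simulate API (a sketch, reusing just the date step from the pipeline above and the timestamp from the failing document):

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [{
      "date": {
        "field": "thelogs.time",
        "target_field": "@timestamp",
        "formats": ["YYYY-MM-dd HH:mm:ss,SSS"]
      }
    }]
  },
  "docs": [{
    "_source": {
      "thelogs": { "time": "2018-05-11 17:39:05,923" }
    }
  }]
}
```

If the format string is the culprit, running just this one processor against one document should reproduce the "is too short" error without having to regenerate logs through Filebeat.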
So basically, thelogs.time shouldn't be in Kibana; it should be moved into the @timestamp field, but I guess because there is an error, it is not being put there.
Also, thelogs.time shows up in Kibana as a string, not as a date.
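As a side note, the mapping that thelogs.time actually ended up with can be inspected with:

```
GET filebeat-*/_mapping
```

My guess is that, because fields.yml does not define thelogs.time (the pipeline is supposed to remove it), the field gets dynamically mapped as text whenever the date processor fails and the field survives, which would explain why it shows up as a string.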
Here is the @timestamp portion from fields.yml:
- name: "@timestamp"
  type: date
  required: true
  format: date
  example: August 26th 2016, 12:35:53.332
  description: >
    The timestamp when the event log record was generated.
Here is a sample log entry:
2018-05-11 17:08:39,100 WARNING WHERE: engine_firebase.py MESSAGE: Problem is: [Errno 400 Client Error: Bad Request for url: abc.abc.abc
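To double-check that the grok pattern matches a line like this, here is a rough Python stand-in I used (a hand-translated regex; the real grok definitions for TIMESTAMP_ISO8601, WORD, DATA, and GREEDYDATA are broader than this):

```python
import re

# Hand-translated approximation of the grok pattern in default.json.
# %{TIMESTAMP_ISO8601}, %{WORD}, %{DATA}, %{GREEDYDATA} are simplified.
pattern = re.compile(
    r"^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"(?P<level>\w+) WHERE: (?P<where>.*?)MESSAGE: Problem is: "
    r"(?P<themessage>.*)$"
)

line = ("2018-05-11 17:08:39,100 WARNING WHERE: engine_firebase.py "
        "MESSAGE: Problem is: [Errno 400 Client Error: Bad Request for url: abc.abc.abc")

m = pattern.match(line)
print(m.group("time"))    # 2018-05-11 17:08:39,100
print(m.group("level"))   # WARNING
print(m.group("where"))   # engine_firebase.py (plus trailing whitespace)
print(m.group("themessage"))
```

The sample line matches and all four fields come out, so the grok step itself looks fine to me; it seems to be the date step that fails.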
Thank you
Leigh