Filebeat working, Elasticsearch working, but data not in Kibana

Hi guys

I hope you can help me out as I've been banging my head against this for the past few hours now.

I installed the following:
filebeat version 6.2.4 (amd64), libbeat 6.2.4
elasticsearch-6.2.4.msi
kibana-6.2.4-windows-x86_64.zip

All running on a Windows Server 2012 R2 standard.

I enabled the Apache2 Filebeat module, which writes straight to Elasticsearch, not to Logstash.
I then created a custom log reader for my Python logs, which works great. It reads the files and sends everything to Elasticsearch. I can also see the custom log data in Kibana.

But here is where I am lost... The first time the Filebeat service started up (after adding the new custom log file) it read all the log entries, sent them to Elasticsearch, and they showed up in Kibana. Happy days! But after that initial read of the custom log file, no more data shows up in Kibana, even though Elasticsearch says it is there:
GET filebeat-*/_search?q=fileset.module:tc_python&size=0 gives me the following output:
{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 3,
    "max_score": 0,
    "hits": []
  }
}

which is 100% correct: I only created three additional log entries. I also deleted the index and sent three more log entries, but they still don't show up in Kibana.

DELETE filebeat-* -- the command I used to delete the indexes.
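For what it's worth, deleting the indices does not remove the Filebeat index template or any registered ingest pipelines, so new events should still be indexed the same way afterwards. A quick way to confirm what is actually left after the DELETE (a minimal check, using the same index pattern as above):

GET _cat/indices/filebeat-*?v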

My Filebeat gives no errors, and Elasticsearch also doesn't give any errors in its log, so I don't know what to do anymore...

I will really appreciate your help!

Regards

necrolingus

OK, I found an error in the output. Here is the error and my custom log file config:

The error I see in Kibana (there is also no @timestamp field present on the document):
"error": {
"message": "field [python] not present as part of path [python.time]"

My GROK JSON file:
{
  "description": "Pipeline for parsing custom tc_python logs.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns": [
        "%{DATESTAMP:tc_python.time} %{WORD:tc_python.level} WHERE: %{DATA:tc_python.where}MESSAGE: Problem is: %{GREEDYDATA:tc_python.message}"
      ],
      "ignore_missing": true
    }
  }, {
    "remove": {
      "field": "message"
    }
  }, {
    "rename": {
      "field": "@timestamp",
      "target_field": "read_timestamp"
    }
  }, {
    "date": {
      "field": "tc_python.time",
      "target_field": "@timestamp",
      "formats": ["dd/MMM/YYYY:H:m:s Z"]
    }
  }, {
    "remove": {
      "field": "tc_python.time"
    }
  }],
  "on_failure": [{
    "set": {
      "field": "tc_python.message",
      "value": "{{ _ingest.on_failure_message }}"
    }
  }]
}
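A handy way to debug this kind of pipeline error is the ingest simulate API, which runs the processors against a sample document and shows exactly where they fail. A minimal sketch with just the grok and date processors from above; the sample message is a placeholder, so substitute one of your real log lines so the pattern and date format can actually match:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [{
      "grok": {
        "field": "message",
        "patterns": [
          "%{DATESTAMP:tc_python.time} %{WORD:tc_python.level} WHERE: %{DATA:tc_python.where}MESSAGE: Problem is: %{GREEDYDATA:tc_python.message}"
        ],
        "ignore_missing": true
      }
    }, {
      "date": {
        "field": "tc_python.time",
        "target_field": "@timestamp",
        "formats": ["dd/MMM/YYYY:H:m:s Z"]
      }
    }]
  },
  "docs": [{
    "_source": {
      "message": "PLACEHOLDER - paste one real log line here"
    }
  }]
}

The response shows the transformed document (or the processor error), which makes it obvious whether tc_python.time was ever created before the date processor went looking for it.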

Fields.yml -- I didn't put tc_python.time in here because in my JSON file I pass it to @timestamp:

- key: tc_python
  title: tc_python
  description: >
    tc_python custom python logs.
  short_config: false
  fields:
    - name: tc_python
      type: group
      fields:
        - name: level
          type: keyword
          description: >
            the log level.
        - name: where
          type: keyword
          description: >
            in which file the error occurred.
        - name: message
          type: keyword
          description: >
            the actual error message.

Can you share what the fields in a final, transformed document look like? It might also be worth double-checking the time picker in the top right and making sure it's set to a range that covers all the documents.

If at any point the field names have changed it may be worth re-loading the index pattern in Kibana too (deleting and then re-adding).
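One quick check along those lines (a sketch, reusing the index pattern and field names from earlier in this thread) is to ask Elasticsearch for the newest document and compare its @timestamp against the Kibana time picker:

GET filebeat-*/_search
{
  "size": 1,
  "sort": [{ "@timestamp": "desc" }],
  "_source": ["@timestamp", "read_timestamp", "fileset.module"]
}

If the newest @timestamp is missing or far from the current time, Kibana's default time range simply won't include those documents.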

Hi Jon

Thank you for the reply.

Here is what I did. I deleted the index using the command:
DELETE /filebeat-*
***Update: I initially deleted the index in Kibana Management by clicking the dustbin icon. The command above is what I use to clear out the index when testing.

I then disabled all modules in Filebeat except for my custom one to avoid any noise. To rule out any potential issues with my custom module, I copied it to a module with a new name; apart from the name, everything else is the same.

I then generated some logs, and in Kibana Management I was able to add a new index pattern (I just typed in filebeat-* and Kibana was happy and created the index pattern).

Then, I ran this command to query the log entries I just generated:
GET filebeat-*/_search?q=fileset.module:leigh&size=10

Then I got this response (the error is in thelogs.themessage):
{
  "took": 22,
  "timed_out": false,
  "_shards": {
    "total": 6,
    "successful": 6,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 4,
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": "filebeat-6.2.4-2018.05.11",
        "_type": "doc",
        "_id": "BgzbT2MBllcb0uO0HxtK",
        "_score": 0.2876821,
        "_source": {
          "offset": 7697,
          "beat": {
            "hostname": "mbd-datasciappd",
            "name": "mbd-datasciappd",
            "version": "6.2.4"
          },
          "prospector": {
            "type": "log"
          },
          "read_timestamp": "2018-05-11T15:39:09.420Z",
          "source": """E:\Python_Logs\django_logs.txt""",
          "fileset": {
            "module": "leigh",
            "name": "thelogs"
          },
          "thelogs": {
            "themessage": """Invalid format: "2018-05-11 17:39:05,923" is too short""",
            "level": "WARNING",
            "where": "engine_firebase.py\t",
            "time": "2018-05-11 17:39:05,923"
          }
        }
      },

So in my fields.yml I have this:

- key: leigh
  title: "leigh"
  description: >
    leigh custom logs.
  short_config: true
  fields:
    - name: thelogs
      type: group
      fields:
        - name: level
          type: keyword
          description: >
            the log level.
        - name: where
          type: keyword
          description: >
            in which file the error occurred.
        - name: themessage
          type: keyword
          description: >
            the actual error message.

In my default.json I have this (the rename of @timestamp and the date processor are the relevant parts):
{
  "description": "Pipeline for parsing custom thelogs logs.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns": [
        "%{TIMESTAMP_ISO8601:thelogs.time} %{WORD:thelogs.level} WHERE: %{DATA:thelogs.where}MESSAGE: Problem is: %{GREEDYDATA:thelogs.themessage}"
      ],
      "ignore_missing": true
    }
  }, {
    "remove": {
      "field": "message"
    }
  }, {
    "rename": {
      "field": "@timestamp",
      "target_field": "read_timestamp"
    }
  }, {
    "date": {
      "field": "thelogs.time",
      "target_field": "@timestamp",
      "formats": ["YYYY-MM-dd HH:mm:ss,SSS"]
    }
  }, {
    "remove": {
      "field": "thelogs.time"
    }
  }],
  "on_failure": [{
    "set": {
      "field": "thelogs.themessage",
      "value": "{{ _ingest.on_failure_message }}"
    }
  }]
}

So basically, thelogs.time shouldn't be in Kibana; it should be put into the @timestamp field, but I guess because there is an error, it is not being put in that field.
Also, thelogs.time shows up in Kibana as a string, not as a date.
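One way to confirm that (a sketch, using the field names from this thread) is to ask Elasticsearch how those fields are actually mapped in the index:

GET filebeat-*/_mapping/field/thelogs.time,@timestamp

If thelogs.time comes back as text/keyword rather than date, that matches the symptom: the grok output was indexed as a plain string and the date processor never produced a proper @timestamp.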

Here is the @timestamp portion from fields.yml:

- name: "@timestamp"
  type: date
  required: true
  format: date
  example: August 26th 2016, 12:35:53.332
  description: >
    The timestamp when the event log record was generated.

Here is a sample log entry:
2018-05-11 17:08:39,100 WARNING WHERE: engine_firebase.py MESSAGE: Problem is: [Errno 400 Client Error: Bad Request for url: abc.abc.abc

Thank you

Leigh

Hi Jon

I got it working.

I recreated my module, and did the following:
I switched my datetime field to the DATE format instead of the ISO datetime.
I changed the on_failure target to this (my ingest on_failure field used to be leigh.thelogs.message, and I think that is where the biggest problem came in):
"on_failure" : [{
"set" : {
"field" : "error.message",
"value" : "{{ _ingest.on_failure_message }}"
}
}

I kept my date pattern like the below. I am OK with this format.
"date": {
"field": "p_logs.entries.logtime",
"target_field": "@timestamp",
"formats": ["dd/MMM/YYYY:H:m:s Z"]
}
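For reference, a log timestamp that would match that dd/MMM/YYYY:H:m:s Z format looks like this (an illustrative value, not taken from the actual logs):

11/May/2018:17:08:39 +0200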

What I did was the following (taken from the Filebeat website, run in PowerShell):
Invoke-RestMethod -Method Delete "http://192.168.40.147:9200/filebeat-*"
.\filebeat.exe export template --es.version 6.2.4 | Out-File -Encoding UTF8 filebeat.template.json
Invoke-RestMethod -Method Put -ContentType "application/json" -InFile filebeat.template.json -Uri http://192.168.40.147:9200/_template/filebeat-6.2.4
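If it helps anyone later: after re-loading the template you can verify it was accepted before re-ingesting anything (a small check, using the same template name as above, runnable from the Kibana Dev Tools console):

GET _template/filebeat-6.2.4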

