How to forward and index JSON-formatted log files in ELK-docker

Hi, I'm configuring ELK-docker for the first time, and this is new to me.
I'm using:

  • filebeat version 5.0
  • elasticsearch version 5.0
  • kibana version 5.0

I have filebeat configured on one server and ELK-docker on another. The log files are stored on the same server where filebeat is configured. As a beginner, I was trying to follow the shakespeare example provided here

My filebeat.yml looks like this.


filebeat.prospectors:
- input_type: log
  paths:
    - /opt/ALLMODULESLOG/*.log

output.elasticsearch:
  hosts: [""]
  template.name: "shakespeare"
  template.path: "/etc/filebeat/filebeat.shakespeare.json"
  template.overwrite: true

I have the log file inside '/opt/ALLMODULESLOG/', downloaded from here, and I renamed it to 'shakespeare.log' (I renamed it because my other log files have the .log extension; I assume this won't cause any issues).
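For context, each line of that file is a standalone JSON document. The Kibana tutorial dataset uses fields along these lines (the values below are illustrative, not copied from the actual file):

```json
{"line_id": 4, "play_name": "Henry IV", "speech_number": 1, "line_number": "1.1.1", "speaker": "KING HENRY IV", "text_entry": "So shaken as we are, so wan with care,"}
```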

My filebeat.shakespeare.json looks like this

Also, I added the mapping with the following command:

curl -XPUT http://localhost:9200/shakespeare -d '
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : {"type": "string", "index" : "not_analyzed" },
    "play_name" : {"type": "string", "index" : "not_analyzed" },
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}'
But when I added the index 'shakespeare' in Kibana (Settings -> Indices), it shows the fields _source, _id, _type, _index, and _score, and not the speaker, play_name, line_id, and speech_number fields. Please let me know if I'm missing anything. What configuration do I need to forward and index the JSON-formatted log files to Elasticsearch?

To process JSON lines, you have to use the json config options:
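In filebeat.yml these options go on the prospector itself. A minimal sketch, reusing the path from the config above:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /opt/ALLMODULESLOG/*.log
  # Decode each line as JSON and put the decoded keys
  # at the top level of the event instead of under a "json" key:
  json.keys_under_root: true
  # Add an "error" field to the event if decoding fails:
  json.add_error_key: true
```

Without `keys_under_root`, the decoded fields are nested under a `json` object in the event.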

Thank you @ruflin. After adding the following json config fields,

json.message_key: log
json.keys_under_root: true
json.add_error_key: true

I can see the index created in Kibana under 'filebeat-2016.11.10' and not under 'shakespeare'. I saw the logs below in Elasticsearch; maybe they help explain why the index was not created under 'shakespeare'.

[2016-11-10T12:53:47,377][INFO ][o.e.c.m.MetaDataCreateIndexService] [abmwpmt] [shakespeare] creating index, cause [api], templates [shakespeare], shards [5]/[1], mappings [_default_]
[2016-11-10T12:57:20,094][INFO ][o.e.c.m.MetaDataCreateIndexService] [abmwpmt] [filebeat-2016.11.10] creating index, cause [auto(bulk api)], templates [filebeat], shards [5]/[1], mappings [_default_]
[2016-11-10T12:57:20,228][INFO ][o.e.c.m.MetaDataMappingService] [abmwpmt] [filebeat-2016.11.10/pM4ln6sWR1GrizpZRD5gJQ] create_mapping [json]
[2016-11-10T12:57:20,252][INFO ][o.e.c.m.MetaDataMappingService] [abmwpmt] [filebeat-2016.11.10/pM4ln6sWR1GrizpZRD5gJQ] update_mapping [json]

Also, after a while I added a few more lines to the 'shakespeare.log' file and searched through Kibana using the 'filebeat-2016.11.10' index. But the search results didn't appear instantly (only after a few minutes, maybe). Are there any configurations to make lines searchable as soon as the log file is updated?

Please help me solve these issues!

If you want to use a different index, you have to specify it in the elasticsearch output:
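A sketch of what that looks like in filebeat.yml (the hosts value is left as in the config above):

```yaml
output.elasticsearch:
  hosts: [""]
  # Send events to the 'shakespeare' index instead of
  # the default daily filebeat-* index:
  index: "shakespeare"
```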

How fast a new line appears depends on various factors like backoff: if you update a file manually and only very rarely, it can take longer, because the reader has reached its maximum backoff. Normally, with most logging systems, new lines come in very often, which also makes Filebeat send them more often. Best have a look at the different config options like scan_frequency and backoff if this really becomes an issue.
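For illustration, those options also sit on the prospector. The values below are hypothetical, chosen to make pickup faster than the defaults, at the cost of more polling:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /opt/ALLMODULESLOG/*.log
  # How often to check for new files (default 10s):
  scan_frequency: 1s
  # How long to wait before re-checking a file for new lines (default 1s):
  backoff: 1s
  # Upper bound the backoff can grow to for idle files (default 10s):
  max_backoff: 2s
```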

Thank you @ruflin. Is that the default behavior? I mean, when we created the index named 'shakespeare' and searched through Kibana, it was created with only some default fields, but the 'filebeat-*' index is updated with the fields included in 'shakespeare'.

Not sure I get your question about the fields part. The default behaviour is to use the filebeat-* index.

Thank you @ruflin.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.