JSON format and types


I've been setting up Filebeat to send JSON-formatted logs over to Logstash before storing them in ES. I'm not sure whether my issue is related to Filebeat or Logstash.

Once the JSON-formatted logs have made their way to ES, the "type" for my fields in Kibana is set to text, but I need to change some of them to be integer, IP and date types.

My setup:
filebeat version 6.1.3 (amd64), libbeat 6.1.3
Filebeat setup based on this.

filebeat.prospectors:
- paths:
  - '/tmp/my_logs.log'
  tags: ['json']

processors:
- decode_json_fields:
    when.regexp.message: '^{'
    fields: ["message"]
    target: ""
    overwrite_keys: true

output.logstash:
  hosts: ["server.example.com:5044"]

Logstash Filter:
version: logstash 6.1.1

filter {
  if "json" in [tags] {
    json {
      source => "message"
    }
  }
}

JSON example:

{ "@timestamp_date": "2018-07-11T08:54:50+0100", "@tenant": "rpr_pnlref_aps", "@type": "cloud_web-app_access-logs", "@level": "daily", "remote_addr_ip": "", "request_time_d": "0", "status": "500", "request": "/", "urlpath": "/", "urlquery": "", "body_bytes_sent_l": "5135", "request_method": "GET", "http_referrer": "-", "http_user_agent": "-", "message": "Message", "EXAMPLEPAUTHLEVEL": "-", "EXAMPLEPCLIENTIP_ip": "-", "EXAMPLEPEMAILADDRESS": "-", "EXAMPLEPGLOBALID": "-", "EXAMPLEPFIRSTNAME": "-", "EXAMPLEPLASTNAME": "-", "EXAMPLEPFULLNAME": "- -" }

Kibana Version: 6.1.3

So it appears to parse correctly when viewed in Kibana but the "types" aren't correct. For example:
"body_bytes_sent_l": "5135" should be an integer but it's a string. I won't be able to use this data in visualizations.

This is just one type that isn't correct, and I suspect that if I resolve this issue it will fix all the types. Can I mutate the field, or have I set Filebeat up incorrectly? I hope someone can point me in the right direction. Please let me know if you need any more information.
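For example, I imagine something like this in my Logstash filter might do the conversion (just a guess on my part; field names taken from my JSON above, and I haven't tested it):

```
filter {
  mutate {
    # Convert string values to numeric types before indexing
    convert => {
      "body_bytes_sent_l" => "integer"
      "request_time_d"    => "float"
    }
  }
}
```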

Thanks -

The field types are 100% determined by Elasticsearch and the mapping defined for the index. When an index is created, the mapping is determined based on the matching index templates. You can create your own index template that defines the mappings for your fields.

Filebeat provides its own index template for filebeat-* indices and automatically installs it to Elasticsearch when you use the ES output (not LS). See https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html

Logstash also provides a generic index template that applies to logstash-* indices.

Neither Filebeat nor Logstash know anything about your custom fields that you parse out of the data so you must add those fields to the index template to customize the data type.

I recommend using the Filebeat template as your starting point. Then customize it with your fields. You can use filebeat export template to get a copy of the template and edit it. Here's more info about index templates from the Elasticsearch docs.
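For illustration, a minimal custom template for a few of the fields above might look something like this (the template name, index pattern, and type choices are assumptions; adjust them to your setup). The `order: 1` makes it take precedence over the default Filebeat template where the two overlap:

```
PUT _template/my_custom_fields
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "mappings": {
    "doc": {
      "properties": {
        "body_bytes_sent_l": { "type": "long" },
        "remote_addr_ip":    { "type": "ip" },
        "@timestamp_date":   { "type": "date" }
      }
    }
  }
}
```

Note that templates only apply when an index is created, so you'll need to reindex (or wait for the next day's index) to see the new mappings take effect.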

Thanks @andrewkroh.

Once I've worked it out I'll try and post back what I did.
This link is helpful: Here

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.