Elasticsearch explicit field mappings not applied when data is logged from Filebeat through an ingest pipeline

Sir,
I have created an index lifecycle policy with the name

lp_ilp_fgnew1

Created the index template lp_templateforotherfgutmwaf7 as follows:

PUT _index_template/lp_templateforotherfgutmwaf7
{
  "index_patterns": ["lp_index-otherfgutmwaf7-*"],                 
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.lifecycle.name": "lp_ilp_fgnew1",      
      "index.lifecycle.rollover_alias": "lp_index-otherfgutmwaf7"    
    }
  }
}  
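As a side note, explicit mappings can also be declared inside the index template itself, so every future rollover index inherits them automatically. A sketch of the same template with a mappings section added (the field list mirrors the bootstrap request below; dstintf is shown as keyword here, since, as discussed later in the thread, its values such as port6 are strings, not integers):

```json
PUT _index_template/lp_templateforotherfgutmwaf7
{
  "index_patterns": ["lp_index-otherfgutmwaf7-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.lifecycle.name": "lp_ilp_fgnew1",
      "index.lifecycle.rollover_alias": "lp_index-otherfgutmwaf7"
    },
    "mappings": {
      "properties": {
        "action":    { "type": "keyword" },
        "direction": { "type": "text" },
        "dstintf":   { "type": "keyword" }
      }
    }
  }
}
```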

Bootstrapped the initial index for lp_index-otherfgutmwaf7 as follows.
I have also explicitly mapped some of the fields in the mappings section here:

PUT /%3Clp_index-otherfgutmwaf7-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "lp_index-otherfgutmwaf7": {
      "is_write_index": true
    }
  },
  "mappings" : {
      "properties" : {
        "action" : {
              "type" : "keyword"
        },
        "direction" : {
          "type" : "text"
        },
        "dstintf" : {
          "type" : "integer"
        }
      },
      "dynamic": true
    }
}
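To confirm which mappings actually ended up on the bootstrapped index (the concrete index name resolves via the date-math pattern, so a wildcard is convenient), the mapping can be fetched directly:

```json
GET lp_index-otherfgutmwaf7-*/_mapping
```

The response should list action, direction, and dstintf with the types declared above, plus any dynamically mapped fields added at ingest time.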

Created the ingest pipeline lp_index-otherfgutmwaf7-pipeline as follows:

PUT _ingest/pipeline/lp_index-otherfgutmwaf7-pipeline
{
  "description": "Ingest pipeline",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern" : "%{timestamp} _gateway %{kvmsg}"
      }
    },
    {
      "kv": {
        "field": "kvmsg",
        "field_split": " (?=[a-z\\_\\-]+=)",
        "value_split": "=",
        "trim_value": "\""
      }
    },
    {
      "date": {
        "field": "timestamp",
        "formats": [
          "ISO8601"
        ],
        "output_format": "yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSXXX"
      }
    },
    {
      "remove": {
        "field": "timestamp",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "message",
        "ignore_missing": true
      }
    },  
    {
      "remove": {
        "field": "kvmsg",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "devname",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "devid",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "type",
        "ignore_missing": true
      }
    }
  ]
}
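Before wiring Filebeat in, the pipeline can be dry-run with the _simulate API to see exactly what document the processors will produce. The sample line below is a hypothetical message shaped to match the dissect pattern; the timestamp and field values are placeholders:

```json
POST _ingest/pipeline/lp_index-otherfgutmwaf7-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2024-01-01T10:00:00+05:30 _gateway action=\"block\" direction=\"output\" dstintf=\"port6\""
      }
    }
  ]
}
```

Note that _simulate only runs the processors; it does not apply index mappings, so mapping conflicts (such as a non-numeric value in an integer field) will only surface at index time.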

Log data is populated from Filebeat to the index through the pipeline.

Log data population into the index was also tested using a direct POST command with the following document:

{
  "action": "block",
  "direction": "output",
  "testfield": "Testing",
  "dstintf": "6"
}

Log data from the file gets loaded into the index from Filebeat through the pipeline only when the explicit mappings section is not present. Loading via a direct POST command also works.

When the explicit mappings section is present, the file data does NOT get loaded into the index from Filebeat through the pipeline, yet it still gets loaded via a direct POST command.

Why is this so?
How can log data from the file be loaded into the index from Filebeat through the pipeline when the explicit mappings section is present?

thanks and regards
shini

Please share your filebeat.yml

Sir,
It is now understood that, from Filebeat through the pipeline, the issue was with

 "dstintf" : {
          "type" : "integer"
        }

It was my mistake:
the actual value of dstintf in the log file is port6, not 6, which is what was used while successfully testing with the POST command.
So Filebeat was throwing an integer parsing error for this field, and indexing failed. I could see all these error messages clearly in Filebeat after I commented out the modules section in Filebeat, which was also generating lots of logs.
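For reference, the minimal fix matching this finding is to map dstintf as keyword instead of integer in the bootstrap request (or index template), since values like port6 are not numeric:

```json
"dstintf" : {
  "type" : "keyword"
}
```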

The issue is solved.
Thanks and regards,
shini

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.