Issue with index mapping

Hi all. I have logs in JSON format like the one below:

{"message":"new activity:","context":{"request.headers":{"x-user-data":["xxxx"],"x-real-ip":["xxxx"],"x-forwarded-via":["xxx/0.1"],"x-forwarded-proto":["http"],"x-forwarded-port":["80"],"x-forwarded-host":["xxxx"],"x-forwarded-for":["xxxx:45632"],"uber-trace-id":["xxxx:0"],"sec-fetch-site":["same-origin"],"sec-fetch-mode":["cors"],"sec-fetch-dest":["empty"],"sec-ch-ua-platform":["\"Windows\""],"sec-ch-ua-mobile":["?0"],"sec-ch-ua":["\" Not;A Brand\";v=\"99\", \"Google Chrome\";v=\"97\", \"Chromium\";v=\"97\""],"referer":["https://xxx/drivers/1234/rides"],"forwarded":["for=xxxx;host=xxxx;proto=http"],"content-type":["application/json"],"connection":["keep-alive"],"accept-language":["en-US,en;q=0.9"],"accept-encoding":["gzip, deflate, br"],"accept":["*/*"],"user-agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"],"host":["fri:8080"],"content-length":[""]},"request.body":{"q":""},"request.ips":["x.x.x.x"],"request.ip":"x.x.x.x","request.url":"http://fri:8080/v1/reasons/options?q=","request.method":"GET","request.user":1790,"request.user.id":1790,"response.status":200,"response.headers":{"cache-control":["no-cache, private"],"date":["Wed, 02 Feb 2020 02:02:41 GMT"],"content-type":["application/json"]}},"level":200,"level_name":"INFO","channel":"activity","datetime":{"date":"2022-02-02 07:36:41.090852","timezone_type":3,"timezone":"Asia/Tehran"},"extra":[]}```

My workflow for shipping logs to Elasticsearch is:

Filebeat --> Fluentd --> Elasticsearch --> Kibana

My Filebeat configuration:

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  encoding: utf-8
  fields:
    log_name: activity-logs
  fields_under_root: true
  document_type: log
  paths:
  - /home/activities-*.log

  exclude_files: ['\.gz$']
  ignore_older: 168h
  max_bytes: '1048576'
#------------------------------- Logstash output ----------------------------------
output.logstash:
  hosts: ["x.x.x.x:4004"]
  bulk_max_size: "2048"
  slow_start: true
  loadbalance: true
  worker: 2
  pipelining: 0

#================================ General =====================================
name: "filebeat-si"
logging.level: info
logging.selectors: ["*"]
filebeat.shutdown_timeout: 30s

and my Fluentd configuration is:

<source>
 @type beats
 port 5308
 bind 0.0.0.0
 tag fri
</source>
<filter fri>
 @type parser
 key_name message
 emit_invalid_record_to_error true
 reserve_data true
 reserve_time true
 remove_key_name_field true
 inject_key_prefix json.
 replace_invalid_sequence true
 <parse>
  @type json
 </parse>
</filter>
<match fri>
 @type elasticsearch
 hosts x.x.x.x:9200
 user elastic
 password xxxx
 index_name ${tag}-%Y.%m
 <buffer tag, time>
  @type memory
  timekey 1h
  flush_interval 5s
  flush_mode interval
  flush_thread_count 4
  total_limit_size 5G
 </buffer>
</match>
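To make it clearer what that filter block does to each event, here is a small Python sketch of how fluent-plugin-parser behaves with these options (an illustration, not Fluentd's actual code): the JSON in `message` is parsed, the parsed keys are re-emitted with a `json.` prefix (`inject_key_prefix json.`), the original `message` field is dropped (`remove_key_name_field true`), and the other record fields are kept (`reserve_data true`):

```python
import json

def apply_parser_filter(record: dict) -> dict:
    """Simulate the <filter fri> block: parse the JSON string in `message`,
    prefix each parsed key with `json.`, keep the remaining fields,
    and drop the original `message` field."""
    out = {k: v for k, v in record.items() if k != "message"}
    parsed = json.loads(record["message"])
    for key, value in parsed.items():
        out["json." + key] = value
    return out

event = {
    "message": '{"level_name": "INFO", "channel": "activity"}',
    "log_name": "activity-logs",  # field added by Filebeat
}
print(apply_parser_filter(event))
# {'log_name': 'activity-logs', 'json.level_name': 'INFO', 'json.channel': 'activity'}
```

So after the filter, Elasticsearch receives top-level fields like `json.context`, `json.level_name`, and so on, rather than a single `message` string.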

Everything works fine: logs ship to Elasticsearch and appear in Kibana as long as I don't use any filter plugin in Fluentd. But when I use the JSON parser filter shown in the Fluentd config above, my index mapping is gone and nothing shows up in Kibana.


What exactly is the problem? Why does my mapping suddenly disappear? Can anyone help me with this?

Hi all, I resolved this problem by using a static mapping, but I don't know why dynamic mapping didn't work correctly in this case.
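One possible explanation worth noting: the sample log contains both "request.user":1790 and "request.user.id":1790. Elasticsearch expands dotted field names into nested objects, so dynamic mapping would have to treat request.user as both a number and an object at once, and documents like that are rejected. A static mapping that only declares the fields you care about sidesteps this. As a rough illustration (field names taken from the sample log above; the types and the template itself are assumptions, not my exact production mapping), an index template could look like:

```json
PUT _index_template/fri-template
{
  "index_patterns": ["fri-*"],
  "template": {
    "mappings": {
      "dynamic": false,
      "properties": {
        "json": {
          "properties": {
            "level_name": { "type": "keyword" },
            "channel":    { "type": "keyword" },
            "context": {
              "properties": {
                "request.ip":      { "type": "ip" },
                "request.method":  { "type": "keyword" },
                "response.status": { "type": "integer" }
              }
            }
          }
        }
      }
    }
  }
}
```

With "dynamic": false, fields outside the declared mapping are stored but not indexed, so a single conflicting field can no longer break indexing for the whole document.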

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.