Ingest logs into a specific index

Hi,

I'm using ELK version 7.16.2 and have configured Filebeat. I want to ingest logs into a dedicated index rather than the default Filebeat index. Please help me with this.
Please find my filebeat.yml configuration below.

###################### Filebeat Configuration Example #########################
# ============================== Filebeat inputs ===============================

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    #- /var/log/*.log
    - D:\Elastic\kibana-7.16.2-windows-x86_64\Audit\*.log
  max_bytes: 104857600000

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
# ================================= Dashboards =================================
setup.dashboards.enabled: true
# ================================== Template ==================================
setup.template.name: "test"
setup.template.pattern: "test-*"
# =================================== Kibana ===================================

setup.kibana:
  host: "http://********:5601"
  protocol: "http"
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://*******:9200/"]
  username: "*********"
  password: "*********"
  index: "test-%{[agent.version]}-%{+yyyy.MM.dd}"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: false
      add_error_key: true

Thanks,
Nitin

Perhaps take a look at the Filebeat docs on index lifecycle management (ILM) setup.
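The short version, as I understand it: in Filebeat 7.x, while ILM is enabled (and it is enabled by default against an ILM-capable cluster), output.elasticsearch.index, setup.template.name, and setup.template.pattern are all ignored, which is why events keep landing in the default filebeat-* index. A minimal sketch of the no-ILM route, reusing the test names from the config above (localhost stands in for your host):

setup.ilm.enabled: false             # custom index names only take effect with ILM off
setup.template.name: "test"          # required when output.elasticsearch.index is set
setup.template.pattern: "test-*"     # must match the custom index names
output.elasticsearch:
  hosts: ["http://localhost:9200"]   # placeholder host
  index: "test-%{[agent.version]}-%{+yyyy.MM.dd}"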


Hi @stephenb

Thanks for your support.

I have followed the first of the two approaches below:

  1. Set it all up in Filebeat (when you do this, Filebeat creates a default ILM policy for you, which you can later edit)
  2. Set up your template, policy, rollover alias, etc. in Elasticsearch, then use a minimal Filebeat config (see the sketch after this list)
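
For comparison, a rough sketch of what the Filebeat side of approach 2 could look like, assuming the policy, template, and write alias were already created by hand in Elasticsearch (myindex is a placeholder name; the option names are from the Filebeat 7.x reference):

setup.template.enabled: false        # template already lives in Elasticsearch
setup.ilm.enabled: true
setup.ilm.check_exists: false        # skip checking/creating the policy
setup.ilm.overwrite: false           # never replace the hand-managed policy
setup.ilm.rollover_alias: "myindex"  # the pre-created write alias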

I have implemented approach 1 and configured the filebeat.yml file (see the configuration below). It's working, and logs are now being ingested into the dedicated index.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true
  paths:
    #- /var/log/*.log
    - D:\Users\ABC\Downloads\audit-logs*.csv
  #fields:
   # category: audit_log
  #json.keys_under_root: true
  #json.overwrite_keys: true
  #json.add_error_key: true
  #json.expand_keys: true
  max_bytes: 104857600000

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
#setup.kibana.host: "http://localhost:5601/"
#setup.kibana.protocol: "http"
  host: "http://localhost:5601/"
  #space.id: log
  protocol: "http"

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://localhost:9200/"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "********"
  password: "********"

  
setup.ilm:
  enabled: true
  policy_name: "myindex"
  overwrite: true
  rollover_alias: "myindex-alias-%{[agent.version]}"
  pattern: "{now/d}-0000001"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 2
      target: ""
      overwrite_keys: false
      add_error_key: true
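
One thing worth knowing about this setup.ilm block: with overwrite: true, Filebeat rewrites the myindex policy on every startup, so edits made to it in Kibana will not survive a restart. And because the rollover alias embeds agent.version, each Filebeat upgrade starts a fresh alias, e.g. myindex-alias-7.16.2, with a first backing index like myindex-alias-7.16.2-2022.01.20-0000001 (the date is just an example, filled in from the {now/d} pattern).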



