Hi guys!
I am not able to get Filebeat to write documents to indices based on a field value.
This is my configuration:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  json.keys_under_root: true
  json.message_key: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/XXX/DEV/suppliers/2020-10/*/*.log
    - /home/XXX/DEV/suppliers/2020-11/*/*.log
    - /home/XXX/DEV/suppliers/2020-12/*/*.log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xxx:9200"]
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "xxx"
  password: "xxx"
  index: "supplierslog-%{[fields.operation]}-%{+yyyy.MM.dd}"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  # - add_host_metadata: ~
  # - add_cloud_metadata: ~
  # - add_docker_metadata: ~
  # - add_kubernetes_metadata: ~
  # Decode the log field (sub JSON document) if JSON encoded, then map its fields to Elasticsearch fields
  - decode_json_fields:
      fields: ["log"]
      target: ""
      # Overwrite existing target Elasticsearch fields while decoding JSON fields
      overwrite_keys: true
  - drop_fields:
      fields: ["LOGS_HOME","level_value","agent","LOGS_SUPPLIER","level","host","thread_name","logger_name","input","ecs"]
      ignore_missing: true
setup.template.name: "supplierfields"
setup.template.fields: "supplierfields.yml"
setup.template.overwrite: true
setup.template.pattern: "supplierslog-*"
setup.ilm.rollover_alias: "supplierslog"
setup.ilm.enabled: false
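To be clear about what I am trying to achieve: one index per operation value per day. So for a document like the example below (operation "Search", timestamp 2020-10-07), my assumption is that the format string should resolve to an index named roughly like this:
supplierslog-Search-2020.10.07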
This is an example line that is ingested by Filebeat:
{"@timestamp": "2020-10-07T07: 34: 14.120-03: 00", "@ version": "1", "login": "operator.esteban", "operation": "Search"}
This is the error in the Filebeat log:
2020-10-07T10:19:37.999-0300 INFO pipeline/output.go:105 Connection to backoff(elasticsearch(http://xxx:9200)) established
2020-10-07T10:19:45.889-0300 ERROR pipeline/output.go:121 Failed to publish events: temporary bulk send failure
But if I change the index name from
index: "supplierslog-%{[fields.operation]}-%{+yyyy.MM.dd}"
to
index: "supplierslog-%{+yyyy.MM.dd}"
it works fine.
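For completeness, the only other way I can think of to reference the field is directly at the top level, e.g.
index: "supplierslog-%{[operation]}-%{+yyyy.MM.dd}"
but I am not sure whether that is the correct syntax for my case, so take that as a guess on my part.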
What am I doing wrong?
Thanks in advance,
Pablo
Filebeat version: 7.6.2