How to configure Filebeat templates

Hi,
I'm setting up Filebeat with the Elasticsearch output. Since I write to two indices, I configured two templates in filebeat.yml:

setup.template.name: "nginx"
setup.template.overwrite: false
setup.template.pattern: "nginx-*"

setup.template.name: "ro"
setup.template.pattern: "ro*"

but when I start Filebeat, I see this error in the log:

2018-01-17T17:02:36+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:36+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:37+08:00 ERR  Failed to publish events: temporary bulk send failure

Hi @zhangrandl,

You cannot set two different templates like that; the second definition overrides the previous one. That said, I'm not sure what the error is about. Could you please share the full log output? It should give some more context.
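For reference, here is a minimal sketch of how that part of filebeat.yml could look with a single setup.template block. The when.contains condition below is just an assumption about how you route events to the ro index, so adapt it to your setup. A second template would have to be loaded into Elasticsearch directly (for example with the PUT _template API), since Filebeat only manages one:

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.overwrite: false

output.elasticsearch:
  hosts: ["localhost:9200"]
  # default index for events that match no condition below
  index: "nginx-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    # hypothetical routing rule: send events read from /root/ro/ to their own index
    - index: "ro-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        source: "/root/ro/"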

2018-01-17T17:01:57+08:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-01-17T17:01:57+08:00 INFO Metrics logging every 30s
2018-01-17T17:01:57+08:00 INFO Beat UUID: 35d7289b-8cd1-4ca1-aef9-3720a449de2c
2018-01-17T17:01:57+08:00 INFO Setup Beat: filebeat; Version: 6.1.1
2018-01-17T17:01:57+08:00 INFO Elasticsearch url: http://localhost:9200
2018-01-17T17:01:57+08:00 INFO Beat name: elk
2018-01-17T17:01:57+08:00 INFO Elasticsearch url: http://localhost:9200
2018-01-17T17:01:57+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:01:57+08:00 INFO Kibana url: http://localhost:5601
2018-01-17T17:02:20+08:00 INFO Kibana dashboards successfully loaded.
2018-01-17T17:02:20+08:00 INFO filebeat start running.
2018-01-17T17:02:20+08:00 INFO Registry file set to: /var/lib/filebeat/registry
2018-01-17T17:02:20+08:00 INFO Loading registrar data from /var/lib/filebeat/registry
2018-01-17T17:02:20+08:00 INFO States Loaded from registrar: 3
2018-01-17T17:02:20+08:00 INFO Loading Prospectors: 1
2018-01-17T17:02:20+08:00 WARN BETA: Dynamic config reload is enabled.
2018-01-17T17:02:20+08:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 0
2018-01-17T17:02:20+08:00 INFO Starting Registrar
2018-01-17T17:02:20+08:00 INFO Config reloader started
2018-01-17T17:02:27+08:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30003 beat.memstats.gc_next=4194304 beat.memstats.memory_alloc=3271528 beat.memstats.memory_total=9512912 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.output.type=elasticsearch libbeat.pipeline.clients=0 libbeat.pipeline.events.active=0 registrar.states.current=0
2018-01-17T17:02:30+08:00 INFO Starting 2 runners ...
2018-01-17T17:02:30+08:00 INFO Elasticsearch url: http://localhost:9200
2018-01-17T17:02:30+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:30+08:00 INFO Starting prospector of type: log; ID: 9213110065078905940
2018-01-17T17:02:30+08:00 INFO Elasticsearch url: http://localhost:9200
2018-01-17T17:02:30+08:00 INFO Harvester started for file: /root/ro/login2018-01-12.log
2018-01-17T17:02:30+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:30+08:00 INFO Starting prospector of type: log; ID: 7706456768349405474
2018-01-17T17:02:30+08:00 INFO Starting prospector of type: log; ID: 1460876766815925734
2018-01-17T17:02:30+08:00 INFO Harvester started for file: /var/log/nginx/access.log
2018-01-17T17:02:31+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:31+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:33+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:33+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:33+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:34+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:34+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:34+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:35+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:35+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:35+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:36+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:36+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:36+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:37+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:37+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:37+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:37+08:00 INFO retryer: send wait signal to consumer
2018-01-17T17:02:37+08:00 INFO   done
2018-01-17T17:02:38+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:38+08:00 INFO retryer: send unwait-signal to consumer
2018-01-17T17:02:38+08:00 INFO   done
2018-01-17T17:02:38+08:00 INFO retryer: send wait signal to consumer
2018-01-17T17:02:38+08:00 INFO   done
2018-01-17T17:02:38+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:38+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:38+08:00 INFO retryer: send unwait-signal to consumer
2018-01-17T17:02:38+08:00 INFO   done
2018-01-17T17:02:39+08:00 ERR  Failed to publish events: temporary bulk send failure
2018-01-17T17:02:39+08:00 INFO Connected to Elasticsearch version 6.1.1
2018-01-17T17:02:39+08:00 INFO Template already exists and will not be overwritten.
2018-01-17T17:02:40+08:00 INFO Stopping filebeat
2018-01-17T17:02:40+08:00 INFO Stopping Crawler
2018-01-17T17:02:40+08:00 INFO Stopping 0 prospectors
2018-01-17T17:02:40+08:00 INFO Dynamic config reloader stopped
2018-01-17T17:02:40+08:00 INFO Crawler stopped
2018-01-17T17:02:40+08:00 INFO Stopping Registrar
2018-01-17T17:02:40+08:00 INFO Ending Registrar
2018-01-17T17:02:40+08:00 INFO Total non-zero values:  beat.info.uptime.ms=42367 beat.memstats.gc_next=4260096 beat.memstats.memory_alloc=3571248 beat.memstats.memory_total=15293808 filebeat.events.active=53 filebeat.events.added=58 filebeat.events.done=5 filebeat.harvester.open_files=2 filebeat.harvester.running=2 filebeat.harvester.started=2 libbeat.config.module.running=2 libbeat.config.module.starts=2 libbeat.config.reloads=1 libbeat.output.read.bytes=34652 libbeat.output.type=elasticsearch libbeat.output.write.bytes=114919 libbeat.pipeline.clients=0 libbeat.pipeline.events.active=53 libbeat.pipeline.events.filtered=5 libbeat.pipeline.events.published=53 libbeat.pipeline.events.retry=206 libbeat.pipeline.events.total=58 registrar.states.current=3 registrar.states.update=5 registrar.writes=6
2018-01-17T17:02:40+08:00 INFO Uptime: 42.367617859s
2018-01-17T17:02:40+08:00 INFO filebeat stopped.

That's all of it. If I have two indices, do I have to define both templates in one file?

I created a module in /usr/share/filebeat/module/ro

manifest.yml

module_version: 1.0

var:
  - name: paths
    default:
      - "/root/ro/login*.log"

ingest_pipeline: ingest/pipeline.json
prospector: config/log.yml

pipeline.json

{
  "description": "Pipeline for parsing ro log messages",
  "processors": [
    {
      "grok": {
        "field": "message",
        "trace_match": true,
        "patterns": [
          "^\\[(?<ro.time>2(.*?))\\]\\s\\[(?<ro.loglv>(.*?))\\]\\sop=(?<ro.op>(.*?)),link_roleid=(?<ro.link_roleid>(.*?)),pid=(?<ro.pid>(.*?)),acct=(?<ro.acct>(.*?)),name=(?<ro.name>(.*?)),lv=(?<ro.lv>(.*?)),actype=(?<ro.actype>(.*?)),chnl=(?<ro.chnl>(.*?)),devid=(?<ro.devid>(.*?)),dev=(?<ro.dev>(.*?)),devlv=(?<ro.devlv>(.*?)),job=(?<ro.job>(.*?)),rtype=(?<ro.rtype>(.*?)),sex=(?<ro.sex>(.*?)),gold=(?<ro.gold>(.*?)),ip=(?<ro.ip>(.*?)),port=(?<ro.port>(.*?)),ag=(?<ro.ag>(.*?)),fd=(?<ro.fd>(.*?)),enver=(?<ro.enver>(.*?)),acctrd=(?<ro.acctrd>(.*?)),rolerd=(?<ro.rolerd>(.*?)),ver=(?<ro.ver>(.*?)),reason=(?<ro.reason>(.*$))"
        ]
      }
    }
  ]
}

log.yml

type: log
paths:
{{ range $i, $path := .paths }}
 - {{$path}}
{{ end }}
exclude_files: [".gz$"]
multiline:
  pattern: '^\['
  negate: true
  match: after

From what I see, there is an error while indexing on the Elasticsearch side; perhaps it's related to your pipeline pattern? Please check the Elasticsearch logs, they should tell us something about what's going on.
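One way to test the grok pattern in isolation is the ingest Simulate Pipeline API. The pipeline id below is an assumption (Filebeat 6.x modules usually register pipelines as filebeat-<version>-<module>-<fileset>-pipeline; GET _ingest/pipeline lists the exact names), and the sample message is made up to match your pattern:

GET _ingest/pipeline

POST _ingest/pipeline/filebeat-6.1.1-ro-log-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "[2018-01-12 10:00:00] [INFO] op=login,link_roleid=1,pid=1,acct=a,name=n,lv=1,actype=1,chnl=1,devid=d1,dev=phone,devlv=1,job=1,rtype=1,sex=1,gold=0,ip=127.0.0.1,port=80,ag=1,fd=1,enver=1,acctrd=1,rolerd=1,ver=1.0,reason=test"
      }
    }
  ]
}

If the grok processor fails here you get the exact parsing error back, instead of having to dig it out of the bulk response.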

Best regards

My Elasticsearch runs in Docker; I don't know how to check the Elasticsearch logs.

You can list running containers with docker ps, then use docker logs <container_name> to get logs from one of them
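For example (the container name elasticsearch is just a guess, use whatever docker ps shows for yours):

docker ps
docker logs --tail 200 elasticsearch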

[2018-01-17T12:36:20,802][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:21,807][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:22,811][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:23,829][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:24,835][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:25,839][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:26,846][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:27,850][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:28,858][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:29,863][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:30,870][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:31,877][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:32,882][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:33,891][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:34,895][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:35,912][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:36,919][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:37,924][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:38,931][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:39,938][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:40,946][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:41,951][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:42,958][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] Deprecated field [template] used, replaced by [index_patterns]
[2018-01-17T12:36:55,030][INFO ][o.e.c.m.MetaDataCreateIndexService] [B4hofIP] [ro-6.1.1-2018.01.17] creating index, cause [auto(bulk api)], templates [ro, ro*], shards [3]/[1], mappings [doc]
[2018-01-17T12:36:55,090][INFO ][o.e.c.m.MetaDataMappingService] [B4hofIP] [ro-6.1.1-2018.01.17/QfTBsIyhSky0KnVHnfcE3g] update_mapping [doc]
[2018-01-17T12:36:55,127][INFO ][o.e.c.m.MetaDataCreateIndexService] [B4hofIP] [nginx-6.1.1-2018.01.17] creating index, cause [auto(bulk api)], templates [nginx], shards [3]/[1], mappings [doc]
[2018-01-17T12:36:55,188][INFO ][o.e.c.m.MetaDataMappingService] [B4hofIP] [nginx-6.1.1-2018.01.17/giUo4-_yRlGRSVEuuCsiSQ] update_mapping [doc]

These are the Elasticsearch logs; I don't find any error in them.

Is the cluster under load? From the logs it looks like you are getting bulk rejections; check this blog post for a good explanation of how to troubleshoot that.
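A quick way to see whether bulk requests are being rejected on a 6.x node is the _cat thread pool API, for example:

GET _cat/thread_pool/bulk?v&h=node_name,name,active,queue,rejected,completed

A growing rejected count there would confirm that Elasticsearch is pushing back on the bulk requests Filebeat sends.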

I only have one Elasticsearch node. Those Elasticsearch logs are from before I started Filebeat; after I start Filebeat, Elasticsearch doesn't show any new log lines.
