Filebeat logs not being published to Elastic

I am sending CSV data from Filebeat to Elasticsearch following this guide:

I have created an ingest pipeline, tested it with the _simulate API, and everything looks fine.
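For reference, the _simulate call looks roughly like this (a sketch: the URL, the credential variable, and the shortened sample line are placeholders, not the real values from my cluster):

```shell
# Sketch of the _simulate test; ES_URL, the credential variable, and the
# sample CSV line below are placeholders.
ES_URL="${ES_URL:-http://localhost:9200}"
BODY='{
  "docs": [
    { "_source": { "message": "1612791435943,1612791436942,999,\"h:|i:1|m:0\",\"mainmessage\",0,858" } }
  ]
}'
# Run the sample event through the pipeline without indexing anything:
if curl -s -o /dev/null --connect-timeout 2 "$ES_URL"; then
  curl -s -u "elastic:${ELASTIC_SECRET_UMS}" -H 'Content-Type: application/json' \
       -X POST "$ES_URL/_ingest/pipeline/binaryload-statistics/_simulate" -d "$BODY"
else
  echo "cluster not reachable at $ES_URL"
fi
```

The response shows the document as it would look after the pipeline runs, so you can spot parsing problems before indexing anything.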

This is my filebeat configuration:

      filebeat.yml: |
        filebeat.inputs:
        - type: log
          enabled: true
          paths:
            - /test/*.csv
          exclude_lines: ['^StartTimeMs']
        output.elasticsearch:
          enabled: true
          hosts: ["{{ .Values.filebeat.elasticsearch.connection.url }}"]
          path: "{{ .Values.filebeat.elasticsearch.connection.path }}"
          username: {{ .Values.filebeat.elasticsearch.credentials.username }}
          password: {{ .Values.filebeat.elasticsearch.credentials.password }}
          index: "binaryload"
          pipeline: "binaryload-statistics"
        logging.metrics.enabled: false
        logging.level: debug

In the filebeat logs I see where the event is published:

    2021-02-08T13:37:17.062Z        DEBUG   [processors]    processing/processors.go:187    Publish event: {
      "@timestamp": "2021-02-08T13:37:17.062Z",
      "@metadata": {
        "beat": "filebeat",
        "type": "_doc",
        "version": "7.8.0"
      },
      "ecs": {
        "version": "1.5.0"
      },
      "host": {
        "name": "trafficgen-binaryload-g2kg9"
      },
      "agent": {
        "id": "69cfab42-7e54-4ad3-a841-8d1098d8b0af",
        "name": "trafficgen-binaryload-g2kg9",
        "type": "filebeat",
        "version": "7.8.0",
        "hostname": "trafficgen-binaryload-g2kg9",
        "ephemeral_id": "e4e5f316-bd0f-4273-8487-5595b761c6cb"
      },
      "log": {
        "file": {
          "path": "/test/roundtrip_latency_GTP.csv"
        },
        "offset": 92699
      },
      "message": "1612791435943,1612791436942,999,\"h:|i:1|m:0\",\"mainmessage\",0,858,0,0,0,0,0,0,0,205920,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0",
      "input": {
        "type": "log"
      }
    }

On the Elasticsearch side, I don't see any errors in the logs, but I also don't see any new indices created or any existing indices updated. I can't figure out what is going wrong.

I have tried manually creating an index on elastic called 'binaryload' but it doesn't get updated either.

What is the output from the _cat/indices?v API in Elasticsearch?

As noted, you defined a custom ingest pipeline, binaryload-statistics. I would take that out of the Filebeat config and see if the data is indexed (even though you tested it with _simulate). If so, then you might still have an issue or mismatch with the pipeline.
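It's also worth confirming the pipeline actually exists on the cluster under that exact name (a sketch; the host and credentials here are placeholders):

```shell
# Fetch the pipeline definition; a 404 or empty object means Filebeat's
# "pipeline" setting points at something that is not installed.
ES_URL="${ES_URL:-http://localhost:9200}"
if curl -s -o /dev/null --connect-timeout 2 "$ES_URL"; then
  curl -s -u "elastic:changeme" "$ES_URL/_ingest/pipeline/binaryload-statistics"
else
  echo "cluster not reachable at $ES_URL"
fi
```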

If the data is not there, then you might have a connection issue. Do you see the established-connection message in the Filebeat logs?

@warkolm : Thanks for your reply !

Nothing new is shown in the indices; the filebeat index listed doesn't contain anything from my Filebeat instance. That index belongs to other Filebeat instances running in the cluster:

[john@fedora ums-assembly]$ elastic-index-list-all 
Pod: monitoring-elasticsearch-es-default-0
health status index                              uuid                   pri rep docs.count docs.deleted store.size
green  open   .kibana-event-log-7.8.0-000001     6_07eg70QeSAQ4KWE8nY5A   1   0          1            0      5.3kb          5.3kb
green  open   .security-7                        HAr-wNx0Q5K5O6qzn1HbYg   1   0         37            0    109.2kb        109.2kb
green  open   .apm-custom-link                   a7hftk9MRR-CwIWyf6WSxA   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1             atiUzVCqSeKIO_UInARiPw   1   0          5            1     31.1kb         31.1kb
yellow open   filebeat-7.8.0-2021.02.08-000001   o7MnRsuLTfisBgLPWVyNFg   1   1      55683            0     29.4mb         29.4mb
green  open   .apm-agent-configuration           JrYoAHzKSdy5OYFBWiCEqw   1   0          0            0       208b           208b
yellow open   metricbeat-7.8.0-2021.02.08-000001 cCh40_V3QtqIgZN1LcT9YQ   1   1    2816931            0        2gb            2gb
green  open   .kibana_1                          T8PHYXqhTDycGhfK5dMZEQ   1   0       1635            0      966kb          966kb
yellow open   metricbeat-7.8.0-2021.02.09-000002 mpi_0hR9R3GpZYI6A_IkMA   1   1    1208718            0    911.9mb        911.9mb

@stephenb : Thanks for your reply!

I am ruling out connectivity problems because I had connectivity problems in an earlier configuration and you would see the failed connection attempts in the filebeat logs. Based on this log entry it looks like it reached elastic:

2021-02-09T07:58:38.677Z        DEBUG   [elasticsearch] elasticsearch/client.go:229     PublishEvents: 4 events have been published to elasticsearch in 187.494896ms.

If I log onto the filebeat pod, and do this it looks fine:

bash-4.2$ curl -u "elastic:1q2w3e4r5t6y7u" -X GET
{
  "name" : "monitoring-elasticsearch-es-default-0",
  "cluster_name" : "monitoring-elasticsearch",
  "cluster_uuid" : "OLOPgT24ReeEMQlUrpdo_A",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I took out the reference to the pipeline from the filebeat.yml, and it didn't behave any differently. The logs look the same as when the pipeline entry was still there, with the same Publish event. I do have this warning on the Elasticsearch side:

{"type": "server", "timestamp": "2021-02-09T07:50:36,600Z", "level": "WARN", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "monitoring-elasticsearch", "node.name": "monitoring-elasticsearch-es-default-0", "message": "Authentication to realm file1 failed - Password authentication failed for elastic", "cluster.uuid": "OLOPgT24ReeEMQlUrpdo_A", "node.id": "-PrZ2kyrR5KYo9lL7l_n2w"  }

I have seen other posts about this issue so I will follow them and see what's happening. I don't think this is related to the events I am publishing as it appears after the cluster starts up. I have been tailing the elastic log while running my filebeat and nothing comes out then.

I have done more checking on the pipeline. I created an index called binaryload-statistics and then indexed a document with the pipeline I created:

[john@fedora bin]$ curl -s -X PUT '' -H "Content-Type: application/json" -u "elastic:${ELASTIC_SECRET_UMS}" -d '@../etc/index_document.json' | jq
{
  "_index": "binaryload-statistics",
  "_type": "_doc",
  "_id": "1",
  "_version": 3,
  "result": "updated",
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "_seq_no": 2,
  "_primary_term": 1
}
[john@fedora bin]$ 

Then if I do a GET on the document:

[john@fedora bin]$ curl -s -X GET '' -u "elastic:${ELASTIC_SECRET_UMS}" | jq
{
  "_index": "binaryload-statistics",
  "_type": "_doc",
  "_id": "1",
  "_version": 3,
  "_seq_no": 2,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "MsgName": "01_CCRU",
    "SLA5us(P)": 0,
    "MsgSent(C)": 115,
    "BytesRecv(P)": 448,
    "MsgError(C)": 0,
    "StatisticId": "|m:0",
    "MsgRecv(C)": 112,
    "BytesSent(P)": 90160,
    "TxnMinLatUs(P)": 10902,
    "SLA2us(P)": 6,
    "TPS(C)": 112,
    "MsgRecv(P)": 112,
    "EndTimeMs": "2013-07-22T05:29:38.506Z",
    "MsgError(P)": 0,
    "SLA1us(P)": 0,
    "TxnAvgLatUs(C)": 20412,
    "BytesSent(C)": 90160,
    "MsgUnexp(C)": 0,
    "BytesRecv(C)": 448,
    "SLA4us(P)": 98,
    "TxnMaxLatUs(C)": 47153,
    "TxnAvgLatUs(P)": 20412,
    "message": "1374470977504,1374470978506,1002,\"|m:0\",\"01_CCRU\",115,115,112,112,0,0,0,0,90160,90160,448,448,10902,10902,47153,47153,20412,20412,112,112,0,6,8,98,0,0",
    "ElapsedTimeMs": 1002,
    "TPS(P)": 112,
    "StartTimeMs": "2013-07-22T05:29:37.504Z",
    "SLA3us(P)": 8,
    "MsgUnexp(P)": 0,
    "MsgSent(P)": 115,
    "TxnMinLatUs(C)": 10902,
    "TxnMaxLatUs(P)": 47153,
    "SLA6us(P)": 0
  }
}
Then if I do cat indices, it appears:

health status index                              uuid                   pri rep docs.count docs.deleted store.size
yellow open   binaryload-statistics              vXyctIOQSoGBBcwTDhBacw   1   1          1            0     13.1kb         13.1kb

All looks fine. What's not clear is how Elasticsearch knows which index to publish the document into; there doesn't seem to be any mention of the index in the Filebeat payload sent to Elasticsearch.
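For what it's worth, here is an illustrative sketch (not a capture of the actual traffic) of where the index name travels: Filebeat ships events with the _bulk API, and the target index goes in the action metadata line that precedes each document, never inside the event JSON itself. The bulk action line can also carry a per-document pipeline parameter:

```shell
# Shape of a _bulk request body (NDJSON): "_index" lives in the action
# line, not in the document source that follows it. The sample message
# line is a placeholder.
BULK='{ "index": { "_index": "binaryload", "pipeline": "binaryload-statistics" } }
{ "message": "1612791435943,1612791436942,999,0,858", "@timestamp": "2021-02-08T13:37:17.062Z" }'
printf '%s\n' "$BULK"
```

That is why the index name never shows up in the Publish event debug output: that output is the event body only.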

Ultimately I was able to figure out what was happening using Wireshark. I just had to add these fields to the filebeat.yml:

    setup.ilm.enabled: true
    setup.ilm.rollover_alias: "binaryload-statistics"
    setup.ilm.pattern: "{now/d}-000001"
    setup.template.name: "binaryload-statistics"
    setup.template.type: "index"
    setup.template.pattern: "binaryload-*"

Without those settings, the events either went to the default filebeat index, or Elasticsearch just appeared to swallow them without complaining.
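With those settings in place, you can verify that the write alias and its backing index were created (a sketch; the host and credential variable are placeholders):

```shell
# Check the ILM-managed index and its rollover alias.
ES_URL="${ES_URL:-http://localhost:9200}"
if curl -s -o /dev/null --connect-timeout 2 "$ES_URL"; then
  # The backing index should look like binaryload-statistics-<date>-000001:
  curl -s -u "elastic:${ELASTIC_SECRET_UMS}" "$ES_URL/_cat/indices/binaryload-*?v"
  # And the rollover alias binaryload-statistics should point at it:
  curl -s -u "elastic:${ELASTIC_SECRET_UMS}" "$ES_URL/_cat/aliases/binaryload-statistics?v"
else
  echo "cluster not reachable at $ES_URL"
fi
```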
