Logs in Discover update very rarely

Hello,

First of all, I should say that I'm very new to Elasticsearch/Kibana and I'm still really struggling to wrap my head around the whole thing.

I've got a weird issue: the data Elasticsearch receives from Filebeat doesn't show up in Kibana's Discover in a timely manner, and I'm not sure why.
I'm running a cluster of three master/data nodes in Docker (deployed with Ansible, if that's of any relevance). The nodes are called es01, es02 and es03, and all of them use the following settings:

    env:
      node.name: "es01"
      cluster.name: "es-docker-cluster"
      discovery.seed_hosts: "es02,es03"
      cluster.initial_master_nodes: "es01,es02,es03"
      bootstrap.memory_lock: "true"
      xpack.security.enabled: "true"
      xpack.security.transport.ssl.enabled: "true"
      xpack.security.transport.ssl.keystore.type: "PKCS12"
      xpack.security.transport.ssl.verification_mode: "certificate"
      xpack.security.transport.ssl.keystore.path: "certs/elastic-certificates.p12"
      xpack.security.transport.ssl.truststore.path: "certs/elastic-certificates.p12"
      xpack.security.transport.ssl.truststore.type: "PKCS12"
      ES_JAVA_OPTS: "-Xms2048M -Xmx2048M"

(node.name and discovery.seed_hosts are adjusted accordingly on each node)
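
The cluster itself seems to form fine; a quick way I double-check (a sketch, assuming the same host and credentials I pass to Filebeat below) is the cluster health API:

    curl -u elastic:password 'http://mydomain.com:9200/_cluster/health?pretty'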

This is my Filebeat YAML configuration:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "dms-logs"
processors:
  - decode_json_fields:
      fields: ["message"]
      target: "app"
      overwrite_keys: true
      add_error_key: true
logging.level: info
logging.to_syslog: false
logging.to_files: true
logging.to_stderr: true
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
            - module: nginx
              access:
                enabled: true
                containers:
                  stream: "stdout"
              error:
                enabled: true
                containers:
                  stream: "stderr"
        - condition:
            contains:
              docker.container.image: postgres
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
            - module: postgresql
        - condition:
            equals:
              docker.container.labels:
                eu.atenekom.application: "dms"
          config:
            - type: container
              paths:
                - '/var/lib/docker/containers/${data.docker.container.id}/*.log'
setup.kibana:
  host: "http://dev-logging.atene-webtools.eu"
setup.dashboards.enabled: true

But what I actually do is start an initial Filebeat container that sets things up, with the following options:

     - setup --index-management --dashboards
     - -E 'output.elasticsearch.hosts=[mydomain.com:9200]'
     - -E 'output.elasticsearch.username="elastic"'
     - -E 'output.elasticsearch.password="password"'

Then I remove it and start the final container with the following arguments:

     - filebeat
     - -E 'output.elasticsearch.hosts=[mydomain.com:9200]'
     - -E 'output.elasticsearch.username="elastic"'
     - -E 'output.elasticsearch.password="password"'
     - --modules system,nginx,mysql,postgresql

I can see the number of documents being updated from time to time when I query the indices (I'm interested in dms-logs):

health status index                      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   dms-logs-2020.07.11-000001 Z1nZeC8iSO-4wPRfpVOS3g   1   1    1225651            0    275.9mb        136.2mb
green  open   .security-7                rpvjdtGZSECIyTf6_uLuIA   1   1         46            4    155.7kb         77.8kb
green  open   .apm-custom-link           qk2VdJOFSiChtvFBU5U1wg   1   1          0            0       416b           208b
green  open   .kibana_task_manager_1     P1JVd40LRFu28S0SSDBVsg   1   1          5            6     64.5kb         32.5kb
green  open   .apm-agent-configuration   b2luv1AyTLmmruhPOuDA7g   1   1          0            0       416b           208b
green  open   .async-search              W3nNisiaQLy9W_i-6PlGZQ   1   1         24            0     43.3mb         21.6mb
green  open   .kibana_1                  BleTbo9-T-mX_emq_-1KPg   1   1       1813           56      1.6mb        871.4kb
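
For reference, that listing comes from the cat indices API (same hypothetical host and credentials as above):

    curl -u elastic:password 'http://mydomain.com:9200/_cat/indices?v'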

Unfortunately, I'm not seeing anything new when I go to Discover. For instance, today I'm seeing the latest logs from 12:42, even though it's already 17:02.

At other times the timing was right and I could see the latest logs, for some reason. I haven't identified a pattern, and I'm not sure where I should start debugging.
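
One way I can think of to tell whether the delay is in ingestion or only in Discover is to ask Elasticsearch directly for the newest document in the alias (a sketch, assuming the dms-logs alias and the credentials above):

    curl -u elastic:password 'http://mydomain.com:9200/dms-logs/_search?size=1&sort=@timestamp:desc&pretty'

If the @timestamp there is current while Discover lags behind, the problem would be on the Kibana side (time picker, timezone) rather than in the pipeline.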
Filebeat keeps sending the logs, as far as I can tell from its logs:

2020-07-12T15:03:04.559Z	INFO	[monitoring]	log/log.go:145	Non-zero metrics in the last 30s	{
  "monitoring": {
    "metrics": {
      "beat": {
        "cpu": {
          "system": {
            "ticks": 1210,
            "time": {
              "ms": 8
            }
          },
          "total": {
            "ticks": 3270,
            "time": {
              "ms": 11
            },
            "value": 3270
          },
          "user": {
            "ticks": 2060,
            "time": {
              "ms": 3
            }
          }
        },
        "handles": {
          "limit": {
            "hard": 1048576,
            "soft": 1048576
          },
          "open": 13
        },
        "info": {
          "ephemeral_id": "c7f11166-a12e-43d0-9fa9-bc8c8dc5f461",
          "uptime": {
            "ms": 2160034
          }
        },
        "memstats": {
          "gc_next": 11886416,
          "memory_alloc": 7684552,
          "memory_total": 225525040
        },
        "runtime": {
          "goroutines": 115
        }
      },
      "filebeat": {
        "harvester": {
          "open_files": 2,
          "running": 2
        }
      },
      "libbeat": {
        "config": {
          "module": {
            "running": 0
          }
        },
        "pipeline": {
          "clients": 12,
          "events": {
            "active": 0
          }
        }
      },
      "registrar": {
        "states": {
          "current": 2
        }
      },
      "system": {
        "load": {
          "1": 0.34,
          "15": 0.73,
          "5": 0.51,
          "norm": {
            "1": 0.1133,
            "15": 0.2433,
            "5": 0.17
          }
        }
      }
    }
  }
}

I also see the following in the logs of only one of the nodes, es01 (es02 is currently the master):

{
  "type": "server",
  "timestamp": "2020-07-12T14:50:33,919Z",
  "level": "INFO",
  "component": "o.e.x.s.a.AuthenticationService",
  "cluster.name": "es-docker-cluster",
  "node.name": "es01",
  "message": "Authentication of [kibana] was terminated by realm [reserved] - failed to authenticate user [kibana]",
  "cluster.uuid": "u9v0J9RxRwaNZwJh4OJJAQ",
  "node.id": "gV9jkkleQTaB7_HhDekoLA"
}

But I'm guessing this was just a transient phase: I can connect to Kibana and run all these queries, so I'm not sure it's an actual error, and I would have expected it to repeat if the problem were permanent.
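
If I wanted to rule it out, I suppose I could test the kibana user's credentials directly against the cluster (the authenticate API is standard; the password here is a placeholder):

    curl -u kibana:kibana_password 'http://mydomain.com:9200/_security/_authenticate?pretty'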

Hi!

First of all, we need to be clear about which logs Filebeat is supposed to collect. In Filebeat's configuration I see that autodiscover is enabled and there is no other input. Is this what we want? If so, we need to make sure that the autodiscover config is correct and that the path to the Docker logs is right. Note that if Filebeat is running inside a container, it has to share the proper volumes with the host in order to have access to all the containers' log files, as sketched below.
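
For instance, something along these lines in the Filebeat container definition (a sketch; paths may need adjusting to your deployment):

    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock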

Then I see that you run --modules system,nginx,mysql,postgresql, which enables the modules mentioned. With this approach the modules use their default configuration, which might not be suitable in your case/environment (are the log file paths valid?). In most cases you will need to predefine the modules' configs and load them inside Filebeat's container, along the lines of the sketch below.
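
For example, a predefined modules.d/postgresql.yml could look like this (a sketch only; var.paths is an illustration and has to match where the logs actually live):

    - module: postgresql
      log:
        enabled: true
        var.paths: ["/var/log/postgresql/*.log*"]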

C.

Hello!

Thank you for the prompt answer.
I don't understand exactly what you mean by 'no other input'. The 'providers:' section is the input, and that is how autodiscover is configured, if that's what you mean. That part is pretty straightforward.

Yes, these are the volumes/bind mounts set on the Filebeat container:

    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/log:/var/log:ro
      - srv:/srv
      - /var/run/docker.sock:/var/run/docker.sock

Yes, I didn't know exactly how enabling the nginx, mysql or postgresql modules works. I thought it could also somehow be related to autodiscover, but that doesn't make sense if you state the module inside the autodiscover section, I suppose, right? So --modules mysql is going to search under /var/lib/mysql and that's that, if I'm correct.

I just enabled them to force a 'reaction' somehow and see what happens.
As far as the system module is concerned: as you can see above, /var/log is already mounted, so Filebeat does read the logs there. But, as I said, Discover shows the logs at random intervals, with great latency; at other times it happens to show them straight away.

I'm not really greedy, to be honest. I'd just like to see that the system module works, that Filebeat can read the auth/syslog logs, and that it can output some sensible data :slight_smile:
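
In the meantime I'll probably start with Filebeat's own sanity checks from inside the container (standard Filebeat CLI; the output settings are the ones shown earlier):

    filebeat test config   # validate the configuration file
    filebeat test output   # check connectivity and auth to Elasticsearch
    filebeat modules list  # confirm which modules are actually enabled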
