Filebeat checking for x-pack despite monitoring disabled

Filebeat is failing to connect to Elasticsearch in dev only, despite having the same config as prod. The ping to the ES host returns a 200, but Filebeat then attempts to connect to the x-pack endpoint, even though AWS ES does not have x-pack. I tried disabling monitoring and it still tries to connect to x-pack. The config is the same as prod (other than the index name), and prod never tries to reach the x-pack endpoint.

Here is the config:

filebeat.registry_flush: 10s
filebeat.inputs:
- type: docker
  exclude_lines: ['.*health.*|.*HealthChecker.*']
  multiline:
    pattern: '^(\[20|20)\d\d[- /.](0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])'
    negate: true
    match: after
  ignore_older: 48h
  clean_inactive: 72h
  scan_frequency: 1m
  combine_partial: true
  processors:
    - add_docker_metadata: ~
  containers:
    path: "/var/lib/docker/containers"
    stream: "all"
    ids:
      - "*"  

  enabled: true
- type: log
  multiline:
    pattern: '^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+([0-3][0-9]|[1-9])'
    negate: true
    match: after
  ignore_older: 48h
  clean_inactive: 72h
  scan_frequency: 1m
  combine_partial: true
  paths: 
    - /var/log/secure

  enabled: true  

#==================== Elasticsearch template setting ==========================

setup.template:
  name: 'filebeat'
  pattern: 'filebeat-dev-*'
  fields: 'fields.yml'
setup.template.settings:
  index.number_of_shards: 5
  index.codec: best_compression
  index.mapper.dynamic: false
  _source: 
    excludes: [
      "beat.version",
      "docker.container.labels.com.docker.compose.config-hash",
      "docker.container.labels.com.docker.compose.container-number",
      "docker.container.labels.com.docker.compose.oneoff",
      "docker.container.labels.com.docker.compose.version",
      "docker.container.labels.license",
      "docker.container.labels.maintainer",
      "docker.container.labels.site",
      "docker.container.labels.vendor",
      "docker.container.labels.org.label-schema.schema-version",
      "docker.container.labels.org.label-schema.url",
      "docker.container.labels.org.label-schema.vcs-url",
      "docker.container.labels.org.label-schema.vendor",
      "docker.container.labels.org.label-schema.version",
      "docker.container.labels.io.confluent.docker.build.number",
      "docker.container.labels.io.confluent.docker.git.id",
      "docker.container.labels.io.confluent.docker.value",
      "docker.container.labels.io.k8s.description",
      "docker.container.labels.io.k8s.display-name",
      "docker.container.labels.io.openshift.expose-services",
      "docker.container.labels.io.openshift.s2i.assemble-input-files",
      "docker.container.labels.io.openshift.s2i.build.commit.author",
      "docker.container.labels.io.openshift.s2i.build.commit.date",
      "docker.container.labels.io.openshift.s2i.build.commit.id",
      "docker.container.labels.io.openshift.s2i.build.commit.message",
      "docker.container.labels.io.openshift.s2i.build.commit.ref",
      "docker.container.labels.io.openshift.s2i.build.image",
      "docker.container.labels.io.openshift.s2i.build.source-context-dir",
      "docker.container.labels.io.openshift.s2i.build.source-location",
      "docker.container.labels.io.openshift.s2i.scripts-url",
      "docker.container.labels.io.openshift.tags",
      "docker.container.labels.license",
      "docker.container.labels.maintainer",
      "docker.container.labels.name",
      "docker.container.labels.org.label-schema.build-date",
      "docker.container.labels.org.label-schema.license",
      "docker.container.labels.org.label-schema.name",
      "docker.container.labels.org.label-schema.schema-version",
      "docker.container.labels.org.label-schema.url",
      "docker.container.labels.org.label-schema.vcs-url",
      "docker.container.labels.org.label-schema.vendor",
      "docker.container.labels.org.label-schema.version",
      "docker.container.labels.quay.expires-after",
      "docker.container.labels.vendor",
      "host.architecture",
      "host.containerized",
      "host.id",
      "host.os.codename",
      "host.os.family",
      "host.os.platform",
      "host.os.version",
      "host.os.name",
      "offset",
      "log.flags",
      "log.file.path"
    ]

#================================ Outputs =====================================
#setup.dashboards.enabled: true
setup.kibana.host: "https://elasticsearch_host_redacted.us-east-1.es.amazonaws.com:443/_plugin/kibana"

xpack.monitoring.enabled: false
output.elasticsearch:

{% if filebeat_output_elasticsearch_enabled %}
  ### Elasticsearch as output
    # Array of hosts to connect to.
    hosts: ["https://elasticsearch_host_redacted.us-east-1.es.amazonaws.com:443"]

    # Number of workers per Elasticsearch host.
    #worker: 1

    # Optional index name. The default is "filebeat" and generates
    #[filebeat-]YYYY.MM.DD keys.
    index: "filebeat-dev-%{+yyyy.MM.dd}"

{% endif %}

#================================ Logging =====================================

{% if filebeat_enable_logging %}
logging.level: {{ filebeat_log_level }}

  # Enable file rotation with default configuration

logging.files:
  path: {{ filebeat_log_dir }}
  name: {{ filebeat_log_filename }}
  keepfiles: 7
  permissions: 0644
{% endif %}

Could you please share the log lines from this attempt? I would like to see which endpoints it is trying to reach and at which point in the lifecycle.

Sure thing, here are the log lines that are recurring:

ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(https://es.us-east-1.es.amazonaws.com:443/)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: unauthorized access, could not connect to the xpack endpoint, verify your credentials

2019-04-02T13:39:05.920Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(https://es.es.amazonaws.com:443/)) with 7242 reconnect attempt(s)

2019-04-02T13:39:05.920Z DEBUG [elasticsearch] elasticsearch/client.go:715 ES Ping(url=https://es.us-east-1.es.amazonaws.com:443/)

2019-04-02T13:39:05.920Z INFO [publish] pipeline/retry.go:189 retryer: send unwait-signal to consumer

2019-04-02T13:39:05.920Z INFO [publish] pipeline/retry.go:191 done

2019-04-02T13:39:05.920Z INFO [publish] pipeline/retry.go:166 retryer: send wait signal to consumer

2019-04-02T13:39:05.920Z INFO [publish] pipeline/retry.go:168 done

2019-04-02T13:39:05.935Z DEBUG [elasticsearch] elasticsearch/client.go:738 Ping status code: 200

2019-04-02T13:39:05.935Z INFO elasticsearch/client.go:739 Attempting to connect to Elasticsearch version 6.2.3

2019-04-02T13:39:05.935Z DEBUG [elasticsearch] elasticsearch/client.go:757 GET https://es.us-east-1.es.amazonaws.com:443/_xpack?human=false <nil>
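
For reference, the last DEBUG line above is the request that fails: after the ping succeeds, Filebeat issues a GET against /_xpack to read the cluster license, and that is the call the error at the top complains about. Here is a minimal sketch for reproducing both requests with curl; the hostname is only a placeholder for the redacted AWS ES endpoint:

# Placeholder standing in for the redacted AWS ES endpoint.
ES_HOST="https://your-domain.us-east-1.es.amazonaws.com:443"

# The ping Filebeat performs first; returns 200 as in the logs above.
curl -s -o /dev/null -w "%{http_code}\n" "$ES_HOST/"

# The license check performed on connect; on this AWS ES domain it fails
# (the error above reports unauthorized access), which is what marks the
# connection as failed.
curl -s -w "\n%{http_code}\n" "$ES_HOST/_xpack?human=false"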

There is another thread with the same problem. It seems related to something added in v6.7 to check the license.

I solved it by manually downloading and installing Filebeat OSS instead of using the package from the APT repository.
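
If anyone wants to do the same, here is a minimal sketch of the manual install; the version and architecture below are assumptions, so substitute whatever matches your environment:

# Assumed version for illustration; use the release that matches your setup.
VERSION=6.7.1

# Debian/Ubuntu
curl -L -O "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-${VERSION}-amd64.deb"
sudo dpkg -i "filebeat-oss-${VERSION}-amd64.deb"

# RHEL/CentOS
curl -L -O "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-${VERSION}-x86_64.rpm"
sudo rpm -vi "filebeat-oss-${VERSION}-x86_64.rpm"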


I'm using the yum repo, but I'm confused as to why I have the exact same installation on my prod machines and am not experiencing this problem there.

Yep, it's because there is somehow a different Filebeat version in prod than in dev. I will try to downgrade dev to the same version as prod. Thanks!
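
In case it helps someone else, a quick sketch for confirming the mismatch and pinning dev back to the prod version (the 6.6.2 below is a placeholder; use whatever version prod actually reports):

# Compare the installed package on a prod host and a dev host.
rpm -q filebeat
filebeat version

# Downgrade dev to the prod version (placeholder shown), then lock it so a
# later 'yum update' does not pull 6.7+ back in.
sudo yum downgrade filebeat-6.6.2
sudo yum versionlock add filebeat   # needs the yum versionlock plugin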

Yeah, it started happening in version 6.7. I think it's a licensing control they've added.
