Kibana states "No Log Data Found" on ECK

I am running Elasticsearch (ECK) on Oracle Kubernetes Engine. The version in use is 7.14.1. The stack consists of: Elasticsearch, Kibana, Filebeat+Heartbeat+Metricbeat.
Each of these services runs properly and was configured by following Elastic Cloud on Kubernetes [2.10] | Elastic.

Currently we can search and inspect logs in Kibana and access the dashboards without problems. However, when opening Stack Monitoring we get the following:
[Screenshot from 2022-08-14 13-57-12: the "No Log Data Found" message in Stack Monitoring]

When inspecting the deployment we see healthy (green) pods:
[Screenshot from 2022-08-14 14-00-34: healthy (green) pods]

While looking for similar issues online, I found the following thread, which did not turn out to be useful in our case:

https://www.reddit.com/r/elasticsearch/comments/r0qjjc/no_log_data_in_stack_monitoringhelup/

Do you have any other suggestions? The log indices seem to be written correctly and we can inspect them easily in Kibana Dev Tools. How can I connect them to Stack Monitoring? Thanks!

Is filebeat running somewhere as well (the screenshot only shows heartbeat and metricbeat)? You'll need that in order to ingest the ES logs into the expected filebeat-* index pattern.

Checking which indices are available (for example via _cat/indices) could be a useful next step, as would checking the filebeat process config and logs for any errors.
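
For example, from Kibana Dev Tools something along these lines lists the Filebeat indices (a minimal check, assuming the default filebeat-* index pattern):

GET _cat/indices/filebeat-*?v&s=index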

Sorry for not having included that. Filebeat is working fine as well:

Though it does not show up when running kubectl get beat. Could that be one of the issues?

Here is the result of calling _cat/indices:

Let me know if you need more data.

Are there documents in filebeat-* with service.type: elasticsearch? That's what the query in the Stack Monitoring server code looks for: kibana/get_logs.ts at main · elastic/kibana · GitHub

If you can send the filebeat configuration, that might help.

That field should get set when ingesting using the filebeat elasticsearch module.
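
For reference, using the module directly in a plain Filebeat setup would look roughly like this (a sketch, not your ECK config; the log path is an assumption and depends on where the Elasticsearch JSON logs end up):

filebeat.modules:
  - module: elasticsearch
    server:
      enabled: true
      # Assumed path; point this at the JSON server logs of your ES nodes
      var.paths:
        - /usr/share/elasticsearch/logs/*_server.json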

I performed this search:

GET filebeat-7.14.1/_search
{
  "query": {
    "term": {
      "service.type": {
        "value": "elasticsearch"
      }
    }
  }
}

I get many documents in reply; here is a screenshot:

Here is the whole filebeat configuration, taken basically from here: Quickstart | Elastic Cloud on Kubernetes [2.3] | Elastic

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: <META>
spec:
  type: filebeat
  version: 7.14.1
  elasticsearchRef:
    name: <ES_REF>
  kibanaRef:
    name: <KIBANA_REF>
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              #privileged: true
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

Thank you again for your effort in helping!

Thanks @papers_hive! I'm most familiar with using the elasticsearch filebeat module directly and I don't have much experience with hinted k8s configuration.

Do you know if the containers in question are hinted in a way that would activate that module for the elasticsearch container logs?

Do the docs that you found also contain an elasticsearch.cluster.uuid field?

It seems like those are the only two fields required by that getLogs function to show logs in the UI.
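
On the hinting question: with hints-based autodiscover the module is normally activated through pod annotations, so the Elasticsearch pods would need something along these lines (a sketch against the ECK Elasticsearch resource; name, node set and count are placeholders):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: <ES_REF>
spec:
  version: 7.14.1
  nodeSets:
    - name: default
      count: 3
      podTemplate:
        metadata:
          annotations:
            # Tells hints-based autodiscover to parse these container logs
            # with the Filebeat elasticsearch module
            co.elastic.logs/module: elasticsearch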

By rerunning and inspecting the previous query, I saw that every record has the cluster.uuid in this form:

[Screenshot from 2022-08-23 09-31-43: documents showing elasticsearch.cluster.uuid]

I do not know exactly how the containers ship the logs to Elasticsearch. Since both Heartbeat and Metricbeat worked automatically, I assumed the same for Filebeat. I can see that it generates log files and that it can see the Elasticsearch cluster and its UUID, yet somehow these logs are not shipped to the place where Kibana expects to find them.

And I am a bit lost.

It's hard to tell from just a screenshot of the doc, but I checked the query on an 8.3 deployment and it looks like it's using the same fields I mentioned above.

If the query below works, then the UI should be able to show you the same logs, as long as you're viewing the same cluster and time range as the documents.

POST filebeat-*/_search
{
  "fields": [
    "@timestamp",
    "service.type",
    "elasticsearch.cluster.uuid",
    "message"
  ],
  "sort": {
    "@timestamp": {
      "order": "desc"
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "service.type": "elasticsearch"
          }
        },
        {
          "exists": {
            "field": "elasticsearch.cluster.uuid"
          }
        }
      ]
    }
  },
  "_source": false
}

The "we are unable to diagnose why" is curious too.

It means this whole section reaches the end without determining a cause either: kibana/reason.js at 7.14 · elastic/kibana · GitHub (the checks are in kibana/detect_reason.js at 7.14 · elastic/kibana · GitHub)

I wonder if there might be info in the kibana logs that would point to a cause.

Hi! Sorry for the delayed reply. Got a bit behind in other stuff.

This is the reply when running that query on the cluster:

No problem and thanks for the response! That sure looks like it should match the query.

And you're definitely viewing the same cluster uuid shown in elasticsearch.cluster.uuid, right?

Have you checked your kibana logs or browser console for any errors?

I wonder if there might be some permissions problem causing trouble. Technically the query run by the UI is probably POST *:filebeat-*,filebeat-*/_search, so maybe try that to see if there might be some CCS execution problem as well.
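
For example, reusing the filters from the query above against that pattern (a sketch, same fields as before):

POST *:filebeat-*,filebeat-*/_search
{
  "size": 1,
  "sort": { "@timestamp": { "order": "desc" } },
  "query": {
    "bool": {
      "filter": [
        { "term": { "service.type": "elasticsearch" } },
        { "exists": { "field": "elasticsearch.cluster.uuid" } }
      ]
    }
  }
}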

Yes, we are viewing the same cluster uuid shown in elasticsearch.cluster.uuid.

Where should I run that query? On the Kibana dashboard?

Same place you did the last one. I use kibana's dev tools usually.

I ran that query in Dev Tools and here is a screenshot of the reply:

Much as before, it sees the index and there are >10k items in it, all with a UUID.

Yeah, that looks reasonable too. If you can upgrade to at least 7.15, you could try setting monitoring.ui.debug_mode: true to see what queries the UI is executing exactly.
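
On ECK that setting goes into the Kibana resource's config section, roughly like this (a sketch; name and version are placeholders, and monitoring.ui.debug_mode only exists from 7.15 onwards):

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: <KIBANA_REF>
spec:
  version: 7.15.2
  count: 1
  elasticsearchRef:
    name: <ES_REF>
  config:
    # Log the exact queries the Stack Monitoring UI executes (7.15+)
    monitoring.ui.debug_mode: true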

Or on 7.17 you could enable APM traces as shown on 3 tips to identify Kibana optimizing potential | Elastic Blog - that would capture the ES queries into an APM trace.

It's also possible we're looking at a 7.14 bug here, but I can't recall anything like this offhand.
