Kibana - Stack Monitoring - No structured logs found

Sure does.
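
Here it is, pulled via the Dev Tools console (a curl against the same _ingest/pipeline endpoint works just as well):

GET _ingest/pipeline/filebeat-7.8.0-elasticsearch-server-pipeline-json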

{
  "filebeat-7.8.0-elasticsearch-server-pipeline-json" : {
    "description" : "Pipeline for parsing the Elasticsearch server log file in JSON format.",
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ],
    "processors" : [
      {
        "json" : {
          "field" : "message",
          "target_field" : "elasticsearch.server"
        }
      },
      {
        "drop" : {
          "if" : "ctx.elasticsearch.server.type != 'server'"
        }
      },
      {
        "remove" : {
          "field" : "elasticsearch.server.type"
        }
      },
      {
        "dot_expander" : {
          "field" : "service.name",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.service.name",
          "target_field" : "service.name",
          "ignore_missing" : true
        }
      },
      {
        "rename" : {
          "ignore_missing" : true,
          "field" : "elasticsearch.server.component",
          "target_field" : "elasticsearch.component"
        }
      },
      {
        "dot_expander" : {
          "field" : "cluster.name",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "target_field" : "elasticsearch.cluster.name",
          "field" : "elasticsearch.server.cluster.name"
        }
      },
      {
        "dot_expander" : {
          "field" : "node.name",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.node.name",
          "target_field" : "elasticsearch.node.name"
        }
      },
      {
        "dot_expander" : {
          "field" : "cluster.uuid",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "ignore_missing" : true,
          "field" : "elasticsearch.server.cluster.uuid",
          "target_field" : "elasticsearch.cluster.uuid"
        }
      },
      {
        "dot_expander" : {
          "field" : "node.id",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.node.id",
          "target_field" : "elasticsearch.node.id",
          "ignore_missing" : true
        }
      },
      {
        "rename" : {
          "target_field" : "log.level",
          "ignore_missing" : true,
          "field" : "elasticsearch.server.level"
        }
      },
      {
        "dot_expander" : {
          "field" : "log.level",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "target_field" : "log.level",
          "ignore_missing" : true,
          "field" : "elasticsearch.server.log.level"
        }
      },
      {
        "dot_expander" : {
          "field" : "log.logger",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.log.logger",
          "target_field" : "log.logger",
          "ignore_missing" : true
        }
      },
      {
        "dot_expander" : {
          "field" : "process.thread.name",
          "path" : "elasticsearch.server"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.process.thread.name",
          "target_field" : "process.thread.name",
          "ignore_missing" : true
        }
      },
      {
        "grok" : {
          "patterns" : [
            "%{GC_ALL}",
            "%{GC_YOUNG}",
            "((\\[%{INDEXNAME:elasticsearch.index.name}\\]|\\[%{INDEXNAME:elasticsearch.index.name}\\/%{DATA:elasticsearch.index.id}\\]))?%{SPACE}%{GREEDYMULTILINE:message}"
          ],
          "field" : "elasticsearch.server.message",
          "pattern_definitions" : {
            "GC_YOUNG" : """\[gc\]\[young\]\[%{NUMBER:elasticsearch.server.gc.young.one}\]\[%{NUMBER:elasticsearch.server.gc.young.two}\]%{SPACE}%{GREEDYMULTILINE:message}""",
            "GREEDYMULTILINE" : """(.|
)*""",
            "INDEXNAME" : "[a-zA-Z0-9_.-]*",
            "GC_ALL" : """\[gc\]\[%{NUMBER:elasticsearch.server.gc.overhead_seq}\] overhead, spent \[%{NUMBER:elasticsearch.server.gc.collection_duration.time:float}%{DATA:elasticsearch.server.gc.collection_duration.unit}\] collecting in the last \[%{NUMBER:elasticsearch.server.gc.observation_duration.time:float}%{DATA:elasticsearch.server.gc.observation_duration.unit}\]"""
          }
        }
      },
      {
        "remove" : {
          "field" : "elasticsearch.server.message"
        }
      },
      {
        "rename" : {
          "field" : "elasticsearch.server.@timestamp",
          "target_field" : "@timestamp",
          "ignore_missing" : true
        }
      },
      {
        "rename" : {
          "target_field" : "@timestamp",
          "ignore_missing" : true,
          "field" : "elasticsearch.server.timestamp"
        }
      },
      {
        "date" : {
          "ignore_failure" : true,
          "field" : "@timestamp",
          "target_field" : "@timestamp",
          "formats" : [
            "ISO8601"
          ]
        }
      }
    ]
  }
}

Hmm.

Try running Filebeat in debug mode (-d "processors") and verify that you see the appropriate pipeline being applied.
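
For example, when running the binary directly (-e just sends Filebeat's own log output to stderr; package installs invoke it differently):

./filebeat -e -d "processors"

You should then see events like: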

2020-08-12T09:42:05.671-0400	DEBUG	[processors]	processing/processors.go:187	Publish event: {
  "@timestamp": "2020-08-12T13:42:00.670Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "8.0.0",
    "pipeline": "filebeat-8.0.0-elasticsearch-server-pipeline"
  },

Since you are ingesting the .json logs, it should hit this part of that pipeline:

      {
        "pipeline" : {
          "name" : "filebeat-8.0.0-elasticsearch-server-pipeline-json",
          "if" : "ctx.first_char == '{'"
        }
      },

which should trigger the JSON pipeline.
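
For background: the parent pipeline derives first_char from the first character of the raw message line with a small script processor, conceptually along these lines (a sketch, not the literal processor from the module):

      {
        "script" : {
          "lang" : "painless",
          "source" : "ctx.first_char = ctx.message.charAt(0)"
        }
      }

Lines that don't start with { fall through to the plaintext variant of the pipeline instead.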

Let's verify that all of these components are in place before debugging further.
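
One quick check is to run a sample line through the json pipeline with the simulate API (the field values below are made up; cluster.name and node.name are included because those renames don't set ignore_missing):

POST _ingest/pipeline/filebeat-7.8.0-elasticsearch-server-pipeline-json/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "{\"type\": \"server\", \"timestamp\": \"2020-08-12T13:42:00.670Z\", \"cluster.name\": \"es-test\", \"node.name\": \"node-1\", \"level\": \"INFO\", \"message\": \"some log line\"}"
      }
    }
  ]
}

If error.message shows up in the result, one of the processors failed and the on_failure handler fired.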

Thanks

Just an observation: I had the same issue, with pipelines and everything in place, and it turned out that there were simply no log entries in the selected time range (this was on 7.8.1, by the way).
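
A quick way to check whether anything exists in the range at all is to count documents directly (filebeat-* is the default index pattern the Stack Monitoring logs UI reads from; adjust if yours differs):

GET filebeat-*/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-24h"
      }
    }
  }
}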

Previously, Kibana would prompt you to check the selected time range when no log entries were found, but that hint seems to be gone now, replaced by the "no structured logs found" message.
