"Looks like you don't have any logging indices"

I'm trying to figure out what constitutes a "logging index". I'm getting the message "Looks like you don't have any logging indices" when testing the new "Logs" app in Kibana, but I can't find any documentation on what it considers a logging index. Is this feature "locked" to the built-in patterns (filebeat-*, logstash-*, etc.)?

Hi @trondhindenes,

The logging UI indeed defaults to the filebeat-* index pattern created by the default Filebeat configuration. Clicking the "Setup Instructions" button brings you to the "Add Data" UI, which walks you through setting up Filebeat. If you already have logs in other indices, you can change the index pattern and fields via the kibana.yml configuration file. GitHub issue #20428 shows the default configuration, any part of which you can override.

So if your logs were located in indices matching logs-*, you could add the following line to your kibana.yml to point the UI at them:

xpack.infra.sources.default.logAlias: 'logs-*'
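For context, here is a sketch of what the full default configuration that this setting overrides might look like. The setting names come from this thread, but the default values below are assumptions, so check GitHub issue #20428 for the authoritative list:

```yaml
# Hypothetical sketch of the defaults (values are assumptions; see
# GitHub issue #20428 for the actual default configuration):
xpack.infra:
  sources:
    default:
      logAlias: 'filebeat-*'      # index pattern the Logs UI reads from
      fields:
        timestamp: '@timestamp'   # field used for time-based sorting
        tiebreaker: '_doc'        # tiebreaker for identical timestamps
```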

The infrastructure UI is released as a Beta product at this point, so we would be very much interested to hear your feedback and wishlist.

I took the liberty of moving your question to the new #logs sub-forum.


Well for starters you could have a select box like Discover that allows you to select the index instead of having to set it in a config file :wink:

I'm actually a little confused by that. The default log index, the documentation, and all "Add Data" examples are based on Beats.

Doesn't it work with logstash inputs too?

Thanks for responding. This seems to imply that only one 'log pattern' is supported. We have at least 6 distinctly different index patterns that all contain log-type data.


Well for starters you could have a select box like Discover that allows you to select the index instead of having to set it in a config file

Yes, this is definitely planned.

The default log index, the documentation, and all "Add Data" examples are based on Beats. Doesn't it work with logstash inputs too?

It is currently designed to work with Filebeat out of the box. That said, there is some flexibility if you're willing to change the Kibana configuration file (there will be a UI for that soon as well).

  • The index pattern used to read log events can be changed via the xpack.infra.sources.default.logAlias setting, which can contain any index pattern supported by Elasticsearch.
  • The timestamp and sorting tiebreaker fields can be changed via the xpack.infra.sources.default.fields.timestamp and xpack.infra.sources.default.fields.tiebreaker settings, respectively.
  • The logic to read the message from the individual documents looks at several fields specific to filebeat modules first, but then falls back to the message and @message fields.

That means that regardless of the ingestion pipeline, as long as you can formulate an index pattern and structure the documents so that they contain timestamp and message/@message fields, the Log UI should pick them up, e.g.:

xpack.infra:
  sources:
    default:
      logAlias: 'logstash-*'
      fields:
        timestamp: 'my_timestamp_field'
        tiebreaker: 'line_number_field'
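To illustrate, a document in a logstash-* index matching the configuration above might look roughly like this (my_timestamp_field and line_number_field are just the placeholder names from the snippet, and the values are made up):

```json
{
  "my_timestamp_field": "2018-10-01T12:34:56.789Z",
  "line_number_field": 42,
  "message": "connection accepted from 10.0.0.5"
}
```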

It is currently possible to set the xpack.infra.sources.default.logAlias setting to a compound index pattern such as log-source-a-*,log-source-b-*. The ability to configure and choose between multiple separate log data sources is being worked on.
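So as a stopgap until multiple separate sources are configurable, the compound pattern mentioned above could be set like this (the index names are made up for illustration):

```yaml
xpack.infra.sources.default.logAlias: 'log-source-a-*,log-source-b-*'
```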

Nice, thanks for explaining. We'll test the current options, and see how far we get!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.