No text field to input a default index pattern

Hello,

I am running my ELK stack on Ubuntu Server 18.04 LTS (Bionic Beaver). The three ELK services and nginx are all running. When I access the Kibana Management page, I am unable to type in an index pattern; there is no text box in which to enter my default index pattern. Below is a picture. Any help would be greatly appreciated.

It says that it can't find any data in Elasticsearch. Clicking the "Learn how" link should provide information on how to index data into Elasticsearch. Once you have data in Elasticsearch you can use Kibana.
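If you want to confirm that from the command line, the _cat indices API lists whatever indices exist. Assuming Elasticsearch is on the same box and listening on its default port 9200, something like:

curl -XGET 'http://localhost:9200/_cat/indices?v'

An empty response (or only internal .kibana indices) means nothing has been indexed yet.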

I had set up WinLogBeats on my remote PC, but it was not working, i.e., I was unable to create an index pattern in Kibana.

Looking at the server, I found that even though the file "02-beats-input.conf" defines the port as 5044, the server does not seem to be listening on that port.
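(A quick way to check is something like the following; in my case nothing showed up as listening on 5044.)

sudo ss -tlnp | grep 5044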

Here is what I hope are some relevant log entries...

[2018-06-14T00:03:30,858][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-06-14T00:03:31,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-14T00:03:31,352][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at l$
[2018-06-14T00:03:59,922][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-06-14T00:03:59,938][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}

Index patterns in Kibana are based on the indices within Elasticsearch. The fact that there are none in ES means that WinLogBeats is not pushing the data into your Elasticsearch cluster. If there is no data in Elasticsearch, there is not much we can do in Kibana.

Looking at the logs, I am wondering if you have a syntax error in your configuration file. You might want to post on the Logstash group for some assistance.
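Logstash can also validate a config without starting the pipeline. Assuming the standard deb-package paths, something like:

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

should either report the configuration as OK or point at the line it is choking on.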

My Beats 02-inputs.conf file...

input {
  beats {
    port => 5044
  }
}

My logstash.yml config file....

#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
path.logs: /var/log/logstash

All of that text is commented out in the yml file; only the path.logs line at the end is active.

In your beats config, have you provided the IP/hostname of the Logstash server? Can you verify that Logstash is receiving the data? Can you look at the beats logs? If it's having an issue connecting to the remote host, it should complain.
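For reference, the output section of winlogbeat.yml needs to point at the Logstash host and port, roughly like this (with 10.1.0.248 standing in for your Logstash server's address):

output.logstash:
  hosts: ["10.1.0.248:5044"]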

In the beats input config that I pasted at the top of the other post, I had also tried it like below, with no luck.

input {
  beats {
    host => "10.1.0.248"
    port => 5044
  }
}

Where are the beats logs on the server?

Here is the documentation for getting started with Winlogbeat.

https://www.elastic.co/guide/en/beats/winlogbeat/current/winlogbeat-getting-started.html
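From that guide, a minimal winlogbeat.yml only needs the event logs to collect plus an output; the event-log section looks roughly like:

winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System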

Is there any processing you require from Logstash? One thought is to temporarily ship directly to Elasticsearch, to simplify the architecture until you have things working.
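That would mean commenting out output.logstash in winlogbeat.yml and enabling the Elasticsearch output instead, roughly (again substituting your server's address):

output.elasticsearch:
  hosts: ["10.1.0.248:9200"]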

I ended up having to run the upgrade commands to get the latest releases, and once I did that things just fell into place. Thanks for your assistance, Tyler.
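(For anyone who hits the same thing: on Ubuntu, with the Elastic apt repository already configured, the upgrade boils down to roughly these commands.)

sudo apt-get update
# package names assume the stock deb installs of the stack
sudo apt-get install --only-upgrade elasticsearch logstash kibana
sudo systemctl restart elasticsearch logstash kibana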
