Unable to create a new index pattern / indices not loading

Problem: When trying to create a new index pattern, I get the error "The index pattern you've entered does not match any indices". The real problem is that my indices will not load from my config / schema.

Index Details: This index is associated with a config file and a schema, located at /etc/logstash/conf.d/rjdns.conf and /etc/logstash/rjdns.json respectively.

Troubleshooting:

  • Tried checking Stack Management > Index Management; the index does not show. Tried "Reload indices"; nothing.
  • Tried to create the index pattern under "Index Management" (hence the error above); the "next step" button is greyed out when entering "rjdns*".
  • Checked syslog. I keep seeing the error below, but I am not sure whether it relates to my problem or to some other file; there is no context for what is causing it. Any way I can tell?
[2020-12-23T15:34:52,994][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"if\", [A-Za-z0-9_-], '\"', \"'\", \"}\" at line 6, column 1 (byte 41) after input {\n  beats {\n    port => 5044\n  }\n\n", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]
  • Tried running the following commands:
    • /usr/share/logstash/bin/logstash --config.test_and_exit -f conf.d/rjdns.conf
      After running, I get "Configuration OK".

    • /usr/share/logstash/bin/logstash --debug -f rjflow-schema.json
      After running, I get the following (only including errors and warnings):

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path 
[WARN ] 2020-12-23 15:32:09.959 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[ERROR] 2020-12-23 15:32:10.374 [Agent thread] sourceloader - No configuration found in the configured sources.

Here is the config file:

input {
  file {
    path => "/etc/logstash/redjack-data/sensor-dnstap/*.json"
    mode => "read"
    codec => "json"
    exit_after_read => true
  }
}
output {
  elasticsearch {
    hosts => ["10.10.10.10:9200"]
    index => "rjdns-%{+yyyy.MM.dd}"
    manage_template => true
    template => "/etc/logstash/rjdns-schema.json"
    template_name => "rjdns_template"
  }
}

Really not sure how to troubleshoot the problem at this point. Why won't Elastic load my config / schema as an index? Is this a permissions issue? Is there any way to check a log or debug output that would tell me why the index won't be recognized?
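One direct check (a sketch; host and port are taken from the output section of the config above) is to ask Elasticsearch itself whether any rjdns indices exist:

# list all indices and filter for the one the pipeline should create
curl -s 'http://10.10.10.10:9200/_cat/indices?v' | grep rjdns

If nothing comes back, the documents never reached Elasticsearch at all, which points at Logstash rather than at Kibana's index pattern screen.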

That error indicates that logstash is not executing the pipeline, so that logstash instance is not going to index anything. However, the configuration in the error message has a beats input, not a file input. Could it be that someone else is running logstash, or that you are not running the configuration file you think you are?

How can I tell if someone else is running logstash?
How can I tell whether I am running the configuration file I think I am?

ExecStart for the logstash.service unit shows the following:
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
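For the first question, something along these lines should show any running Logstash processes and the flags they were started with (a sketch, assuming a systemd host):

# list logstash processes; the [l] trick keeps grep from matching itself
ps -ef | grep -i '[l]ogstash'
# confirm whether the systemd service is active and see its full command line
systemctl status logstash

If more than one process shows up, or the flags differ from the ExecStart above, something else is running Logstash with a different configuration.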

Add --log.level debug --config.debug to the command line. Look for a message like

[logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/home/user/test.conf"}

I ran the following command:
/usr/share/logstash/bin/logstash --log.level debug --config.debug

I see this error

ERROR: Failed to read pipelines yaml file. Location: /usr/share/logstash/config/pipelines.yml
usage:
  bin/logstash -f CONFIG_PATH [-t] [-r] [] [-w COUNT] [-l LOG]
  bin/logstash --modules MODULE_NAME [-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"] [-t] [-w COUNT] [-l LOG]
  bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG]
  bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash -V [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash --help
[ERROR] 2020-12-23 17:39:04.006 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Am I missing a setting somewhere? I did a locate on the pipelines yaml and found it at /etc/logstash/pipelines.yml.

If your service runs with "--path.settings" "/etc/logstash" you need that on the command line too.
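Putting that together with the earlier flags (all taken from this thread), the full invocation would be:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level debug --config.debug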

I added that parameter and now see a lot of output.
I see this line, along with ones for the other config files:
[2020-12-23T18:05:33,840][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/rjdns.conf"}

Further down it prints the actual configs being read to stdout, so I assume it's seeing the config in question, plus the others.

At the end of the output I still see the [ERROR] about the failed pipeline action that you mentioned above.
Again, is this error what is preventing new indices from loading? How can I tell what is causing it?

Can you show us the debug message that logs the configuration itself, along with the error message that says it cannot be parsed?
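Another angle (a sketch on my part): the error message quotes the exact text it choked on, an input containing beats { port => 5044 }, so grepping the config directory for that text will identify which file defines it:

# find every config file that defines a beats input
grep -rn 'beats' /etc/logstash/conf.d/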

Is that not the message I posted before?
Here are all of the debug messages for the config files from the output.

[2020-12-23T18:05:33,811][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/02-beats-input.conf"}
[2020-12-23T18:05:33,815][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/10-syslog-filter.conf"}
[2020-12-23T18:05:33,823][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/30-elasticsearch-output.conf"}
[2020-12-23T18:05:33,830][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/bak_redjack.conf"}
[2020-12-23T18:05:33,835][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/json-drop.conf"}
[2020-12-23T18:05:33,837][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/json-read.conf"}
[2020-12-23T18:05:33,838][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/redjack.conf"}
[2020-12-23T18:05:33,840][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/rjdns.conf"}
[2020-12-23T18:05:33,866][DEBUG][logstash.config.pipelineconfig] -------- Logstash Config ---------
[2020-12-23T18:05:33,878][DEBUG][logstash.config.pipelineconfig] Config from source {:source=>LogStash::Config::Source::MultiLocal, :pipeline_id=>:main}
[2020-12-23T18:05:33,882][DEBUG][logstash.config.pipelineconfig] Config string {:protocol=>"file", :id=>"/etc/logstash/conf.d/02-beats-input.conf"}

Here is the actual section of stdout that shows the config file in question:

[2020-12-23T18:05:33,926][DEBUG][logstash.config.pipelineconfig] Config string {:protocol=>"file", :id=>"/etc/logstash/conf.d/rjdns.conf"}
[2020-12-23T18:05:33,927][DEBUG][logstash.config.pipelineconfig]

input {
  file {
    path => "/etc/logstash/redjack-data/sensor-dnstap/*.json"
    mode => "read"
    codec => "json"
    exit_after_read => true
  }
}
output {
  elasticsearch {
    hosts => ["10.10.10.10:9200"]
    index => "rjdns-%{+yyyy.MM.dd}"
    manage_template => true
    template => "/etc/logstash/rjdns-schema.json"
    template_name => "rjdns_template"
  }
}

[2020-12-23T18:05:33,941][DEBUG][logstash.config.pipelineconfig] Merged config
[2020-12-23T18:05:33,944][DEBUG][logstash.config.pipelineconfig]

And here is the section with the error, where it dies.

[2020-12-23T18:05:34,084][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2020-12-23T18:05:34,101][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2020-12-23T18:05:34,344][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"if\", [A-Za-z0-9_-], '\"', \"'\", \"}\" at line 6, column 1 (byte 41) after input {\n  beats {\n    port => 5044\n  }\n\n", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
[2020-12-23T18:05:34,404][DEBUG][logstash.agent           ] Starting puma
[2020-12-23T18:05:34,409][DEBUG][logstash.instrument.periodicpoller.os] Stopping
[2020-12-23T18:05:34,417][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2020-12-23T18:05:34,426][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2020-12-23T18:05:34,428][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2020-12-23T18:05:34,436][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>0}
[2020-12-23T18:05:34,440][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2020-12-23T18:05:34,447][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2020-12-23T18:05:34,475][DEBUG][logstash.api.service     ] [api-service] start
[2020-12-23T18:05:34,547][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-12-23T18:05:39,611][INFO ][logstash.runner          ] Logstash shut down.

Let me know if I am missing anything else.

I am completely unable to think of any explanation of how

[2020-12-23T18:05:33,927][DEBUG][logstash.config.pipelineconfig]

input {
  file {
    path => "/etc/logstash/redjack-data/sensor-dnstap/*.json"
    mode => "read"
    codec => "json"
    exit_after_read => true
  }...

could result in the error message

:message=>"Expected one of [ \t\r\n], "#", "if", [A-Za-z0-9_-], '"', "'", "}" at line 6, column 1 (byte 41) after input {\n beats {\n port => 5044\n }\n\n",

Absolutely inexplicable. The configuration clearly shows a file input, but the error message shows a beats input.

Sounds like something I am doing wrong.
Could something else be causing that error?

Note: I just moved the config file out of conf.d and ran the debug again. The other config files still load, and the error still appears.

Is there any way to pinpoint what is causing that error? The backtrace is all system files...

What do you mean "the other config files". Do you have a config file that has a beats input? If so, what does it look like?

Here are the other config files in conf.d:

root@ec-elk:/etc/logstash/conf.d# ls -las
total 40
4 drwxrwxr-x 2 root root 4096 Dec 23 19:18 .
4 drwxrwxr-x 4 root root 4096 Dec 23 19:18 ..
4 -rw-r--r-- 1 root root   40 Jul  5 18:32 02-beats-input.conf
4 -rw-r--r-- 1 root root 3148 Jul  5 18:34 10-syslog-filter.conf
4 -rw-r--r-- 1 root root  169 Jul  5 18:34 30-elasticsearch-output.conf
4 -rw-r--r-- 1 root root  488 Nov 19 18:41 bak_redjack.conf
4 -rw-r--r-- 1 root root 1052 Dec 23 15:25 error_output
4 -rw-r--r-- 1 root root  424 Jul 15 13:50 json-drop.conf
4 -rw-r--r-- 1 root root  285 Jul 15 13:45 json-read.conf
4 -rw-r--r-- 1 root root  600 Nov 19 19:14 redjack.conf

I think the only ones we created were bak_redjack.conf and redjack.conf.

root@ec-elk:/etc/logstash/conf.d# cat bak_redjack.conf
input {
  file {
    path => "/etc/logstash/redjack-data/att-redjacksensor1/*.json.gz"
    mode => "read"
    codec => "json"
  }
}
filter{
  mutate {
    remove_field => "[metadata][src_host_names]"
    remove_field => "[metadata][dst_host_names]"
  }
}
output {
  elasticsearch {
        hosts => ["localhost:9200"]
        index => "rjflow-%{+yyyy.MM.dd}"
        manage_template => true
        template => "/etc/logstash/rjflow-schema.json"
        template_name => "rjflow_template"
  }
}
root@ec-elk:/etc/logstash/conf.d# cat redjack.conf
input {
  file {
    path => "/etc/logstash/redjack-data/att-redjacksensor1/*.json.gz"
    mode => "read"
    codec => "json"
  }
}
  file {
    path => "/etc/logstash/redjack-data/sensor/*.json.gz"
    mode => "read"
    codec => "json"
  }
}
filter{
  mutate {
    remove_field => "[metadata][src_host_names]"
    remove_field => "[metadata][dst_host_names]"
  }
}
output {
  elasticsearch {
        hosts => ["localhost:9200"]
        index => "rjflow-%{+yyyy.MM.dd}"
        manage_template => true
        template => "/etc/logstash/rjflow-schema.json"
        template_name => "rjflow_template"
  }
}

Could they be conflicting with each other or with rjdns.conf?

Can you show me 02-beats-input.conf and 10-syslog-filter.conf?

If you point path.config (which you may be setting in logstash.yml) at a directory, then it will sort them alphabetically, concatenate them, and run them all in a single pipeline. Events will be read from every input defined in any of the files, passed through every filter, and written to every output.

It is very common for less experienced hands to misunderstand that, and expect each file to be a self-contained unit. That is only true if you use pipelines.yml to isolate them.
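For illustration, a pipelines.yml that isolates each file into its own pipeline might look something like this (the pipeline ids here are arbitrary names I am making up):

# /etc/logstash/pipelines.yml: one entry per isolated pipeline
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/02-beats-input.conf"
- pipeline.id: rjdns
  path.config: "/etc/logstash/conf.d/rjdns.conf"

Each entry then runs as its own pipeline, so events from the beats input no longer flow through the rjdns filters and outputs.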

Here are the files:

02-beats-input.conf

root@ec-elk:/etc/logstash/conf.d# cat 02-beats-input.conf
input {
  beats {
    port => 5044
  }

10-syslog-filter.conf

filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE"=> "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}

OK, so you are missing a } at the end of 02-beats-input.conf, so logstash thinks you are trying to load an input called filter, and you cannot have a conditional inside an input.

As I said, I think you have a common misunderstanding about how configuration files are combined. Once you add the } and have the pipeline loaded my guess is that it will not do what you want.
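Concretely, the fixed 02-beats-input.conf only needs the closing brace:

input {
  beats {
    port => 5044
  }
}

After that, re-testing against the service's settings (so the whole conf.d directory is compiled, not just one file) should confirm the pipeline parses:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit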

I made the adjustment to 02-beats-input.conf.
I am still not able to see the index, and the error still appears.
I obviously need to read more of the documentation; I feel like I am missing something.
Is there a document / link you recommend on how to load a config and schema and get the index recognized?

Also, thanks for trying to help. If you think of anything else, feel free to reply.

@Badger
I was able to get the index to load, though I am far from figuring out why this happened in the first place.
First I stripped all other .conf and .json files out of /etc/logstash and /etc/logstash/conf.d.
Then I rebooted and noticed that /var/log/syslog was complaining about not being able to connect to localhost:9200. Tested with curl and sure enough could not connect. netstat -tulpn showed 9200 was only listening on the server's network interface, 10.10.10.10, not localhost/127.0.0.1.
I did a grep -nri localhost in /etc/logstash/conf.d/ and adjusted all files to use the external interface instead of localhost.
Restarted logstash while running tail -f /var/log/syslog, and I can see all the data flooding in.
Looking at Stack Management > Index Management, I can finally see the rjdns index.
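For reference, the check-and-fix went roughly like this (a sketch of what I ran; the sed assumes you have backups of conf.d):

# find every file still pointing at localhost
grep -nri localhost /etc/logstash/conf.d/
# rewrite the outputs to the interface Elasticsearch actually listens on
sed -i 's/localhost:9200/10.10.10.10:9200/' /etc/logstash/conf.d/*.conf
# restart and watch the data flow in
systemctl restart logstash
tail -f /var/log/syslog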

Is there a way to allow Elasticsearch to listen on multiple interfaces? I would like it to listen on both localhost and the external interface, so all configs can use either one. I read this post (https://stackoverflow.com/questions/20222093/elasticsearch-listen-to-multiple-ips), but I do not see a network.bind_host value in my config.

This is our current config:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.10.10.10
#network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 9200
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["10.10.10.10"]
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["10.10.10.10"]
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------

As far as I know this was enabled a long time ago, but I am unable to test it.

Try to use this:

network.host: 0.0.0.0

Then check which IP is holding the Elasticsearch port using

netstat -an | grep 9200

With 0.0.0.0, Elasticsearch binds every interface, so you should see it listening on 0.0.0.0:9200, which covers both 127.0.0.1 and 10.10.10.10.
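If you would rather not bind every interface, network.host also accepts a list of values, so a sketch of binding just loopback plus the external interface would be:

# in elasticsearch.yml; _local_ is the special value for loopback
network.host: ["_local_", "10.10.10.10"]

Either way, a final sanity check is to hit both addresses directly:

# both should return the cluster banner once the bind covers them
curl http://127.0.0.1:9200
curl http://10.10.10.10:9200

Note that binding to a non-loopback address puts Elasticsearch into production mode and enforces the bootstrap checks; your discovery.seed_hosts and cluster.initial_master_nodes settings should already satisfy those.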