Installed x-pack - discover shows no results found

I have (finally) successfully installed x-pack...so it seems. The current issue is I get no results under kibana's discover tab. I have the following alerts:

Medium, 17 min ago: Elasticsearch cluster nodes have changed! Node was restarted [1]: [oc-elk]. (Elasticsearch Nodes, May 7, 2018 6:49:40 PM)

Medium, 17 min ago, Not Resolved: Elasticsearch cluster status is yellow. Allocate missing replica shards. (Elasticsearch Indices, May 7, 2018 6:49:39 PM)

Low, 12 days 23 hrs 59 min ago, Not Resolved: Configuring TLS will be required to apply a Gold or Platinum license when security is enabled. See documentation for details. (General, May 7, 2018 6:49:40 PM)
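As an aside, the yellow-status alert is expected on a single-node cluster: indices default to one replica, and a replica can never be allocated on the same node as its primary. Assuming this stays a one-node deployment, a sketch of clearing the alert by dropping replicas on all existing indices:

```shell
# Single-node cluster: replicas can never be assigned, so set them to 0
# for all existing indices to bring the cluster status back to green.
curl -XPUT 'http://10.7.1.61:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```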

Here is the elasticsearch config:

root@oc-elk:~# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#" | awk "NF"
cluster.name: ocs-elk-cluster
node.name: oc-elk
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.7.1.61
xpack.license.self_generated.type: trial

I found the following in the Logstash status:

May 07 15:09:44 oc-elk logstash[2103]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 07 15:09:48 oc-elk logstash[2103]: Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)
May 07 15:10:18 oc-elk logstash[2103]: No Available connections
May 07 15:10:48 oc-elk logstash[2103]: No Available connections
May 07 15:11:18 oc-elk logstash[2103]: No Available connections
May 07 15:11:48 oc-elk logstash[2103]: No Available connections
May 07 15:12:18 oc-elk logstash[2103]: No Available connections

In your Logstash configuration, you need to change localhost to 10.7.1.61, as you have configured Elasticsearch to listen only on that IP address.

Here is the edited logstash.yml...

root@oc-elk:/etc/logstash# less logstash.yml | grep -v "#" | awk "NF"
network.host: 10.7.1.61
path.data: /var/lib/logstash
path.logs: /var/log/logstash
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: xxxxxxxxxxxxxxxx

...and a log entry after restarting all services...

[2018-05-08T08:44:19,254][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "network.host" hasn't been registered>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:37:in `get_setting'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:70:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:89:in `block in merge'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:89:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:138:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:264:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:219:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:67:in `'"]}
[2018-05-08T08:44:19,260][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit

network.host is an elasticsearch.yml configuration option, not a logstash.yml one, so you can't just copy-paste it there. When I mentioned network.host in my answer, I was just pointing out that you have configured Elasticsearch to listen on 10.7.1.61, so you shouldn't expect it to listen on localhost.

You need to change your Logstash configuration so that it connects to Elasticsearch using the correct IP address. This is all very well detailed and explained in the documentation, and going through it will definitely speed up your deployment. The setting you need to change/set in logstash.yml is

xpack.monitoring.elasticsearch.url: http://10.7.1.61:9200

Thank you - I've added that to the logstash.yml.

I am reviewing the documentation - thanks for the help to this point. I'm still stuck with a logstash issue after adding the following line to logstash.yml:

xpack.monitoring.elasticsearch.url: http://10.7.1.61:9200

May 08 11:53:27 oc-elk logstash[1063]: Elasticsearch Unreachable: [http://logstash_system:xxxxxx@10.7.1.61:9200/][Manticore::SocketException] Connection refused (Connection refused)

[2018-05-08T12:23:40,561][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://10.7.1.61:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://10.7.1.61:9200/'"}

It looks like Logstash is attempting to connect to Elasticsearch using the wrong credentials.
Are you certain the password you use for

xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: xxxxxxxxxxxxxxxx

is correct? Can you verify this by hitting the _authenticate API directly?

curl -u logstash_system 'http://10.7.1.61:9200/_xpack/security/_authenticate?pretty'

root@oc-elk:/etc/logstash/conf.d# curl -u logstash_system 'http://10.7.1.61:9200/_xpack/security/_authenticate?pretty'
Enter host password for user 'logstash_system':
{
  "username" : "logstash_system",
  "roles" : [
    "logstash_system"
  ],
  "full_name" : null,
  "email" : null,
  "metadata" : {
    "_reserved" : true
  },
  "enabled" : true
}

I have that password defined in logstash.yml as this:
xpack.monitoring.elasticsearch.password

Let's look at this from a different angle. What do you expect to see in the Discover tab? What kind of data are you ingesting into Elasticsearch via Logstash?

The settings we discussed above are only for monitoring Logstash.

Have you set up the Elasticsearch output plugin in order to send some kind of logs to Elasticsearch? What is your configuration file?

I am using Logstash with filebeat, winlogbeat, and metricbeat to send logs and performance stats from Windows and Linux hosts to ES. Before x-pack, I would normally see the timeline of collected logs/stats using the wildcard filter. I have had no logs or stats since installing x-pack.

configs:

root@oc-elk:/etc/logstash/conf.d# cat /etc/elasticsearch/elasticsearch.yml | grep -v "#" | awk "NF"
cluster.name: ocs-elk-cluster
node.name: oc-elk
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.7.1.61
http.port: 9200-9300
xpack.license.self_generated.type: trial
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

root@oc-elk:/etc/logstash/conf.d# cat /etc/logstash/logstash.yml | grep -v "#" | awk "NF"
path.data: /var/lib/logstash
path.logs: /var/log/logstash
xpack.monitoring.elasticsearch.url: http://10.7.1.61:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: xxxxxxxxxxxxxxxxxxxxx

root@oc-elk:/etc/logstash/conf.d# cat /etc/kibana/kibana.yml | grep -v "#" | awk "NF"
server.host: "10.7.1.61"
server.name: "oc-elk"
elasticsearch.url: "http://10.7.1.61:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "xxxxxxxxxxxxxxxxx"

beats.conf

input {
  beats {
    port => "5043"
  }
}

filter {
  if [fields][logtype] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  if [fields][logtype] == "iis_log" {
    if [message] =~ "^#" {
      drop {}
    }

    grok {
      match => { "message" => [ "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:iisSite} %{IPORHOST:site} %{WORD:method} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clienthost} %{NOTSPACE:useragent} %{NOTSPACE:referer}" ] }
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.7.1.61:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}


You also need to update this so that Logstash can authenticate itself to Elasticsearch. The suggested way of doing so is to create a new user with the appropriate role and use that user for the Elasticsearch output plugin. This is all documented in this section, which you can use as-is for your config in order to

  • Create the role
  • Create the user and assign it the role
  • Add the necessary

    user => "logstash_internal"
    password => "xxxxxxxxxxxx"

    section to your Elasticsearch output configuration.
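The role and user from the steps above can also be created from the command line with the x-pack security API; a sketch, with a placeholder password, and index names adapted to the beats indices used in this thread (the documented example uses logstash-*):

```shell
# Create a role that can manage index templates and write the beats indices
curl -u elastic -XPOST 'http://10.7.1.61:9200/_xpack/security/role/logstash_writer' \
  -H 'Content-Type: application/json' -d '{
    "cluster": ["manage_index_templates", "monitor"],
    "indices": [{
      "names": ["filebeat-*", "winlogbeat-*", "metricbeat-*"],
      "privileges": ["write", "create_index"]
    }]
  }'

# Create the logstash_internal user and assign it that role
curl -u elastic -XPOST 'http://10.7.1.61:9200/_xpack/security/user/logstash_internal' \
  -H 'Content-Type: application/json' -d '{
    "password": "xxxxxxxxxxxx",
    "roles": ["logstash_writer"]
  }'
```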

Does that new user need to be included in the input and filter portions of this beats.conf as well?

input {
  beats {
    port => "5043"
  }
}

filter {
  if [fields][logtype] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  if [fields][logtype] == "iis_log" {
    if [message] =~ "^#" {
      drop {}
    }

    grok {
      match => { "message" => [ "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:iisSite} %{IPORHOST:site} %{WORD:method} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clienthost} %{NOTSPACE:useragent} %{NOTSPACE:referer}" ] }
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.7.1.61:9200"]
    user => "logstash_internal"
    password => "xxxxxxxxxxxxx"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

No, the credentials are only required in order to communicate with Elasticsearch; they are not needed elsewhere in your pipeline.
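Once Logstash restarts cleanly, one quick way to confirm events are flowing again before checking Discover is to list the beats indices (run as a user with monitor privileges, e.g. elastic):

```shell
# New daily indices appearing here mean the pipeline is writing again.
curl -u elastic 'http://10.7.1.61:9200/_cat/indices/*beat-*?v'
```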

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.