Alerts on a single node

I'm using Machine Learning on Elastic Stack 7.13 and planning to create an alert rule for warnings.

But I saw this warning:

[screenshot: Kibana warning that additional security setup is required before alerting can be used]

After searching, I found steps to enable TLS. But Set up minimal security for Elasticsearch | Elasticsearch Guide [7.13] | Elastic says:

If you’re running a single-node cluster, then you can stop here.

If your cluster has multiple nodes, then you must configure Transport Layer Security (TLS) between nodes. Production mode clusters will not start if you do not enable TLS.

And my ELK is a single node (I have discovery.type: single-node in my config file). So do I need to configure TLS or not?
Also, when I add xpack.security.enabled: true, why can't my ELK receive logs anymore?

This is my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.roles: [master, data, ml, remote_cluster_client]
xpack.ml.enabled: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 192.168.186.157
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled: true

Hi @quyennguyen

Yes, you are going to need TLS between Elasticsearch and Kibana, as well as the encryption key, to use Alerts.

If you are interested, I put together a How To for a Fully Secure Single Node ES + Kibana here.

It also shows what you will need to set in Logstash for secure communications.
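
If it helps, here is a minimal sketch for the encryption key part (any string of 32 or more characters works; openssl is just a convenient way to generate one):

# Generate a random 32-character hex string for kibana.yml
openssl rand -hex 16

# Then set it in kibana.yml:
# xpack.encryptedSavedObjects.encryptionKey: "<the generated string>"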


I saw in the link you sent that it has ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 10.168.0.71,127.0.0.1 --dns hostname,localhost. In my situation, I need to change it to ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.186.157, but I'm not sure how to set --dns, because I didn't set up ELK with a hostname or domain.

The --dns flag just adds that hostname and localhost to the cert, which can be important.

If you do not refer to the server by hostname (only by IPs), I would just leave it as the following, which means localhost is a valid hostname in the cert:

--dns localhost

By definition, localhost is probably available unless you have no loopback at all.

If so, you can take it out too, if you only use IPs.
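
So for your setup, something like this should do it (a sketch; it assumes the CA file elastic-stack-ca.p12 from my walkthrough and is run from the Elasticsearch home directory):

# Generate the node certificate, valid for your IP, the loopback IP, and localhost
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.186.157,127.0.0.1 --dns localhost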

I got some errors when I ran the install commands:

root@ubuntu:/usr/share/kibana# bin/kibana-keystore --allow-root create

  Usage: bin/kibana-keystore [command] [options]
  
  A tool for managing settings stored in the Kibana keystore
  
  Commands:
    create  [options]       Creates a new Kibana keystore
    list  [options]         List entries in the keystore
    add  [options] <key>    Add a string setting to the keystore
    remove  [options] <key> Remove a setting from the keystore
    help  <command>         get the help for a specific command

root@ubuntu:/usr/share/kibana# /usr/share/kibana/bin/kibana-keystore --allow-root add elasticsearch.password

  Usage: bin/kibana-keystore [command] [options]
  
  A tool for managing settings stored in the Kibana keystore
  
  Commands:
    create  [options]       Creates a new Kibana keystore
    list  [options]         List entries in the keystore
    add  [options] <key>    Add a string setting to the keystore
    remove  [options] <key> Remove a setting from the keystore
    help  <command>         get the help for a specific command

Do I need to fix that command by deleting --allow-root?
That is:

/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password

Right?

Ahh, looks like we recently took that out... so yes, remove --allow-root. I will update my walkthrough.

How are you starting Kibana? You may need to chmod the kibana.keystore.

Note here:

All commands here should be run as the user which will run Kibana.

So you will probably need to run:

sudo chown kibana:kibana /var/lib/kibana/kibana.keystore

Got it.
And in your Logstash output you have:

cacert => "/etc/pki/root/selfca.pem"

which I didn't have.

Do I need to create it and copy selfca.pem from /etc/elasticsearch/certs/selfca.pem?

And my kibana.keystore is in /etc/kibana, so I'll change the command to sudo chown kibana:kibana /etc/kibana/kibana.keystore.

Yes, you do... they are in my steps... 🙂

Well, how are you starting Kibana? But yes...

Hm, I didn't see a step to create the /etc/pki/root directory and copy the file into it. Maybe you need to take a look at that?

Btw, I got some errors when restarting Logstash, and I don't understand why:

[2021-08-17T04:43:20,236][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::ClientProtocolException] 192.168.186.157:9200 failed to respond"}
[2021-08-17T04:43:21,016][DEBUG][logstash.outputs.elasticsearch][main][37ea45727fd4b5941ba95bf7c2107c136d635dfb690bd774e831c20de9eb4e0c] Sending final bulk request for batch. {:action_count=>5, :payload_size=>6471, :content_length=>6471, :batch_offset=>0}
[2021-08-17T04:43:21,243][ERROR][logstash.outputs.elasticsearch][main][37ea45727fd4b5941ba95bf7c2107c136d635dfb690bd774e831c20de9eb4e0c] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
[2021-08-17T04:43:23,465][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>"http://192.168.186.157:9200/", :path=>"/"}
[2021-08-17T04:43:23,473][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::ClientProtocolException] 192.168.186.157:9200 failed to respond"}
[2021-08-17T04:43:28,482][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>"http://192.168.186.157:9200/", :path=>"/"}

How can I fix that?

I'll send you my config files.

kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://127.0.0.1:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
#elasticsearch.password: "*******" #

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.pem
server.ssl.key: /etc/kibana/certs/kibana.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/selfca.pem" ]

xpack.encryptedSavedObjects.encryptionKey: "169a4ad9e20919bbc3dabd354a131f5c"

elasticsearch.yml

node.roles: [master, data, ml, remote_cluster_client]
xpack.ml.enabled: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: [192.168.186.157, localhost]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# Enable security
xpack.security.enabled: true

# Enable auditing if you want, uncomment
# xpack.security.audit.enabled: true

# SSL Settings
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

logstash.yml

path.data: /var/lib/logstash
pipeline.ordered: auto
http.host: "localhost"
path.logs: /var/log/logstash

conf.d in Logstash

input {
  beats {
    host => "0.0.0.0"
    port => 5044
    ssl => false
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "elastic"
    password => "********"
    cacert => "/etc/pki/root/selfca.pem"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

Kibana Certs

sudo -i
cd /etc/kibana
mkdir certs
chmod 755 certs
cd certs
# Create a self-signed certificate and key for Kibana's browser-facing HTTPS
openssl req -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out kibana.crt -keyout kibana.key
# Convert the certificate to PEM format
openssl x509 -in kibana.crt -out kibana.pem
cp /etc/elasticsearch/certs/selfca.pem .   # <--- this is the elasticsearch selfca.pem, put into /etc/kibana/certs
chmod 644 *

In the kibana.yml:

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/selfca.pem" ]

^^^ That is the Elasticsearch selfca.pem.
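
If you want to sanity-check the copied CA file, you can inspect it (a quick check, not part of the original steps):

# Print the subject and validity dates of the CA certificate
openssl x509 -in /etc/kibana/certs/selfca.pem -noout -subject -dates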

With respect to Logstash, did you save your conf?

The log says this (note http, not https):

dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/"

but your logstash.conf says

hosts => ["https://localhost:9200"]

Also, in my opinion, you should be consistent in how you refer to the host: either by IP, by localhost, or by hostname... you seem to mix and match a bit. Just an observation.
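
A quick way to see which scheme Elasticsearch is actually answering on is to hit it directly with curl (a rough check; -k skips CA verification, which is fine for testing a self-signed cert):

# Should succeed (after a password prompt) once HTTPS is enabled...
curl -k -u elastic 'https://localhost:9200'
# ...while the plain-http URL should now fail
curl 'http://localhost:9200'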

I'm saying that you don't have the mkdir /etc/pki/root line, not the command that creates selfca.pem.

And yes, I saved all my configs. Let me show you:

root@ubuntu:/etc/logstash# cat conf.d/ssh.conf 
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    ssl => false
  }
}

filter {

  if [fileset][name] == "auth" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2" }
      add_field => [ "activity", "SSH Logins" ]
      add_tag => "linux_auth"
    }

    if "_grokparsefailure" in [tags] { drop {} }

    date {
      match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }

  }
  else if [fileset][name] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_tag => "linux_syslog"
    }
  }

}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "elastic"
    password => "******"
    cacert => "/etc/pki/root/selfca.pem"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

About the mix, I changed all of them to localhost.

The error changed too.

[2021-08-17T05:22:55,275][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-08-17T05:23:00,286][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>"http://192.168.186.157:9200/", :path=>"/"}
[2021-08-17T05:23:00,458][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-08-17T05:23:05,461][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>"http://192.168.186.157:9200/", :path=>"/"}
[2021-08-17T05:23:05,463][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-08-17T05:23:10,466][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>"http://192.168.186.157:9200/", :path=>"/"}
[2021-08-17T05:23:10,476][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://192.168.186.157:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://192.168.186.157:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Maybe I need to restart my computer?

I did not do every step for Logstash... that is an exercise left to the student. I figured at that point you could create the directory and put the selfca.pem in it.
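
Concretely, something like this (a sketch; the path just has to match the cacert setting in your Logstash output):

# Create the directory referenced by cacert => "/etc/pki/root/selfca.pem"
sudo mkdir -p /etc/pki/root
sudo cp /etc/elasticsearch/certs/selfca.pem /etc/pki/root/
sudo chmod 644 /etc/pki/root/selfca.pem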

It seems like your conf.d/ssh.conf is not being loaded, given that the error says http, not https... are you sure Logstash was stopped before you tried to start it again?

Are you sure that conf.d/ssh.conf is being loaded...
Are there other confs with other outputs defined?
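
A couple of quick checks (assuming the logstash systemd service from the deb/rpm packages):

# Make sure no stale Logstash instance is still running with the old config
sudo systemctl status logstash

# Confirm which pipeline files are in conf.d
ls -l /etc/logstash/conf.d/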

Yup. I needed to restart my computer.

I did that and everything worked perfectly.

Thanks for helping me! I was stuck on this TLS for two days and was going insane. 🥰 🥰


Awesome!!! Go create some alerts!!

