Unable to use User Authentication in Elasticsearch and Elastic-App-Search

I have installed Elasticsearch and Elastic App Search. Currently the deployment is open to the public (i.e., anyone with the URL can see the engines). I want user authentication on the URL so that it cannot be accessed by just anyone. I am following the document Manage users and access to App Search | Elastic App Search Documentation [8.4] | Elastic. The steps mentioned in that link are done, but when I restart Elastic App Search it fails to start and gives me the error "unable to authenticate user with elastic", even though I am using the correct elastic user and credentials in enterprise-search.yml. Can anyone help me with this issue? I am using Elastic App Search with a basic (standard) license.
Thanks in advance for the help!

Hi!

Could you please attach the content of your config files from 1) Elasticsearch and 2) Elastic App Search? Make sure to hide all sensitive data.
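
In particular, the lines of interest in enterprise-search.yml are the Elasticsearch connection settings, which should look roughly like this (a sketch with placeholder values — substitute your real cluster URL and credentials):

# Elasticsearch connection block in enterprise-search.yml (placeholder values)
elasticsearch.host: http://localhost:9200
elasticsearch.username: elastic
elasticsearch.password: <redacted>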

Hi,
Here is the elasticsearch.yml file (path: /etc/elasticsearch/elasticsearch.yml):

***********************************************
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1", "::1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true


xpack.security.enabled: true
#xpack.security.authc.api_key.enabled: true

xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true

******************************************

And here is the enterprise-search.yml file (path: /usr/share/enterprise-search/config/enterprise-search.yml):

## ================= Elastic Enterprise Search Configuration ==================

# ---------------------------------- Secrets ----------------------------------
#
# Encryption keys to protect your application secrets. This field is required.
#
secret_management.encryption_keys: [c594eb6169dc48fc35e06f550db0cc1a796bbfb4fa258be2baf0cbe2a46e8098]
#
# ------------------------------- Elasticsearch -------------------------------

#
allow_es_settings_modification: true
#
# Elasticsearch full cluster URL:
#
elasticsearch.host: http://0.0.0.0:9200
#
# Elasticsearch credentials:
#
elasticsearch.username: elastic
elasticsearch.password: elastic-password
#
# Elasticsearch custom HTTP headers to add to each request:
#
#elasticsearch.headers:
#  X-My-Header: Contents of the header
#
# Elasticsearch SSL settings:
#
#elasticsearch.ssl.enabled: false
#elasticsearch.ssl.certificate:
#elasticsearch.ssl.certificate_authority:
#elasticsearch.ssl.key:
#elasticsearch.ssl.key_passphrase:
#elasticsearch.ssl.verify: true
#
# Elasticsearch startup retry:
#
#elasticsearch.startup_retry.enabled: true
#elasticsearch.startup_retry.interval: 5 # seconds
#elasticsearch.startup_retry.fail_after: 600 # seconds
#
# ---------------------------------- Kibana -----------------------------------
#
# The primary URL at which users interact with Kibana. This is used when
# Enterprise Search links users to Kibana.
#
#kibana.external_url: http://localhost:5601
#
# ------------------------------- Hosting & Network ---------------------------
#
# Define the exposed URL at which users will reach Enterprise Search.
# Defaults to localhost:3002 for testing purposes.
# Most cases will use one of:
#
# * An IP: http://255.255.255.255
# * A FQDN: http://example.com
# * Shortname defined via /etc/hosts: http://ent-search.search
#
ent_search.external_url: http://ServerPublicIp:3002
#
# Web application listen_host and listen_port.
# Your application will run on this host and port.
#
# * ent_search.listen_host: Must be a valid IPv4 or IPv6 address.
# * ent_search.listen_port: Must be a valid port number (1-65535).
#
ent_search.listen_host: 0.0.0.0
ent_search.listen_port: 3002
#
# ------------------------------ Authentication -------------------------------

#ent_search.auth.<auth_name>
#
# The origin of authenticated Enterprise Search users.
# Options are standard, elasticsearch-native, and elasticsearch-saml.
#
# Docs: https://www.elastic.co/guide/en/workplace-search/current/workplace-search-security.html
#
# * standard: Users are created within the Enterprise Search dashboard.
# * elasticsearch-native: Users are managed via the Elasticsearch native realm.
# * elasticsearch-saml: Users are managed via the Elasticsearch SAML realm.
#
#ent_search.auth.<auth_name>.source:
ent_search.auth.default.source: standard
#
#
#ent_search.auth.<auth_name>.order:
#
# The name to be displayed on the login screen associated with this provider.
#
#ent_search.auth.<auth_name>.description:
#
# The URL to an icon to be displayed on the login screen associated with this
# provider.
#
#ent_search.auth.<auth_name>.icon:
#
#
#ent_search.auth.<auth_name>.hidden: false
#
#
#ent_search.login_assistance_message:
#
# ---------------------------------- Limits -----------------------------------
#
# Configurable limits for Enterprise Search.
#
#workplace_search.content_source.document_size.limit: 100kb
#
# Configure how many fields a content source can have.
# NOTE: The Elasticsearch/Lucene setting `indices.query.bool.max_clause_count`
# might also need to be adjusted if "Max clause count exceeded" errors start
# occurring. See more here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-settings.html
#
#workplace_search.content_source.total_fields.limit: 64
#
#workplace_search.content_source.sync.max_errors: 1000
#
#
#workplace_search.content_source.sync.max_consecutive_errors: 10
#
#
#workplace_search.content_source.sync.max_error_ratio: 0.15
#
# Configure how large of a window to consider when calculating an error ratio
# (see `workplace_search.content_source.sync.max_error_ratio`).
#
#workplace_search.content_source.sync.error_ratio_window_size: 100
#
# Configure whether or not a content source should generate thumbnails for the documents
# it syncs. Not all file types/sizes/content or Content Sources support thumbnail generation,
# even if this is enabled.
#
#workplace_search.content_source.sync.thumbnails.enabled: true
#
#### App Search
#
# Configure the maximum allowed document size.
#
#app_search.engine.document_size.limit: 100kb
#
#app_search.engine.total_fields.limit: 64
#
# Configure how many source engines a meta engine can have.
#
#app_search.engine.source_engines_per_meta_engine.limit: 15
#
# Configure how many facet values can be returned by a search.
#
#app_search.engine.total_facet_values_returned.limit: 250
#
#app_search.engine.query.limit: 128
#
# Configure total number of synonym sets an engine can have.
#
#app_search.engine.synonyms.sets.limit: 256
#
# Configure total number of terms a synonym set can have.
#
#app_search.engine.synonyms.terms_per_set.limit: 32
#
# Configure how many analytics tags can be associated with a single query or clickthrough.
#
#app_search.engine.analytics.total_tags.limit: 16
#
# ---------------------------------- Workers ----------------------------------
#
# Configure the number of worker threads.
#
#worker.threads: 1
#
# ----------------------------------- APIs ------------------------------------
#
# Set to true to hide product version information from API responses.
#
#hide_version_info: false
#
# ---------------------------------- Mailer -----------------------------------
#
# Connect Enterprise Search to a mailer.
# Docs: https://www.elastic.co/guide/en/workplace-search/current/workplace-search-smtp-mailer.html
#
#
# ---------------------------------- Logging ----------------------------------
#
# Choose your log export path.
#
log_directory: /var/log/enterprise-search
#
# Log level can be: debug, info, warn, error, fatal, or unknown.
#
#log_level: info
#
# Log format can be: default, json
#
#log_format: default
#
# Choose your Filebeat logs export path.
#
filebeat_log_directory: /var/log/enterprise-search
#
# Use Index Lifecycle Management (ILM) to manage analytics and API logs
# retention.
#
# Docs: https://www.elastic.co/guide/en/app-search/current/logs.html
#
#ilm.enabled: auto
#
# Enable logging app logs to stdout (enabled by default).
#
#enable_stdout_app_logging: true
#
# The number of files to keep on disk when rotating logs. When set to 0, no
# rotation will take place.
#
#log_rotation.keep_files: 7
#
#log_rotation.rotate_every_bytes: 1048576 # 1 MiB
#
# ---------------------------------- TLS/SSL ----------------------------------
#
# Configure TLS/SSL encryption.
#
#ent_search.ssl.enabled: false
#ent_search.ssl.keystore.path:
#ent_search.ssl.keystore.password:
#ent_search.ssl.keystore.key_password:
#ent_search.ssl.redirect_http_from_port:
#
# ---------------------------------- Session ----------------------------------
#
# Set a session key to persist user sessions through process restarts.
#
#secret_session_key:
#
# --------------------------------- Telemetry ---------------------------------
#
# Reporting your basic feature usage statistics helps us improve your user
# experience. Your data is never shared with anyone.
#
# Set to false to disable telemetry capabilities entirely. You can alternatively
# opt out through the Settings page.
#
#telemetry.enabled: true
#
#telemetry.opt_in: true
#telemetry.allow_changing_opt_in_status: true
#
# ----------------------------- Diagnostics report ----------------------------
#
# Path where diagnostic reports will be generated.
#
#diagnostic_report_directory: diagnostics
#
# ------------------------------ Crawler Preview ------------------------------
#
# The User-Agent HTTP Header used for the Crawler.
#
#crawler.http.user_agent: Elastic Crawler (<crawler_version_number>)
#
#crawler.http.user_agent_platform:
#
# The number of parallel crawls allowed per instance of Enterprise Search.
# By default, it is set to 2x the number of available CPU cores.
#
#crawler.workers.pool_size.limit: N
#
# -------------------------
# Per-crawl Resource Limits
# -------------------------
#
#crawler.crawl.max_duration.limit: 86400 # seconds
#
#
#crawler.crawl.max_crawl_depth.limit: 10
#
#
#crawler.crawl.max_url_length.limit: 2048
#
#
#crawler.crawl.max_url_segments.limit: 16
#
#
#crawler.crawl.max_url_params.limit: 32
#
# The maximum number of unique URLs the crawler will index during a single crawl.
# Beyond this limit, the crawler will stop.
#
#crawler.crawl.max_unique_url_count.limit: 100000
#
# -------------------------
# Advanced Per-crawl Limits
# -------------------------
#
# The number of parallel threads to use for each crawl.
# The main effect from increasing this value will be an increased throughput
# of the crawler at the expense of higher CPU load on Enterprise Search and
# Elasticsearch instances as well as higher load on the website being crawled.
#
#crawler.crawl.threads.limit: 10
#
# The maximum size of the crawl frontier - the list of URLs the crawler needs to visit.
# The list is stored in Elasticsearch, so the limit could be increased as long
# as the Elasticsearch cluster has enough resources (disk space) to hold the queue index.
#
#crawler.crawl.url_queue.url_count.limit: 100000
#
# ---------------------------
# Per-Request Timeout Limits
# ---------------------------
#
# The maximum period to wait until abortion of the request, when a connection is being initiated.
#
#crawler.http.connection_timeout: 10 # seconds
#
# The maximum period of inactivity between two data packets, before the request is aborted.
#
#crawler.http.read_timeout: 10 # seconds
#
# The maximum period of the entire request, before the request is aborted.
#
#crawler.http.request_timeout: 60 # seconds
#
# ---------------------------
# Per-Request Resource Limits
# ---------------------------
#
# The maximum size of an HTTP response (in bytes) supported by the crawler.
#
#crawler.http.response_size.limit: 10485760
#
# The maximum number of HTTP redirects before a request is failed.
#
#crawler.http.redirects.limit: 10
#
# ----------------------------------
# Content Extraction Resource Limits
# ----------------------------------
#
# The maximum size (in bytes) of some fields extracted from crawled pages
#
#crawler.extraction.title_size.limit: 1024
#crawler.extraction.body_size.limit: 5242880
#crawler.extraction.keywords_size.limit: 512
#crawler.extraction.description_size.limit: 1024
#
# The maximum number of links extracted from each page for further crawling
#
#crawler.extraction.extracted_links_count.limit: 1000
#
# The maximum number of links extracted from each page and indexed in a document
#
#crawler.extraction.indexed_links_count.limit: 25
#
# The maximum number of HTML headers to be extracted from each page
#
#crawler.extraction.headings_count.limit: 25
#
# Document fields used to compare documents during de-duplication
#
#crawler.extraction.content_hash_include: ['document_title', 'document_body', 'meta_keywords', 'meta_description', 'links', 'headings']
#
# -----------------------------
# Crawler DNS Security Controls
# -----------------------------
#
# WARNING: The settings in this section could make your deployment vulnerable to
# SSRF attacks (especially in cloud environments) from the owners of any domains
# you crawl. Do not enable any of the settings here unless you fully control DNS
# domains you access with the crawler.
#
# See https://owasp.org/www-community/attacks/Server_Side_Request_Forgery for
# more details on the SSRF attack and the risks associated with it.
#
# Allow crawler to access the localhost (127.0.0.0/8 IP namespace)
#
#crawler.security.dns.allow_loopback_access: false
#
# Allow crawler to access the private IP space: link-local, network-local addresses, etc
# (see https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4 for more details)
#
#crawler.security.dns.allow_private_networks_access: false
#
# ------------------------------ Read-only mode -------------------------------
#
# If true, pending migrations can be executed without enabling read-only mode.
# Proceeding with migrations while indices are allowing writes can have
# unintended consequences. Use at your own risk, should not be set to true when
# upgrading a production instance with ongoing traffic.
#
#skip_read_only_check: false
#

When I am trying to enable the things mentioned in Security & Users | Elastic App Search Documentation [7.13] | Elastic, it gives me an error when starting up Elasticsearch. Below is the attached screenshot.
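
For reference, the security flags involved are roughly these (a sketch — the keystore paths are placeholder assumptions, since the ssl.enabled flags normally also require certificates to be configured):

xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12            # assumption: must point to an actual keystore
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: certs/transport.p12  # assumption: must point to an actual keystore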

I noticed that you have xpack.security.authc.api_key.enabled: true commented out in your elasticsearch.yml config. Based on the installation documentation, this is a required flag for running Enterprise Search (see step 2 here: Installation | Elastic Enterprise Search Documentation [7.13] | Elastic).

I didn't find anything wrong with your enterprise-search.yml config.
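
For clarity, here is a minimal sketch of the security block in elasticsearch.yml with that flag uncommented (values taken from your posted config):

xpack.security.enabled: true
xpack.security.authc.api_key.enabled: true    # required for Enterprise Search per the installation docs
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true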

If I uncomment "xpack.security.authc.api_key.enabled: true", Elasticsearch does not restart and gives the error shared above in the link. Here is the link
