Hello, I have a cluster with 1 master and 2 nodes. If I install the enterprise license on one node, will it be applied to the other node or to the master?
Welcome to our community!
A license is applied to all nodes in the cluster, which means yes, it will be applied to the other nodes as well.
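For reference, a quick way to confirm that the license is shared by the whole cluster is the license API; a minimal sketch (host and credentials are placeholders, following the curl style used later in this thread):

curl -k -u elastic:password -X GET "https://any-node:9200/_license?pretty"
# Whichever node answers, the same license (type, UID, expiry) is returned,
# because the license lives in the cluster state rather than on a single node.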
Can you tell me how I can implement multi-tenancy, and how I can apply a license to one node without it being added to the other nodes?
Multi-tenancy depends on your indexing strategy. It is done by giving each tenant specific permissions for the indices it can access, and in Kibana you can use Spaces to help with that.
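As a minimal sketch of that permission model (the role name, index pattern, host and credentials are placeholders, not taken from this thread):

curl -k -u elastic:password -X PUT "https://localhost:9200/_security/role/tenant_a_read" -H 'Content-Type: application/json' -d'
{
  "indices": [
    {
      "names": [ "tenant-a-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}'
# Users of tenant A get only this role, so they can only search indices matching
# tenant-a-*; each tenant gets its own index pattern and, optionally, its own Kibana Space.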
Not possible; the license is applied to the cluster, so all nodes in the same cluster will have the same license.
Is there any way I can implement multi-tenancy where each Elasticsearch has a different license?
Only with separate clusters.
Can you tell me then how I can implement multi-tenancy with separate clusters?
I'm not sure what your question is. You need to have one cluster per tenant; that is how you implement multi-tenancy with separate clusters.
Every tenant would have access to a different cluster running in a different place.
Can you provide more context about what you are trying to do? It is not clear.
Also, are you running on-premises or using Elastic Cloud?
I have 3 Elasticsearch clusters installed locally and I want to connect 2 of them to one cluster. I'm just not getting it. I have ports 9300/9200 open, but they are not connecting. These are my settings:
cluster.name:client_1
cluster.remote.central.seeds: ["192.168.10.67:9300"]
http.port: 9200
network.host: 192.168.10.73
The cluster I'm trying to connect to is also configured this way.
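For context, remote clusters can also be registered dynamically through the cluster settings API instead of elasticsearch.yml; a minimal sketch using the alias and seed address from the settings above (the local host, credentials and -k flag are placeholders following the curl style used later in this thread):

curl -k -u elastic:password -X PUT "https://192.168.10.73:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "remote": {
        "central": {
          "seeds": [ "192.168.10.67:9300" ]
        }
      }
    }
  }
}'
# The seed must be the remote cluster's transport port (9300 by default),
# not the HTTP port 9200.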
Are you talking about nodes or clusters? A node is an Elasticsearch instance that is part of a cluster; a cluster is composed of one or more Elasticsearch nodes.
You will need to share the entire elasticsearch.yml of all your nodes.
You mean connect them for search? You need to follow the documentation on how to configure cross-cluster search.
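Once the remote cluster shows as connected, a cross-cluster search simply prefixes the index name with the remote cluster alias; a minimal sketch (the index name, host and credentials are placeholders):

curl -k -u elastic:password -X GET "https://192.168.10.73:9200/central:my-index/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} }
}'
# The "central:" prefix tells the local cluster to forward the search to the
# remote cluster registered under that alias.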
I want them to connect so I can monitor them later. I've already seen this documentation and followed all the steps, but I can't connect the clusters.
Then you need to share the Elasticsearch logs that would indicate an issue; without the logs it is not possible to know what the issue is.
Monitor in which way?
To be more clear, this is what I want to do: I have two clusters with 1 node each, and I want to connect node 192.168.10.74 to node 192.168.10.67. These nodes are in different clusters.
As I said, you need to share logs that indicate some issue; without them it is not possible to know.
Also, avoid sharing screenshots of configurations; just copy and share the plain text using the Preformatted text button (the </> button). It makes it easier to read.
What is the return of the request GET /_remote/info?
And what do you see when you click the ? next to the Not Connected status?
What logs do you want me to show you?
This was the output of the request GET /_remote/info:
curl -k -u elastic:techbase -X GET "https://192.168.10.67:9200/_remote/info?pretty"
{
"Client_1" : {
"connected" : false,
"mode" : "sniff",
"seeds" : [
"192.168.10.73:9300"
],
"num_nodes_connected" : 0,
"max_connections_per_cluster" : 3,
"initial_connect_timeout" : "30s",
"skip_unavailable" : false
}
}
Clicking the ? shows: "Ensure the seed nodes are configured with the remote cluster's transport port, not the http port."
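One way to check which transport port the remote node actually publishes is the nodes info API; a minimal sketch (host and credentials are placeholders):

curl -k -u elastic:password -X GET "https://192.168.10.73:9200/_nodes/transport?pretty"
# In the response, transport.publish_address is the host:port that the seed on the
# other cluster must point at (typically 9300), not the HTTP port 9200.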
You need to look at the logs of both your Elasticsearch clusters and see if there is anything that would indicate an issue while one tries to connect to the remote cluster.
Cluster logs:
[2023-04-05T17:13:03,678][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-security]
[2023-04-05T17:13:03,679][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-shutdown]
[2023-04-05T17:13:03,679][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-sql]
[2023-04-05T17:13:03,679][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-stack]
[2023-04-05T17:13:03,679][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-text-structure]
[2023-04-05T17:13:03,680][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-voting-only-node]
[2023-04-05T17:13:03,680][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-watcher]
[2023-04-05T17:13:03,680][INFO ][o.e.p.PluginsService ] [techbase] loaded module [x-pack-write-load-forecaster]
[2023-04-05T17:13:03,680][INFO ][o.e.p.PluginsService ] [techbase] no plugins loaded
[2023-04-05T17:13:08,856][WARN ][stderr ] [techbase] Apr 05, 2023 5:13:08 PM org.apache.lucene.store.MemorySegmentIndexInputProvider <init>
[2023-04-05T17:13:08,869][INFO ][o.e.e.NodeEnvironment ] [techbase] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [25.6gb], net total_space [30.3gb], type>
[2023-04-05T17:13:08,870][INFO ][o.e.e.NodeEnvironment ] [techbase] heap size [3.8gb], compressed ordinary object pointers [true]
<old, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml, data_frozen, ingest]
[2023-04-05T17:13:13,604][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [techbase] [controller/2744] [Main.cc@123] controller (64 bit): Version 8.7.0 (Build e4e1c23721e58c) Copyrigh>
[2023-04-05T17:13:13,961][INFO ][o.e.x.s.Security ] [techbase] Security is enabled
[2023-04-05T17:13:14,692][INFO ][o.e.x.s.a.s.FileRolesStore] [techbase] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2023-04-05T17:13:15,303][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [techbase] Auto-configuration will not generate a password for the elastic built-in superuser, as>
[2023-04-05T17:13:15,802][INFO ][o.e.x.p.ProfilingPlugin ] [techbase] Profiling is enabled
[2023-04-05T17:13:16,953][INFO ][o.e.t.n.NettyAllocator ] [techbase] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, sugge>
[2023-04-05T17:13:16,992][INFO ][o.e.i.r.RecoverySettings ] [techbase] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2023-04-05T17:13:17,065][INFO ][o.e.d.DiscoveryModule ] [techbase] using discovery type [multi-node] and seed hosts providers [settings]
[2023-04-05T17:13:19,241][INFO ][o.e.n.Node ] [techbase] initialized
[2023-04-05T17:13:19,245][INFO ][o.e.n.Node ] [techbase] starting ...
[2023-04-05T17:13:19,271][INFO ][o.e.x.s.c.f.PersistentCache] [techbase] persistent cache index loaded
[2023-04-05T17:13:19,272][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [techbase] deprecation component started
[2023-04-05T17:13:19,416][INFO ][o.e.t.TransportService ] [techbase] publish_address {192.168.10.67:9300}, bound_addresses {192.168.10.67:9300}
[2023-04-05T17:13:19,611][INFO ][o.e.b.BootstrapChecks ] [techbase] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2023-04-05T17:13:19,617][INFO ][o.e.c.c.ClusterBootstrapService] [techbase] this node has not joined a bootstrapped cluster yet; [cluster.initial_master_nodes] is set to [tech>
[2023-04-05T17:13:19,633][INFO ][o.e.c.c.Coordinator ] [techbase] setting initial configuration to VotingConfiguration{9QtVsolBTxmgAdfTupmgGg}
[2023-04-05T17:13:19,933][INFO ][o.e.c.s.MasterService ] [techbase] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {techbase}{9QtVsolBTxmgAdfTupmgGg}{T0WoZUWuQpicF7>
[2023-04-05T17:13:19,996][INFO ][o.e.c.c.CoordinationState] [techbase] cluster UUID set to [fFznsIZ8QDCWph5g6vzaGQ]
[2023-04-05T17:13:20,048][INFO ][o.e.c.s.ClusterApplierService] [techbase] master node changed {previous [], current [{techbase}{9QtVsolBTxmgAdfTupmgGg}{T0WoZUWuQpicF741EmucrQ}>
[2023-04-05T17:13:20,126][INFO ][o.e.r.s.FileSettingsService] [techbase] starting file settings watcher ...
[2023-04-05T17:13:20,147][INFO ][o.e.h.AbstractHttpServerTransport] [techbase] publish_address {192.168.10.67:9200}, bound_addresses {[::]:9200}
[2023-04-05T17:13:20,154][INFO ][o.e.c.c.NodeJoinExecutor ] [techbase] node-join: [{techbase}{9QtVsolBTxmgAdfTupmgGg}{T0WoZUWuQpicF741EmucrQ}{techbase}{192.168.10.67}{192.168.1>
[2023-04-05T17:13:20,136][INFO ][o.e.r.s.FileSettingsService] [techbase] file settings service up and running [tid=60]
[2023-04-06T00:05:24,879][WARN ][o.e.c.c.Coordinator ] [techabse] This node is a fully-formed single-node cluster with cluster UUID [fFznsIZ8QDCWph5g6vzaGQ], but it is configured as if to discover other nodes and form a multi-node cluster via the [discovery.seed_hosts=[192.168.10.73:9300, 192.168.10.74:9300]] setting. Fully-formed cluster do not attempt to discover other nodes, and nodes with different cluster UUIDs cannot belong to the same cluster. The cluster UUID persists across restarts and can only be changed by deleting the contents of the node's data path(s). Remove the discovery configuration to suppress this message.
All I want to do is set up cross-cluster search, but I am having trouble connecting the clusters.
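For what it's worth, the warning quoted above is about discovery.seed_hosts, which controls how nodes of the same cluster find each other, not about remote clusters for cross-cluster search. A minimal elasticsearch.yml sketch for the node at 192.168.10.67, assuming the addresses and the Client_1 alias that appear earlier in this thread (everything else is an assumption):

network.host: 192.168.10.67

# discovery.seed_hosts only lists nodes that should join this same cluster.
# Listing nodes of other clusters here triggers the warning above, so it can be removed:
# discovery.seed_hosts: ["192.168.10.73:9300", "192.168.10.74:9300"]

# Remote clusters for cross-cluster search are configured separately and point at
# the other cluster's transport port; they do not affect cluster membership:
cluster.remote.Client_1.seeds: ["192.168.10.73:9300"]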