How to set up a second Elasticsearch cluster

Hi,

I want to create a second Elasticsearch cluster for testing and view its data in Kibana.

My end goal is to set up Fluentd so I can store logs in Elasticsearch and view them in Kibana, but I think I should try this on a second Elasticsearch cluster first to see how it works.

The first/initial Elasticsearch cluster was set up using the following:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: es-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
---
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  http:
    tls:
      certificate:
        secretName: es-cert
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: es-data
        annotations:
          volume.beta.kubernetes.io/storage-class: es-gp2
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: es-gp2
        resources:
          requests:
            storage: 10Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms:
        native:
          native1: 
            order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es

To what extent can I (should I) extend the above to have a second Elasticsearch cluster?

  1. Is it possible for me to simply create a new kind: Elasticsearch and change the name to something more descriptive of a second cluster, e.g. cluster-2?
  2. Is it possible to point cluster-2 to new storage, or should I use the existing storage?
  3. Is it possible to use the same Kibana instance to view data across multiple clusters? I can see the Kibana definition has an elasticsearchRef, which is a reference to a single cluster.

Thank you.

You will need Docker or a VM to run multiple Elasticsearch nodes on the same PC.

  1. You can change the cluster name to your liking. Just set cluster.name: whatever in elasticsearch.yml.
  2. Yes, you can point to new storage with path.data: C:\\path\\to\\your\\data (on Windows).
  3. You will have to decide which cluster you want to look at. In kibana.yml you can point elasticsearch.hosts at the node you want Kibana to connect to (see the sketch after this list).
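
A minimal sketch of those settings (the cluster name, data path, and host below are placeholders, not values from this thread):

# elasticsearch.yml for the second cluster
cluster.name: my-second-cluster
path.data: C:\\path\\to\\your\\data

# kibana.yml, pointing Kibana at the cluster you want to view
elasticsearch.hosts: ["http://your-elastic-ip:9200"]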

Cheers,
defalt

You can create as many Elasticsearch clusters as you like provided that each one has a unique name (metadata.name) and there are enough resources (CPU, memory, and disks) available in your Kubernetes cluster.

You can reuse the storage class name (es-gp2 in this case) as it merely describes parameters for persistent volumes. When the operator is asked to create a new Elasticsearch cluster, it creates new persistent volumes for each node in the cluster using the configuration specified in volumeClaimTemplates. You can read more on this at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html and https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html.
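
For illustration only, here is a trimmed-down sketch mirroring the manifest above, using cluster-2 (the example name from the question) as the new metadata.name:

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: cluster-2                  # unique name for the second cluster
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: es-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: es-gp2   # reuse the existing storage class; the operator provisions new volumes
        resources:
          requests:
            storage: 10Gi
    config:
      node.store.allow_mmap: false

The operator creates a separate set of pods, services, and persistent volume claims for cluster-2, completely independent of data-es.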

You can't connect more than one Elasticsearch cluster to Kibana. You could set up cross-cluster search to search across clusters. However, you seem to be using the beta version of ECK and might need to upgrade to at least version 1.1.2 to get better support for setting up remote clusters as documented here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-remote-clusters.html.
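
If you do go down that route, the linked documentation describes a remoteClusters field on the Elasticsearch spec. As a rough sketch only (check the docs for the exact ECK and API versions that support this field):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  remoteClusters:
  - name: cluster-2
    elasticsearchRef:
      name: cluster-2

This only makes cluster-2 searchable from data-es via cross-cluster search; Kibana still connects to the single cluster named in its elasticsearchRef.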

@charith-elastic

  1. When I create a second Elasticsearch cluster, is there anything else I need to configure, or do I just need to change the metadata.name? I am using ECK (https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html), which I believe does a lot of the heavy lifting, so I'm wondering whether I need to configure something else or whether it will automatically pick up the second cluster.

Do you have a method I can use to confirm the second cluster is working alongside the first one?

  2. How can I upgrade my ECK? This link (https://www.elastic.co/guide/en/cloud-on-k8s/1.0/k8s-upgrading-eck.html) suggests just following the quickstart guide ("To upgrade from 1.0.0-beta1, follow the Quickstart."). So is it essentially just running the following?

kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml

After running this, do I need to re-apply the kind: Elasticsearch manifest and the service?

In this particular case, you can simply make a copy of your existing YAML file containing the Elasticsearch definition, change metadata.name, and kubectl apply to get a new Elasticsearch cluster identical to your existing one except for the cluster name.
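
For example (cluster-2.yaml is just a placeholder file name for the copy), you can apply it and then confirm that both clusters are running side by side:

kubectl apply -f cluster-2.yaml
kubectl get elasticsearch

The second command lists every Elasticsearch resource managed by the operator; once both data-es and the new cluster report green health and the expected node counts, the second cluster is working alongside the first.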

Just in case it wasn't clear, you are not limited to a single Elasticsearch cluster (or Kibana or APM Server instance) with ECK. You can create as many as you like, with differing configurations. That's the advantage of using an operator.

You can update an existing installation by running the kubectl apply command as described in the documentation. Existing resources (Elasticsearch, Kibana, APM Server) will continue to work (they may go through a rolling restart though). However, I would advise reading the release notes to make sure there are no breaking changes that affect you, testing the upgrade on a non-production cluster first, and taking backups of your data before upgrading production.
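
As a sketch of how you might sanity-check that (elastic-system is the default namespace from the ECK install; adjust if yours differs), after running the kubectl apply command above you can watch the operator come back up and confirm your existing resources stay healthy:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
kubectl get elasticsearch,kibana

You do not need to re-apply your Elasticsearch or Kibana manifests; the upgraded operator picks up the existing resources.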


Can you point me to some documentation on how I can back up the existing indices? Also, something to note: I am using AWS volumes as the StorageClass. Does that make it easier/harder?

You can use snapshot and restore to create backups. There is an example in the ECK documentation using GCS. You can configure the S3 repository plugin in a similar way.
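
A rough sketch of the S3 variant, following the same pattern as the GCS example in the docs (the bucket, secret, and repository names below are placeholders, and the exact field layout may differ slightly between ECK versions):

# In the Elasticsearch spec: install the repository-s3 plugin and load AWS credentials into the keystore
spec:
  secureSettings:
  - secretName: s3-credentials   # Secret containing s3.client.default.access_key and s3.client.default.secret_key
  nodeSets:
  - name: default
    podTemplate:
      spec:
        initContainers:
        - name: install-plugins
          command: ["sh", "-c", "bin/elasticsearch-plugin install --batch repository-s3"]

Then register the repository and take a snapshot through the Elasticsearch snapshot API (e.g. from Kibana Dev Tools):

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": { "bucket": "my-backup-bucket" }
}

PUT _snapshot/my_s3_repository/snapshot_1?wait_for_completion=true

Using AWS EBS volumes as the StorageClass does not change this much: snapshots are written to the S3 bucket, not to the EBS volumes backing the cluster.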