Hello everyone,
Our current production Elasticsearch cluster for log collection is manually managed and runs on AWS.
I'm recreating the same cluster using ECK, deployed with Helm under Terraform.
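For context, the ECK operator itself comes from the official Helm chart, installed through Terraform roughly like this (the release name, chart version and namespace below are placeholders, not my exact values):

resource "helm_release" "eck_operator" {
  name             = "elastic-operator"    # placeholder release name
  repository       = "https://helm.elastic.co"
  chart            = "eck-operator"
  version          = "2.5.0"               # placeholder chart version
  namespace        = "elastic-system"      # placeholder namespace
  create_namespace = true
}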
I was able to replicate all the features (S3 repository for snapshots, ingest pipelines, index templates, etc.) and deploy everything, so the first deployment works perfectly.
But when I try to update the cluster (changing the ES version from 8.3.2 to 8.5.2), I get this error:
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to kubernetes_manifest.elasticsearch_deploy, provider "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced an unexpected new
│ value: .object: wrong final value type: attribute "spec": attribute "nodeSets": tuple required.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
I stripped down my Elasticsearch and Kibana manifests to try to isolate the problem.
In my main.tf I have:
resource "kubernetes_manifest" "elasticsearch_deploy" {
field_manager {
force_conflicts = true
}
computed_fields = ["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"]
  manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
    version      = var.elastic_stack_version
    nodes        = var.logging_elasticsearch_nodes_count
    cluster_name = local.cluster_name
    namespace    = local.stack_namespace
  }))
}
resource "kubernetes_manifest" "kibana_deploy" {
field_manager {
force_conflicts = true
}
depends_on = [kubernetes_manifest.elasticsearch_deploy]
computed_fields = ["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"]
manifest = yamldecode(templatefile("config/kibana.yaml", {
version = var.elastic_stack_version
cluster_name = local.cluster_name
namespace = local.stack_namespace
}))
}
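For completeness, the variables and locals referenced above look roughly like this (the concrete values are placeholders, apart from the version):

variable "elastic_stack_version" {
  type    = string
  default = "8.3.2"
}

variable "logging_elasticsearch_nodes_count" {
  type    = number
  default = 3 # placeholder
}

locals {
  cluster_name    = "logging"        # placeholder
  stack_namespace = "elastic-system" # placeholder
}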
and my manifests are:

config/elasticsearch.yaml:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  # Copy the specified node labels as pod annotations and use them as environment
  # variables in the Pods; spreads a NodeSet across the availability zones of a
  # Kubernetes cluster. Used for AZ awareness.
  annotations:
    eck.k8s.elastic.co/downward-node-labels: "topology.kubernetes.io/zone"
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  volumeClaimDeletePolicy: DeleteOnScaledownAndClusterDeletion
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: ${cluster_name}
    logs:
      elasticsearchRefs:
        - name: ${cluster_name}
  nodeSets:
    - name: logging-nodes
      count: ${nodes}
      config:
        node.store.allow_mmap: false
config/kibana.yaml:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  count: 1
  elasticsearchRef:
    name: ${cluster_name}
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: ${cluster_name}
    logs:
      elasticsearchRefs:
        - name: ${cluster_name}
  podTemplate:
    metadata:
      labels:
        stack_name: ${stack_name}
        stack_repository: ${stack_repository}
    spec:
      serviceAccountName: ${service_account}
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: "1"
When I change the version to test a cluster upgrade, I get the error mentioned at the beginning of this post.
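Concretely, the only thing that changes between the working apply and the failing one is the version passed to the templates, i.e. roughly:

variable "elastic_stack_version" {
  type    = string
  default = "8.5.2" # was "8.3.2" on the working deployment
}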
Is this an ECK operator bug, or am I doing something wrong?
Do I need to add some other field to 'computed_fields' and remove 'force_conflicts'?
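For example, would something along these lines be the right direction? (The extra 'spec.version' entry is purely a guess on my part.)

resource "kubernetes_manifest" "elasticsearch_deploy" {
  # guess: no field_manager / force_conflicts block, and one more computed field
  # ("spec.version" is just an example, I don't know which entry would be needed)
  computed_fields = ["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "spec.version", "status"]

  manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
    version      = var.elastic_stack_version
    nodes        = var.logging_elasticsearch_nodes_count
    cluster_name = local.cluster_name
    namespace    = local.stack_namespace
  }))
}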