We are upgrading Elasticsearch from version 7.17 to 8.5.1 on Kubernetes, along with Logstash and Kibana. We do not want to remove the PVCs, because that would lead to data loss. During the upgrade we continuously get the following ERROR on the master nodes:
"log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elasticsearch-master-0","elasticsearch.cluster.name":"elasticsearch","error.type":"java.lang.IllegalStateException","error.message":"node does not have the data role but has shard data: [/usr/share/elasticsearch/data/indices/yl7n8CeDTqGmVBMRNMrCRQ/0, /usr/share/elasticsearch/data/indices/cnOc-7LRQeGXVF1gFdSAlw/0, /usr/share/elasticsearch/data/indices/sakNU1p-QbSnaJR5Q_HqMQ/0, /usr/share/elasticsearch/data/indices/O91BqBguQtWgjUCHa1BPFg/0]. Use 'elasticsearch-node repurpose' tool to clean up","error.stack_trace":"java.lang.IllegalStateException: node does not have the data role but has shard data: [/usr/share/elasticsearch/data/indices/yl7n8CeDTqGmVBMRNMrCRQ/0, /usr/share/elasticsearch/data/indices/cnOc-7LRQeGXVF1gFdSAlw/0, /usr/share/elasticsearch/data/indices/sakNU1p-QbSnaJR5Q_HqMQ/0, /usr/share/elasticsearch/data/indices/O91BqBguQtWgjUCHa1BPFg/0]. Use 'elasticsearch-node repurpose' tool to clean up\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.env.NodeEnvironment.ensureNoShardData(NodeEnvironment.java:1296)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:323)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.node.Node.(Node.java:474)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.node.Node.(Node.java:318)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.bootstrap.Elasticsearch$2.(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.5.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n"}
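For context, our reading of the error is that the master pods are reusing PVCs that previously held shard data (under 7.17 the nodes apparently carried the data role as well, or shards ended up allocated there), and 8.x refuses to boot a master-only node on top of shard data. The log itself points at the `elasticsearch-node repurpose` tool; below is the kind of invocation we have been considering, as a sketch only, not a verified procedure (the pod name matches our StatefulSet, but the exec approach is our assumption, and since `elasticsearch-node` must run while the node process is stopped, the crash-looping container's command would first have to be overridden, e.g. to `sleep infinity`):

```
# Sketch: with the container command overridden so Elasticsearch itself is
# NOT running, exec in and let the repurpose tool remove the shard data
# that a master-only node is no longer allowed to keep.
kubectl exec -it elasticsearch-master-0 -- \
  /usr/share/elasticsearch/bin/elasticsearch-node repurpose
```

Our concern is that, as we understand it, this tool deletes the listed shard directories outright, which is why we would rather understand the root cause before running it.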
We have separate StatefulSets for the Elasticsearch master and data nodes. Please find the master values below:
clusterName: elasticsearch
nodeGroup: master
masterService: elasticsearch
roles: master
replicas: 3
esConfig:
  elasticsearch.yml: >
    network.host: 0.0.0.0
    transport.host: 0.0.0.0
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    action.auto_create_index: true
    action.destructive_requires_name: true
esJvmOptions: {}
envFrom:
secretMounts:
  - name: elastic-certificates-p12
    secretName: elastic-certificates-p12
    path: /usr/share/elasticsearch/config/certs
hostAliases:
image: docker.elastic.co/elasticsearch/elasticsearch
imageTag: 8.5.1
imagePullPolicy: IfNotPresent
podAnnotations:
  traffic.sidecar.istio.io/excludeInboundPorts: "9300"
  traffic.sidecar.istio.io/excludeOutboundPorts: "9300"
labels: {}
esJavaOpts: ""
serviceAccountAnnotations: {}
serviceAccountName: ""
automountToken: false
podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
persistence:
  enabled: true
  labels:
    enabled: false
  annotations: {}
antiAffinityTopologyKey: kubernetes.io/hostname
antiAffinity: hard
nodeAffinity: {}
podManagementPolicy: Parallel
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300
service:
  enabled: true
  labels: {}
  labelsHeadless: {}
  type: ClusterIP
  publishNotReadyAddresses: false
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges:
  externalTrafficPolicy: ""
updateStrategy: RollingUpdate
maxUnavailable: 1
nodeSelector:
  env: dev-node
tolerations:
ingress:
  hosts:
    - host: chart-example.local
      paths:
        - path: /
lifecycle: {}
sysctlInitContainer:
  enabled: false
networkPolicy:
  http:
    enabled: false
  transport:
    enabled: false
tests:
  enabled: false
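For completeness, the data StatefulSet uses the same chart with an analogous values file. The fragment below is an illustrative sketch of only the keys that differ (names match our cluster; the remaining keys mirror the master values above), not a verbatim copy:

```
clusterName: elasticsearch
nodeGroup: data
masterService: elasticsearch
roles: data
persistence:
  enabled: true
```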
Do you have any idea why we are getting this error during the upgrade, even though we never removed the PVCs?