Elasticsearch state "Invalid" after trying to install plugin

Hi folks!
I did an ECK installation on GKE with v1beta1. After a month I resized the PVs to accommodate the new storage needs. Everything went smoothly: the PVs resized correctly and show up in the monitoring section of Kibana with the correct values.
A few days ago I was tasked with backing up the indices to GCS, so I needed to install the repository-gcs plugin. I followed the guide on elastic.co, but it seems something went wrong, and now when I run kubectl get elasticsearch the output says
PHASE Invalid
Everything I've tried to restore its health has been useless. I have 3 master nodes, so I brought down one master node to see if a new pod would spin up correctly; unfortunately, the pod spins up but doesn't join the cluster.
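For context, the plugin is installed through the initContainer approach from the ECK docs; trimmed down, the change I applied to each nodeSet looks roughly like this:

  podTemplate:
    spec:
      initContainers:
      - name: install-plugins
        command:
        - sh
        - -c
        - |
          bin/elasticsearch-plugin install --batch repository-gcs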
My configuration is as follows:

Name:         siem-test
Namespace:    default
Labels:
Annotations:  common.k8s.elastic.co/controller-version: 0.0.0-UNKNOWN
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"elasticsearch.k8s.elastic.co/v1beta1","kind":"Elasticsearch","metadata":{"annotations":{},"name":"siem-test","namespace":...
API Version:  elasticsearch.k8s.elastic.co/v1beta1
Kind:         Elasticsearch
Metadata:
  Creation Timestamp:  2019-10-25T08:44:03Z
  Generation:          23
  Resource Version:    27989855
  Self Link:           /apis/elasticsearch.k8s.elastic.co/v1beta1/namespaces/default/elasticsearches/siem-test
  UID:                 98f623e5-f703-11e9-903e-42010a8401ee
Spec:
  Http:
    Service:
      Metadata:
        Creation Timestamp:
      Spec:
        Ports:
          Port:         9200
          Target Port:  9200
        Type:           LoadBalancer
    Tls:
      Certificate:
  Node Sets:
    Config:
      Node . Data:                false
      Node . Ingest:              false
      Node . Master:              true
      Node . Ml:                  false
      Node . Store . Allow Mmap:  false
      Xpack . Security . Authc . Realms:
        Native:
          Native 1:
            Order:  1
    Count:  3
    Name:   node-master
    Pod Template:
      Metadata:
        Labels:
          Es:  master-node
      Spec:
        Containers:
          Env:
            Name:   ES_JAVA_OPTS
            Value:  -Xms2g -Xmx2g
          Name:  elasticsearch
          Resources:
            Limits:
              Cpu:     1
              Memory:  4Gi
        Init Containers:
          Command:
            sh
            -c
            bin/elasticsearch-plugin install --batch repository-gcs
          Name:  install-plugins
    Volume Claim Templates:
      Metadata:
        Name:  elasticsearch-data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         10Gi
        Storage Class Name:  ssd
    Config:
      Node . Data:                true
      Node . Ingest:              true
      Node . Master:              false
      Node . Ml:                  true
      Node . Store . Allow Mmap:  false
      Xpack . Security . Authc . Realms:
        Native:
          Native 1:
            Order:  1
    Count:  5
    Name:   node-data
    Pod Template:
      Metadata:
        Labels:
          Es:  data-node
      Spec:
        Containers:
          Env:
            Name:   ES_JAVA_OPTS
            Value:  -Xms2g -Xmx2g
          Name:  elasticsearch
          Resources:
            Limits:
              Cpu:     2
              Memory:  4Gi
        Init Containers:
          Command:
            sh
            -c
            bin/elasticsearch-plugin install --batch repository-gcs
          Name:  install-plugins
    Volume Claim Templates:
      Metadata:
        Name:  elasticsearch-data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         200Gi
        Storage Class Name:  ssd
  Secure Settings:
    Secret Name:  gcs-credentials
  Update Strategy:
    Change Budget:
  Version:  7.4.0
Status:
  Available Nodes:  7
  Health:           green
  Phase:            Invalid
Events:

I don't want to lose any of the data already stored on the nodes. I have even manually installed the repository-gcs plugin on each pod, but since the Elasticsearch process would have to be restarted to pick it up, the plugin isn't recognised in the Kibana interface, so I can't back up the indices and rebuild the whole cluster that way.
Any thoughts on how I can narrow down this problem and try to solve it?
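For reference, listing which plugins the running nodes have actually loaded can be done roughly like this (assuming the elastic user and the siem-test-es-elastic-user secret that ECK creates; the address would be the LoadBalancer IP of the siem-test-es-http service):

PASSWORD=$(kubectl get secret siem-test-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://<load-balancer-ip>:9200/_cat/plugins?v"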
Any help is highly appreciated!
Radu

There are a few places I'd recommend checking. You can describe the Elasticsearch resource and the pods to see if there are any relevant events. For the new pods that fail to join the cluster, you can look at the container logs to see why that might be happening. You can also look at the operator logs to see why the phase is coming up as Invalid. See the troubleshooting doc here for more info on each of these steps:
https://www.elastic.co/guide/en/cloud-on-k8s/1.0/k8s-troubleshooting.html
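For example, something along these lines (assuming the operator was installed in the default elastic-system namespace):

kubectl describe elasticsearch siem-test
kubectl describe pod -l elasticsearch.k8s.elastic.co/cluster-name=siem-test
kubectl logs <failing-pod-name> -c elasticsearch
kubectl logs -n elastic-system statefulset/elastic-operator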

It may also be worth updating to the 1.0 version of the operator, as it includes a variety of bug fixes.

Hey @Anya_Sabo
I tried going through all of the troubleshooting steps but I'm at a dead end.
I've pulled these logs from the elastic-operator, though:

> {"level":"info","@timestamp":"2020-01-27T08:55:47.487Z","logger":"controller-runtime.manager","message":"starting metrics server","ver":"1.0.0-beta1-84792e30","path":"/metrics"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"apmserver-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"license-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"kibana-association-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"elasticsearch-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"kibana-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"trial-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.587Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-beta1-84792e30","controller":"apm-es-association-controller"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"license-controller","worker count":1}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"kibana-association-controller","worker count":1}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"kibana-association-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"apmserver-controller","worker count":1}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"elasticsearch-controller","worker count":1}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"annotation","message":"Resource was created with older version of operator, will not take action","ver":"1.0.0-beta1-84792e30","controller_version":"1.0.0-beta1","resource_controller_version":"0.0.0-UNKNOWN","namespace":"default","name":"siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"kibana-association-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test-exposed","took":0.000577217}
> {"level":"debug","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Successfully Reconciled","ver":"1.0.0-beta1-84792e30","controller":"kibana-association-controller","request":"default/siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.688Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"kibana-controller","worker count":1}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test","took":0.001165524}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"annotation","message":"Resource was created with older version of operator, will not take action","ver":"1.0.0-beta1-84792e30","controller_version":"1.0.0-beta1","resource_controller_version":"0.0.0-UNKNOWN","namespace":"default","name":"siem-test"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test","took":0.000539069}
> {"level":"debug","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Successfully Reconciled","ver":"1.0.0-beta1-84792e30","controller":"elasticsearch-controller","request":"default/siem-test"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"kibana-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"annotation","message":"Resource was created with older version of operator, will not take action","ver":"1.0.0-beta1-84792e30","controller_version":"1.0.0-beta1","resource_controller_version":"0.0.0-UNKNOWN","namespace":"default","name":"siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"kibana-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":1,"namespace":"default","name":"siem-test-exposed","took":0.00036311}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"trial-controller","worker count":1}
> {"level":"debug","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Successfully Reconciled","ver":"1.0.0-beta1-84792e30","controller":"kibana-controller","request":"default/siem-test-exposed"}
> {"level":"info","@timestamp":"2020-01-27T08:55:47.689Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-beta1-84792e30","controller":"apm-es-association-controller","worker count":1}

I can see the operator reports the resource's controller version as 0.0.0-UNKNOWN, but I haven't done any upgrade or anything similar.
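To double-check, the annotation the operator compares against can be read straight off the resource, e.g.:

kubectl get elasticsearch siem-test -o jsonpath='{.metadata.annotations}'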
Maybe it has something to do with how I defined the secureSettings when trying to install repository-gcs, although this error was already there before I tried to install the plugin and add the gcs-credentials?!
I presume the secureSettings option has something to do with this because I've read a blog post here and this GitHub issue.
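For reference, here is roughly how the secure settings are wired up on my side (the keystore key name is my assumption of the default GCS client setting, and the file name is a placeholder; the secret name matches the spec above). The secret was created with:

kubectl create secret generic gcs-credentials \
  --from-file=gcs.client.default.credentials_file=service-account.json

and referenced in the Elasticsearch spec with:

spec:
  secureSettings:
  - secretName: gcs-credentials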
Now I'm wondering: if I update the ECK deployment to the latest version, could that cause a total failure of my cluster and make the data unrecoverable? (I have the PVs, but I don't know how I could mount these existing PVs to a new ES cluster's data nodes so that the existing data shows up in the newly created cluster.)
Thank you for your support!