Can't log in anymore after disk full

Hi,

Running ES 5.2.0 with Kibana 5.2.0, Logstash 5.2.0, and X-Pack as well. I'm on a trial and I'm testing out all the new features, but I have let my indices grow out of control and the data nodes have no free space. A unique detail about my setup is that I have everything set up in Kubernetes, so the data directory is writing to an emptyDir, meaning if I restart the data node container the data will be gone.

I think this has caused me to be unable to log into the ES cluster. I can still log into Kibana, though. Is there something I can do without having to reinstall everything? This isn't production data or anything critical, so I don't mind rebuilding, but it's just a hassle.

Thanks!

Tony

Just delete the indices via the ES APIs and you should be good.
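For example, assuming the cluster is reachable at localhost:9200 (the index name below is just a placeholder; substitute your own host, credentials, and index names), it might look like:

```shell
# List all indices first to see what's taking up space
curl -u elastic 'http://localhost:9200/_cat/indices?v'

# Delete a single oversized index (name here is hypothetical)
curl -u elastic -XDELETE 'http://localhost:9200/logstash-2017.01.01'
```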

The problem is I can't, because I'm not authorized to do so: the credentials don't work.

Can you provide more details about the error you get when trying to log in?

  • What user are you using?
  • Which realm is that user authenticated against (native, file, LDAP, etc.)?
  • What output do you get when you run this? (you might need to change the URL or username to match your environment)
       curl -u elastic 'http://localhost:9200/?error_trace=true'

Sure.

  • What user are you using?
    elastic
  • Which realm is that user authenticated against (native, file, LDAP, etc.)?
    Native? I'm not sure, but I think it is.

    {
      "error": {
        "root_cause": [
          {
            "type": "security_exception",
            "reason": "failed to authenticate user [elastic]",
            "header": {
              "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
            }
          }
        ],
        "type": "security_exception",
        "reason": "failed to authenticate user [elastic]",
        "header": {
          "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
        }
      },
      "status": 401
    }

  • What output do you get when you run this? (you might need to change the URL or username to match your environment)
    I get this:
    Enter host password for user 'elastic':

And what output do you get after you enter the password?

I ended up re-provisioning my cluster. In all honesty, this was way too difficult to troubleshoot, so I'm trying out the cloud solution.