I'm trying to move my Elastic Stack to 7.16.2 (from 6.8.22) in docker-compose, but it fails to generate certificates

With the Log4J vulnerability I've been tasked to update our quite dated Elasticsearch stack.
We were on 6.5.1, which I moved to 6.8.22 without issues.
Now I want to move it to 7.16.2.
A rolling upgrade isn't really necessary, as this system is used for log analysis, mostly retrospectively, so I've just shut everything down to get this working.
I looked at this guide: Encrypting communications in an Elasticsearch Docker Container | Elasticsearch Guide [7.16] | Elastic
And I can see I need to enable passwords and certificates.
So I modified the docker-compose file, and now I'm trying to get the certificates generated, but so far without luck.
When I run
docker-compose -f create-certs.yml run --rm create_certs
I get an error:
unzip: cannot find or open /certs/bundle.zip, /certs/bundle.zip.zip or /certs/bundle.zip.ZIP.
So as I see it, it doesn't create the certs.
How do I solve that?

I believe the problem lies with the certutil, but I cannot determine a way to resolve the issue.

[root@046483922bd9 elasticsearch]# bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip
Exception in thread "main" java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config/certificates/instances.yml
        at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
        at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
        at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
        at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
        at java.base/java.nio.file.Files.newByteChannel(Files.java:375)
        at java.base/java.nio.file.Files.newByteChannel(Files.java:426)
        at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
        at java.base/java.nio.file.Files.newInputStream(Files.java:160)
        at java.base/java.nio.file.Files.newBufferedReader(Files.java:2916)
        at java.base/java.nio.file.Files.newBufferedReader(Files.java:2948)
        at org.elasticsearch.xpack.security.cli.CertificateTool.parseFile(CertificateTool.java:940)
        at org.elasticsearch.xpack.security.cli.CertificateTool.parseAndValidateFile(CertificateTool.java:913)
        at org.elasticsearch.xpack.security.cli.CertificateTool$CertificateCommand.getCertificateInformationList(CertificateTool.java:405)
        at org.elasticsearch.xpack.security.cli.CertificateTool$GenerateCertificateCommand.execute(CertificateTool.java:695)
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
        at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:80)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
        at org.elasticsearch.cli.Command.main(Command.java:79)
        at org.elasticsearch.xpack.security.cli.CertificateTool.main(CertificateTool.java:138)

NoSuchFileException: /usr/share/elasticsearch/config/certificates/instances.yml

certutil has some quirks in the way it resolves the current directory. You might need to pass an absolute path on the command line so that it finds the correct file.
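For example, something along these lines; the paths here are taken from the stack trace above, so adjust them to wherever the file actually lives:

```shell
# Run certutil with absolute paths so it does not depend on the
# working directory it is invoked from (paths assumed from the
# stack trace above):
bin/elasticsearch-certutil cert --silent --pem \
  --in /usr/share/elasticsearch/config/certificates/instances.yml \
  --out /certs/bundle.zip
```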

Hi Tim
Ahaa, interesting, but I don't see a file called instances.yml under /usr/share/elasticsearch/config/certificates, so where is it supposed to be? I also tried searching for it, but without luck so far.
(I'm pulling the 7.16.2 image for docker in this case).

I created the file and populated it with this, and it seemed to work. I'm having other, unrelated issues now, so I think the cert creation succeeded.



instances:
  - name: es01
    dns:
      - es01 
      - localhost
    ip:
      - 127.0.0.1

  - name: es02
    dns:
      - es02
      - localhost
    ip:
      - 127.0.0.1
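With that instances.yml in place, the bundle can be generated and unpacked; a hedged sketch of the manual steps, run inside the Elasticsearch container (paths taken from the error messages earlier in the thread):

```shell
# Generate PEM certificates for the instances listed in instances.yml
bin/elasticsearch-certutil cert --silent --pem \
  --in config/certificates/instances.yml \
  --out /certs/bundle.zip

# Unpack the bundle; in --pem mode it contains a cert/key pair per
# instance plus the CA, e.g. /certs/es01/es01.crt, /certs/es01/es01.key,
# /certs/ca/ca.crt
unzip /certs/bundle.zip -d /certs
```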


Ok, I got everything configured, and I think I lost all my data :cry: I didn't realize the -v (on docker-compose down) also removes volumes until it was too late.

So now I'm trying to get it up and running to see if the snapshots I have at least contain some data I can restore; losing all the work in Kibana especially would be devastating.

Now, I can connect to each node, and I can see that it has certificates configured, and I also get the 'tagline' shown, so it looks to me like Elasticsearch itself is running.

I would like to get my elastichq up and running again, but it doesn't connect to the cluster, and I'm not sure why. I can ping the URLs from inside the elastichq container, but I just get 'cannot connect'.
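One thing worth checking from inside the elastichq container is whether the nodes are reachable over HTTPS with credentials at all, since ping only proves name resolution, not that TLS and authentication succeed. A hedged sketch (the node name and password variable are assumptions):

```shell
# -k skips CA verification for a quick test; mount the generated ca.crt
# and use --cacert instead once the basics work. es01 and
# ELASTIC_PASSWORD are assumptions.
curl -k -u "elastic:${ELASTIC_PASSWORD}" https://es01:9200
```

If that returns the cluster banner, the remaining problem is likely that elastichq is still pointed at a plain http:// URL or is missing the credentials.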

I have my backup folder on the host file system, and it contains 28 GB of data, which is not a lot, but it might contain something. However, when I send a GET _snapshot I just get a {} response.
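An empty {} from GET _snapshot usually just means no snapshot repository is registered; repository registrations live in the cluster state, so they were lost with the volumes and have to be re-created before the existing snapshot files become visible. A hedged sketch (the repository name and container path are assumptions; the path must be listed under path.repo in elasticsearch.yml and mounted into the container):

```shell
# Re-register the filesystem repository pointing at the existing
# backup folder (my_backup and the path are assumptions)
curl -k -u "elastic:${ELASTIC_PASSWORD}" -X PUT \
  "https://es01:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"path": "/usr/share/elasticsearch/backup"}}'

# Then list the snapshots it contains
curl -k -u "elastic:${ELASTIC_PASSWORD}" \
  "https://es01:9200/_snapshot/my_backup/_all"
```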

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.