Upgrading from 6.6.8 to 7.5.0 and having all sorts of problems - please help

Right now my PRODUCTION instance of this is down... Any help would be greatly appreciated.

I'm running Docker containers on separate VMs. I inherited this setup and have upgraded it in the past from 5.x to 6.6.1. I just upgraded to 6.6.8, and then I ran into issues going to 7.5.0.
This is a one-node cluster...

elasticsearch.docker-compose.yaml

version  : '3'
networks :
  elasticnet : { }
services :
  elasticsearch :
    image : docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    ulimits :
      memlock :
        soft : -1
        hard : -1
    volumes :
      - /opt/ncc/etc/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /opt/ncc/etc/elasticsearch.role_mapping.yml:/usr/share/elasticsearch/config/x-pack/role_mapping.yml
      - /data/es:/usr/share/elasticsearch/data/
      - /opt/ncc/certs/elk:/usr/share/elasticsearch/config/certs
      - /backup:/usr/share/elasticsearch/backup
    ports :
      - 443:9200
    networks :
      - elasticnet
    environment :
      - ELASTIC_PASSWORD=<password>
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=elasticsearch
      - "ES_JAVA_OPTS=-Xms10g -Xmx10g"

I have to admit I'm having a hard time understanding what changes need to be made to this yml file based on the documentation...
elasticsearch.yml:

cluster.name                       : logging
network.host                       : 0.0.0.0
#discovery.zen.minimum_master_nodes : 1
#xpack.license.self_generated.type  : trial
discovery.zen.ping.unicast.hosts: ["0.0.0.0"]
# Added repo for snapshots 2019/12/04
path.repo: ["/usr/share/elasticsearch/backup"]

xpack.security.enabled: true
xpack.security.http.ssl.enabled : true
xpack.security.http.ssl.certificate :  /usr/share/elasticsearch/config/certs/service.crt
xpack.security.http.ssl.key :  /usr/share/elasticsearch/config/certs/service.key
xpack.security.transport.ssl.enabled : true
xpack.security.transport.ssl.verification_mode : certificate
xpack.security.transport.ssl.certificate_authorities : /usr/share/elasticsearch/config/certs/ca.crt

xpack:
  security:
    authc:
      realms:
        ldap.ldap1:
          order         : 0
          url           : "ldaps://ldap.jumpcloud.com:636"
          bind_dn       : "uid=application.bind,ou=Users,o=xx,dc=jumpcloud,dc=com"
          bind_password : "pwd"
          user_search :
            base_dn : "ou=Users,o=xx,dc=jumpcloud,dc=com"
          group_search :
            base_dn : "ou=Users,o=xx,dc=jumpcloud,dc=com"
          files:
            role_mapping : /usr/share/elasticsearch/config/x-pack/role_mapping.yml
          unmapped_groups_as_roles : false
          ssl.verification_mode    : none

Here is the error I'm getting. I'm truly lost...

{"type": "server", "timestamp": "2019-12-11T04:37:52,653Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "logging", "node.name": "18b0f317d9c9", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [elasticsearch] to bootstrap a cluster: have discovered [{18b0f317d9c9}{05w2lDl6RMyhX60gts6pFw}{NZ_NXQWHTmaTaJigYNhtvg}{192.168.240.2}{192.168.240.2:9300}{dilm}{ml.machine_memory=59087360000, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using from hosts providers and [{18b0f317d9c9}{05w2lDl6RMyhX60gts6pFw}{NZ_NXQWHTmaTaJigYNhtvg}{192.168.240.2}{192.168.240.2:9300}{dilm}{ml.machine_memory=59087360000, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 21052 in term 0" }

Since this is a one-node cluster, you should set cluster.initial_master_nodes: ["18b0f317d9c9"], because this node appears to be named 18b0f317d9c9.

This is mentioned in the upgrade instructions:

If upgrading from a 6.x cluster, you must also configure cluster bootstrapping by setting the cluster.initial_master_nodes setting on the master-eligible nodes.
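
For a one-node cluster that's a single line in elasticsearch.yml. A sketch, assuming the node keeps the name shown in your log:

cluster.initial_master_nodes: ["18b0f317d9c9"]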

Hi David,

Thanks for your response! I'm not sure why that node is named that. I did add it as you suggested, and then I added the node names that came up in subsequent errors. What I'm now struggling with is the X-Pack configuration. Is there a way I can start this up sans X-Pack? Every time I comment it out I seem to get tons of errors. I guess the first thing is to get the master-node problem solved:

#  curl -k -u elastic -XGET 'https://localhost/_cluster/health?pretty'
Enter host password for user 'elastic':
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}

Here is a log:

Created elasticsearch keystore in /usr/share/elasticsearch/config
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2019-12-11T16:46:12,369Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdc1)]], net usable_space [765.3gb], net total_space [1007.8gb], types [ext4]" }
{"type": "server", "timestamp": "2019-12-11T16:46:12,374Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "heap size [9.9gb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2019-12-11T16:46:13,454Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "node name [ebb0587dfe36], node ID [05w2lDl6RMyhX60gts6pFw], cluster name [logging]" }
{"type": "server", "timestamp": "2019-12-11T16:46:13,455Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "version[7.5.0], pid[1], build[default/docker/e9ccaed468e2fac2275a3761849cbee64b39519f/2019-11-26T01:06:52.518245Z], OS[Linux/3.10.0-862.14.4.el7.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.1/13.0.1+9]" }
{"type": "server", "timestamp": "2019-12-11T16:46:13,455Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
{"type": "server", "timestamp": "2019-12-11T16:46:13,455Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-5208846055925621279, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms10g, -Xmx10g, -XX:MaxDirectMemorySize=5368709120, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,619Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,619Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,620Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [flattened]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,620Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [frozen-indices]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,620Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [ingest-common]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,620Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [ingest-geoip]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,621Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [ingest-user-agent]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,621Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [lang-expression]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,621Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [lang-mustache]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,621Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [lang-painless]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,621Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [mapper-extras]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,622Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [parent-join]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,622Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [percolator]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,622Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [rank-eval]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,622Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [reindex]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,622Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [repository-url]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,623Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [search-business-rules]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,623Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [spatial]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,623Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [transform]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,623Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [transport-netty4]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,624Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [vectors]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,624Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-analytics]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,624Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-ccr]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,624Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-core]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,624Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-deprecation]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,625Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-enrich]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,625Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-graph]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,625Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-ilm]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,625Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-logstash]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,626Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-ml]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,626Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-monitoring]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,626Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-rollup]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,626Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-security]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,626Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,627Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,627Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2019-12-11T16:46:15,627Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "no plugins loaded" }
{"type": "server", "timestamp": "2019-12-11T16:46:19,237Z", "level": "INFO", "component": "o.e.x.s.a.l.LdapUserSearchSessionFactory", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "Realm [ldap1] is in user-search mode - base_dn=[ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com], search filter=[(uid={0})]" }
{"type": "server", "timestamp": "2019-12-11T16:46:19,274Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }

More of the log:

{"type": "server", "timestamp": "2019-12-11T16:46:19,818Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "[controller/263] [Main.cc@110] controller (64 bit): Version 7.5.0 (Build 17d1c724ca38a1) Copyright (c) 2019 Elasticsearch BV" }
{"type": "server", "timestamp": "2019-12-11T16:46:20,396Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
{"type": "server", "timestamp": "2019-12-11T16:46:20,547Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "using discovery type [zen] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2019-12-11T16:46:21,484Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "initialized" }
{"type": "server", "timestamp": "2019-12-11T16:46:21,484Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "starting ..." }
{"type": "server", "timestamp": "2019-12-11T16:46:21,692Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "publish_address {172.23.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
{"type": "server", "timestamp": "2019-12-11T16:46:22,412Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
{"type": "server", "timestamp": "2019-12-11T16:46:22,424Z", "level": "INFO", "component": "o.e.c.c.ClusterBootstrapService", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "skipping cluster bootstrapping as local node does not match bootstrap requirements: [elasticsearch]" }
{"type": "server", "timestamp": "2019-12-11T16:46:22,657Z", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "logging", "node.name": "ebb0587dfe36", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:41632}], closing connection",
"stacktrace": ["io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:473) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [netty-common-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.43.Final.jar:4.1.43.Final]",
"at java.lang.Thread.run(Thread.java:830) [?:?]",
"Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]",
"at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:311) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:258) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:951) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:940) ~[?:?]",
"at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:440) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.goServerHello(ClientHello.java:1243) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.consume(ClientHello.java:1179) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:851) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:812) ~[?:?]",
"at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396) ~[?:?]",
"at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1260) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1247) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:691) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1192) ~[?:?]",
"at io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1502) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1516) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1400) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1227) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:503) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"... 16 more"] }

elasticsearch.yml

cluster.name                       : logging
network.host                       : 0.0.0.0
#discovery.zen.minimum_master_nodes : 1
#xpack.license.self_generated.type  : trial
#discovery.zen.ping.unicast.hosts: ["0.0.0.0"]

cluster.initial_master_nodes: ["18b0f317d9c9","ebb0587dfe36","308667227fc6","1996b09cab71"]
# Added repo for snapshots 2019/12/04
path.repo: ["/usr/share/elasticsearch/backup"]
node.master: true
xpack.security.enabled: true
xpack.security.http.ssl.enabled : true
xpack.security.http.ssl.certificate :  /usr/share/elasticsearch/config/certs/service.crt
xpack.security.http.ssl.key :  /usr/share/elasticsearch/config/certs/service.key
xpack.security.transport.ssl.enabled : true
xpack.security.transport.ssl.verification_mode : certificate
xpack.security.transport.ssl.certificate_authorities : /usr/share/elasticsearch/config/certs/ca.crt

xpack:
  security:
    authc:
      realms:
        ldap.ldap1:
          order         : 0
          url           : "ldaps://ldap.jumpcloud.com:636"
          bind_dn       : "uid=application.bind,ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
#          bind_password : "pwd"
          user_search :
            base_dn : "ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
          group_search :
            base_dn : "ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
          files:
            role_mapping : /usr/share/elasticsearch/config/x-pack/role_mapping.yml
          unmapped_groups_as_roles : false
          ssl.verification_mode    : none

Yes, getting the master elected is definitely the top priority here. You say you have set cluster.initial_master_nodes to the node name, but that doesn't seem to be the case:

The node name is now ebb0587dfe36, and cluster.initial_master_nodes seems to be set to the string elasticsearch.

Every time I add a node name to cluster.initial_master_nodes, a new one shows up. I'm sure I'm missing something. AFAIK there was only one node. So yes, I added the one you mentioned, and then I kept adding the other names as they came up, but it keeps responding with more nodes. So my guess is this isn't quite the issue.

cluster.name                       : logging
network.host                       : 0.0.0.0
cluster.initial_master_nodes: ["18b0f317d9c9","ebb0587dfe36","308667227fc6","1996b09cab71","elasticsearch","e1164d22fa20"]
node.master: true

node.name defaults to the hostname, which seems to be rather unstable here. I'm guessing that's a Docker feature: by default a container's hostname is its container ID, which changes every time the container is recreated. I think in this case it's best to set node.name explicitly to override the default, and then set cluster.initial_master_nodes to the same thing.
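
For example, since your compose file already sets cluster.initial_master_nodes=elasticsearch in the environment, one option (a sketch, reusing that environment-variable style) would be to pin the node name there too, so the two values match:

    environment:
      - node.name=elasticsearch
      - cluster.initial_master_nodes=elasticsearch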

I'm not seeing the same type of errors in the log, but the cluster status is still complaining.

#  curl -k -u elastic -XGET 'https://localhost/_cluster/health?pretty'
Enter host password for user 'elastic':
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
{"type": "server", "timestamp": "2019-12-11T19:29:51,579Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2019-12-11T19:29:52,090Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "[controller/261] [Main.cc@110] controller (64 bit): Version 7.5.0 (Build 17d1c724ca38a1) Copyright (c) 2019 Elasticsearch BV" }
{"type": "server", "timestamp": "2019-12-11T19:29:52,647Z", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
{"type": "server", "timestamp": "2019-12-11T19:29:52,791Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "using discovery type [zen] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2019-12-11T19:29:53,695Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "initialized" }
{"type": "server", "timestamp": "2019-12-11T19:29:53,695Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "starting ..." }
{"type": "server", "timestamp": "2019-12-11T19:29:53,843Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "publish_address {192.168.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
{"type": "server", "timestamp": "2019-12-11T19:29:54,601Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
{"type": "server", "timestamp": "2019-12-11T19:29:54,613Z", "level": "INFO", "component": "o.e.c.c.ClusterBootstrapService", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "skipping cluster bootstrapping as local node does not match bootstrap requirements: [elasticsearch]" }
{"type": "server", "timestamp": "2019-12-11T19:29:54,828Z", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:51276}], closing connection",
{"type": "server", "timestamp": "2019-12-11T19:29:54,828Z", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "logging", "node.name": "[elasticsearch]", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:51276}], closing connection",
"stacktrace": ["io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:473) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-transport-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [netty-common-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.43.Final.jar:4.1.43.Final]",
"at java.lang.Thread.run(Thread.java:830) [?:?]",
"Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]",
"at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:311) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:258) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:951) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:940) ~[?:?]",
"at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:440) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.goServerHello(ClientHello.java:1243) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.consume(ClientHello.java:1179) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:851) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:812) ~[?:?]",
"at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396) ~[?:?]",
"at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1260) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1247) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:691) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1192) ~[?:?]",
"at io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1502) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1516) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1400) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1227) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:503) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]",
"... 16 more"] }

node.name still doesn't match cluster.initial_master_nodes. It looks like you have done something like this:

node.name: "[elasticsearch]"
cluster.initial_master_nodes: ["elasticsearch"]

I think you want this instead:

node.name: "elasticsearch"
cluster.initial_master_nodes: ["elasticsearch"]

Well... I did try node.name: "elasticsearch" already, but I might have missed the space after the ':'... so instead I did node.name: ["elasticsearch"]. I have no idea why it looked like "[elasticsearch]" in the log. Once I reverted back to node.name: "elasticsearch", the cluster status is all good.

Thank you for sticking with me. I really appreciate it.

So here is my current elasticsearch.yml, but I know there are X-Pack changes I need to make, and I'm not sure what works for 7.5. Should I just undo it and start from scratch with X-Pack? I tried commenting out all the X-Pack settings, but then the cluster wouldn't come up.

cluster.name                       : logging
network.host                       : 0.0.0.0
node.name: "elasticsearch"
cluster.initial_master_nodes: ["elasticsearch"]
node.master: true
# Added repo for snapshots 2019/12/04
path.repo: ["/usr/share/elasticsearch/backup"]

xpack.security.enabled: true
xpack.security.http.ssl.enabled : true
xpack.security.http.ssl.certificate :  /usr/share/elasticsearch/config/certs/service.crt
xpack.security.http.ssl.key :  /usr/share/elasticsearch/config/certs/service.key
xpack.security.transport.ssl.enabled : true
xpack.security.transport.ssl.verification_mode : certificate
xpack.security.transport.ssl.certificate_authorities : /usr/share/elasticsearch/config/certs/ca.crt

xpack:
  security:
    authc:
      realms:
        ldap.ldap1:
          order         : 0
          url           : "ldaps://ldap.jumpcloud.com:636"
          bind_dn       : "uid=application.bind,ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
#          bind_password : "pwd"
          user_search :
            base_dn : "ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
          group_search :
            base_dn : "ou=Users,o=54899d1f318ab54f7100d8f0,dc=jumpcloud,dc=com"
          files:
            role_mapping : /usr/share/elasticsearch/config/x-pack/role_mapping.yml
          unmapped_groups_as_roles : false
          ssl.verification_mode    : none

I'm not sure what other changes you need to make here. Is it working as you expect? If so, great; if not, what's the problem? In any case, I'm not the best person to help with these xpack.security settings, so I'd recommend starting another thread focusing on your issues with those.

Ok! Thanks. I’m getting connection errors at the moment and so I have to redo that part. I’ll start another thread. I really appreciate the help.
