Multi-node cluster only finds one node

I installed Elasticsearch 8.15 and set up a cluster with two nodes. I configured elasticsearch.yml like this:

discovery.seed_hosts: ["cvnode1","cvnode2"]
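For context, a two-node setup like mine would have a discovery section roughly like this (the node names are mine; the other lines are illustrative, not copied verbatim from my file):

```yaml
# elasticsearch.yml (sketch; same on both nodes, with node.name adjusted per host)
cluster.name: ElasticSearch
node.name: cvnode1                                # cvnode2 on the second host
discovery.seed_hosts: ["cvnode1", "cvnode2"]
# Needed only when bootstrapping a brand-new cluster; remove afterwards:
cluster.initial_master_nodes: ["cvnode1", "cvnode2"]
```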

But when I checked the cluster health, it reported only one node.

curl -X GET -k -u <username>:<password> 'https://localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "ElasticSearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 255,
  "active_shards" : 255,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 188,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 57.56207674943566
}

How do I add the other node?
Thanks

I think there'll be information in the logs to help you here - look for a message that starts with "This node is a fully-formed single-node cluster".
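You can also see exactly which nodes have actually joined with the _cat/nodes API (same flags and credentials as your health check):

```shell
curl -X GET -k -u <username>:<password> \
  'https://localhost:9200/_cat/nodes?v'
```

If only one row comes back, the second node never joined.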

Yes, you are right. I see this message in the log:

[cvnode1] This node is a fully-formed single-node cluster with cluster UUID [vtvB0VheSSusx4MyuRy74g], but it is configured as if to discover other nodes and form a multi-node cluster via the [discovery.seed_hosts=[cvnode1, cvnode2]] setting. Fully-formed clusters do not attempt to discover other nodes, and nodes with different cluster UUIDs cannot belong to the same cluster. The cluster UUID persists across restarts and can only be changed by deleting the contents of the node's data path(s). Remove the discovery configuration to suppress this message.
[cvnode1] failed to validate incoming join request from node [{cvnode2}{jkx38DGkQcqlcpxZy6Ih2w}{wcB_0vAdSlKtfszgsTxMgA}{cvnode2}{<IP>}{<IP>:9300}{dimrs}{8.15.0}{7000099-8512000}{ml.config_version=12.0.0, xpack.installed=true, transform.config_version=10.0.0}]
org.elasticsearch.transport.RemoteTransportException: [cvnode2][<IP>:9300][internal:cluster/coordination/join/validate]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: This node previously joined a cluster with UUID [nTodcM8ER6aQhXegTujWWA] and is now trying to join a different cluster with UUID [vtvB0VheSSusx4MyuRy74g]. This is forbidden and usually indicates an incorrect discovery or cluster bootstrapping configuration. Note that the cluster UUID persists across restarts and can only be changed by deleting the contents of the node's data path [/var/lib/elasticsearch] which will also remove any data held by this node.

The cluster could discover both nodes before, but after I upgraded the ELK stack it cannot find the second node.

It looks like you forgot to remove cluster.initial_master_nodes, so the node bootstrapped a new cluster on its own. See also these docs.
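Concretely, once a cluster has successfully formed for the first time, that setting should be removed (or commented out) on every node before any later restart or upgrade - a sketch, using your node names:

```yaml
discovery.seed_hosts: ["cvnode1", "cvnode2"]
# Remove after the first successful bootstrap:
# cluster.initial_master_nodes: ["cvnode1", "cvnode2"]
```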

Thank you. I fixed the issue by removing the data under path.data.
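For anyone who hits this later, the steps on the node that refused to join (cvnode2 in my case) are roughly the following - be careful, this deletes everything that node stores locally; the path is the one from the log above:

```shell
# On the node that refuses to join ONLY - this wipes its local data!
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*
sudo systemctl start elasticsearch
```

After restarting, the node joins the existing cluster as an empty node and replica shards are copied back to it.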