Lucene's relevance scoring is based on TF*IDF-style formulas. That means that at the foundation, three values matter most in scoring:

Term Frequency (TF): how many times does "test" occur here

Inverse Document Frequency (IDF): How rare is "test"? Rare terms (low document frequency, which means high IDF) receive a higher score than common ones

Field norms: How short is the text? "test" occurring once in a short snippet matters much more to that snippet than "test" occurring once in a lengthy book.
These numbers are multiplied together to measure how weighty "test" is in the text being scored.
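To make those three raw statistics concrete, here's a toy sketch (plain Python, not Lucene itself; the corpus and tokenization are made up for illustration) that gathers them for the term "test" over two documents:

```python
# Toy illustration: collect the three raw statistics for "test"
# over a tiny two-document corpus.
docs = [
    "test of a test harness",          # short doc, "test" appears twice
    "a much longer document that mentions test only once "
    "amid many other words in its body",
]
term = "test"

tokenized = [d.split() for d in docs]
tf = [toks.count(term) for toks in tokenized]            # term frequency per doc
doc_freq = sum(1 for toks in tokenized if term in toks)  # docs containing the term
lengths = [len(toks) for toks in tokenized]              # field length per doc

print(tf, doc_freq, lengths)  # → [2, 1] 2 [5, 16]
```

Note how the short document gets "test" twice in only five tokens, while the long one mentions it once among sixteen. The field norm is exactly what lets scoring reward the former.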
That's just the tip of the iceberg. First, TF, IDF, and field norms by themselves aren't directly proportional to relevance, so the various "similarities," as they're called, scale them. Instead of taking these numbers directly, the TF, IDF, and field norm contributions are computed as
TF score = sqrt(tf)
IDF score = 1 + log( numDocs / (docFreq + 1) )
fieldnorms = 1/sqrt(length)
where
numDocs = total number of docs in the collection
length = length of the document in terms of positions (which may or may not discount overlaps)
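The three scaled pieces above can be sketched in a few lines. This is only an illustration of the formulas as written, with made-up term statistics (tf = 2, docFreq = 10, numDocs = 1000, length = 5), not a reproduction of Lucene's actual implementation:

```python
import math

def tf_score(tf):
    # sqrt dampens repeated occurrences of the term
    return math.sqrt(tf)

def idf_score(num_docs, doc_freq):
    # rarer terms (smaller docFreq) score higher
    return 1 + math.log(num_docs / (doc_freq + 1))

def field_norm(length):
    # shorter fields score higher
    return 1 / math.sqrt(length)

# Multiply the pieces together, as described above
score = tf_score(2) * idf_score(1000, 10) * field_norm(5)
print(score)
```

Doubling the term frequency here only raises the TF contribution by sqrt(2), and doubling the document length only halves the norm's square: all three curves deliberately flatten out rather than growing linearly.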
Now, there are so many gotchas and caveats here that I really should just point you at several places to read more about this.
First, probably the most detailed place to read about this topic as it pertains to Lucene is my relevance book. We dedicate quite a bit of space to the topic.
Second, the Lucene & ES community has several well-written articles on this topic.
Finally, you should know that "TF*IDF" is being replaced as the default scoring computation by something called BM25 in the next major Lucene version. BM25 is still based on the same statistics, but the computation has been shown experimentally to be far more robust. It's also more complex. I would recommend learning about BM25 at the following places.
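To give a flavor of BM25, here's a sketch of its per-term score under the common textbook parameterization (k1 = 1.2, b = 0.75; the parameter names and this IDF variant are the usual ones from the literature, and Lucene's exact formulation may differ in small details):

```python
import math

def bm25_term_score(tf, doc_len, avg_doc_len, num_docs, doc_freq,
                    k1=1.2, b=0.75):
    # IDF: still rewards rare terms, as in classic TF*IDF
    idf = math.log(1 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    # TF with saturation: grows with tf but approaches a ceiling of (k1 + 1),
    # and is penalized for docs longer than the collection average (via b)
    norm_tf = (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm_tf
```

The key behavioral difference from sqrt(tf): no matter how many times the term repeats, the TF component saturates toward k1 + 1, so a document can't win on raw repetition alone.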
Hope that's useful! It's really just the beginning of an explanation of a bit of an intricate topic