We're trying to run a Snowflake query with PySpark. We set numPartitions to 10 and submitted the Spark job, but when I checked the Snowflake History tab, only one query was executed rather than ten.
Is the numPartitions option supported by the Snowflake-Spark connection? The sample code we used is shown below.
sfOptions = dict()
sfOptions["url"] = "jdbc:snowflake://**************.privatelink.snowflakecomputing.com"
sfOptions["user"] = "**01d"
sfOptions["private_key_file"] = key_file
sfOptions["private_key_file_pwd"] = key_passphrase
sfOptions["db"] = "**_DB"
sfOptions["warehouse"] = "****_WHS"
sfOptions["schema"] = "***_SHR"
sfOptions["role"] = "**_ROLE"
sfOptions["numPartitions"] = "10"
sfOptions["partitionColumn"] = "***_TRANS_ID"
sfOptions["lowerBound"] = lowerbound
sfOptions["upperBound"] = upperbound
print(sfOptions)

df = spark.read.format("jdbc") \
    .options(**sfOptions) \
    .option("query", "select * from ***_shr.SPRK_TST as f") \
    .load()
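For context on what we expected to happen: when Spark's generic JDBC source does parallelize a read, the partitioning options (`partitionColumn`, `lowerBound`, `upperBound`, `numPartitions`) are used to turn the numeric range into one WHERE clause per partition, so the database sees one query per partition. Note also that in recent Spark versions the `query` option cannot be combined with `partitionColumn`; a `dbtable` is required for partitioned reads. The sketch below mimics that range-splitting in plain Python to show the queries we expected to see in Snowflake's history; the column name and bounds are placeholders, and Spark's exact generated predicates may differ slightly.

```python
# Simplified sketch of how Spark's JDBC source splits a numeric
# partitionColumn range into per-partition WHERE clauses.
# Illustrative only; not the connector's exact internal logic.

def partition_predicates(column, lower, upper, num_partitions):
    stride = (upper - lower) // num_partitions
    preds = []
    for i in range(num_partitions):
        lo = lower + i * stride
        if i == 0:
            # First partition also picks up NULLs in the partition column.
            preds.append(f"{column} < {lo + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # Last partition is open-ended so no rows above upperBound are lost.
            preds.append(f"{column} >= {lo}")
        else:
            preds.append(f"{column} >= {lo} AND {column} < {lo + stride}")
    return preds

# 10 partitions over IDs 1..1_000_000 -> 10 parallel queries,
# each fetching one slice of TRANS_ID (placeholder column name).
for p in partition_predicates("TRANS_ID", 1, 1_000_000, 10):
    print(p)
```

Each predicate would be appended to a `SELECT ... WHERE ...` issued by one Spark task, which is why a working partitioned read shows up as multiple concurrent queries in the warehouse history.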