Snowflake - PySpark numPartitions support

We're attempting to run a Snowflake query with PySpark. We set numPartitions to 10 and submitted the Spark job, but when I checked the Snowflake History tab, as far as I can tell only one query is being executed rather than ten.

Is the numPartitions option supported when reading from Snowflake with Spark? The sample code we used is shown below.

sfOptions = dict()
sfOptions["url"] ="jdbc:snowflake://**************"
sfOptions["user"] ="**01d"
sfOptions["private_key_file"] = key_file
sfOptions["private_key_file_pwd"] = key_passphrase
sfOptions["db"] ="**_DB"
sfOptions["warehouse"] ="****_WHS"
sfOptions["schema"] ="***_SHR"
sfOptions["role"] ="**_ROLE"
sfOptions["partitionColumn"] = "***_TRANS_ID"
sfOptions["lowerBound"] = lowerbound
sfOptions["upperBound"] = upperbound


df = spark.read.format('jdbc') \
    .options(**sfOptions) \
    .option("query", "select * from ***_shr.SPRK_TST as f") \
    .load()

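For reference, this is roughly how a partitioned read through Spark's generic JDBC source is usually wired up (the account, table, and column names below are placeholders, not our real ones, and authentication options are omitted). As far as I understand, numPartitions has to be passed explicitly alongside partitionColumn/lowerBound/upperBound, and the generic JDBC source expects dbtable rather than query when partitionColumn is set:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-jdbc-partitioned-read").getOrCreate()

# Sketch only: placeholder URL, user, table, and column names; auth options omitted.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:snowflake://<account>.snowflakecomputing.com")
    .option("driver", "net.snowflake.client.jdbc.SnowflakeDriver")
    .option("user", "<user>")
    .option("dbtable", "MY_SCHEMA.SPRK_TST")    # partitioned reads use dbtable, not query
    .option("partitionColumn", "MY_TRANS_ID")   # must be a numeric, date, or timestamp column
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "10")              # Spark splits the bound range into up to 10 queries
    .load()
)

print(df.rdd.getNumPartitions())  # should report the number of read partitions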
How is this related to Elasticsearch or the Elastic Stack?


Welcome to our community! :smiley:

As Christian mentions, this isn't related to the Elastic Stack, so we cannot help, sorry. You will need to find a Snowflake community and ask there.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.