Thank you for posting query on Microsoft Q&A Platform.
If you don’t have any Spark pools currently configured in your Synapse Analytics workspace, there is nothing to "update" — there are no existing pools to act on. Instead, you can create a new Spark pool with Apache Spark 3.4 as its version. Here’s how to proceed:
Steps to Create a New Spark Pool with Apache Spark 3.4:
- Navigate to Synapse Studio:
  - Open your Synapse workspace and go to the Manage hub.
- Create a new Apache Spark pool:
  - Under the Apache Spark pools section, click + New.
  - In the Apache Spark version dropdown, select 3.4.
  - Configure the other settings (node size, autoscale, etc.) as per your requirements.
  - Click Create to provision the new Spark pool.
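If you prefer to script the same steps, the pool can also be provisioned with the Azure CLI. This is a minimal sketch; the pool name, workspace name, and resource group below are placeholders you would replace with your own values:

```shell
# Create an Apache Spark 3.4 pool via the Azure CLI (synapse extension).
# <your-workspace> and <your-resource-group> are placeholders.
az synapse spark pool create \
  --name sparkpool34 \
  --workspace-name <your-workspace> \
  --resource-group <your-resource-group> \
  --spark-version 3.4 \
  --node-count 3 \
  --node-size Medium
```

The node count and size shown are illustrative defaults; adjust them to match your workload, just as you would in the portal.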
Is there any dependency validation required for this?
- No dependency validation is required during the creation of the Spark pool itself.
- If you plan to use existing notebooks, pipelines, or jobs that were written for Spark 3.3, you may need to validate and test those with Spark 3.4 to ensure compatibility. Spark version updates can sometimes introduce changes to libraries or behaviors, so reviewing the release notes of Apache Spark 3.4 and testing your workloads is recommended.
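As a simple guard when migrating notebooks, you can compare the version the pool reports against the minimum version your workloads were validated on. This is a hypothetical helper, not part of any Synapse API; in a Synapse notebook you would pass it `spark.version`, while the string below is a stand-in for illustration:

```python
def is_compatible(spark_version: str, minimum: str = "3.4") -> bool:
    """Return True if spark_version is at least `minimum` (major.minor)."""
    # Compare only the major and minor components, e.g. "3.4.1" -> (3, 4).
    parse = lambda v: tuple(int(p) for p in v.split(".")[:2])
    return parse(spark_version) >= parse(minimum)

# In a Synapse notebook: is_compatible(spark.version)
print(is_compatible("3.4.1"))  # True  -- pool runs the expected runtime
print(is_compatible("3.3.2"))  # False -- workload needs revalidation
```

A check like this is only a first gate; behavioral changes between Spark 3.3 and 3.4 still require running your actual workloads against the new pool.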
Once the pool is created, you can start running workloads on Apache Spark 3.4.
For reference, see the documentation on Azure Synapse runtimes for Apache Spark.
Hope this helps. If you have any further queries, do let us know.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful".