Loading data into SQL Server via PySpark fails with a connection error.
Hi everyone, I'm trying to load data into SQL Server via PySpark, but the job keeps failing and I don't know what else to try. I've tested it with several cluster versions with the same result. Any help is appreciated, thanks.
The error returned is below:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 31 in stage 1.0 failed 4 times, most recent failure: Lost task 31.3 in stage 1.0 (TID 109) (10.139.64.9 executor 8): com.microsoft.sqlserver.jdbc.SQLServerException: Database 'dw' on server 'dw' is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of '27977670-F9B7-42AF-8F1F-3E3EB972FA65'
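A note on what the error usually means: "Database 'dw' on server 'dw' is not currently available. Please retry the connection later" is an availability error raised by Azure SQL itself, not by Spark, and is typically transient (for example, a serverless database resuming from auto-pause, or a failover/scaling operation in progress). One common mitigation is to retry the write with backoff once the database comes back online. The sketch below is illustrative only, assuming a generic retry helper; the connection URL, table name, and credentials are placeholders, not values from this post:

```python
import time


def retry_with_backoff(action, max_attempts=4, base_delay=1.0):
    """Retry a zero-argument callable on failure, with exponential backoff.

    In this scenario `action` would wrap the Spark JDBC write, so a
    transient "database not currently available" error gets retried
    after the database has had time to resume.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of retries: surface the original error
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)


# The wrapped Spark write would look roughly like this (all names and
# option values below are hypothetical placeholders):
#
# retry_with_backoff(lambda: (
#     df.write.format("jdbc")
#       .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=dw")
#       .option("dbtable", "dbo.my_table")
#       .option("user", "<user>")
#       .option("password", "<password>")
#       .mode("append")
#       .save()
# ))
```

If the database is serverless with auto-pause enabled, the first connection after a pause is expected to fail this way, and a retry after resume usually succeeds; if the error persists across retries, it is worth checking the database's status in the Azure portal rather than the Spark side.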