Spark Exception: There is no Credential Scope

I am new to Databricks and am trying to connect to RStudio Server running on my all-purpose compute cluster.

Here is the cluster configuration:

Policy: Personal Compute

Access mode: Single user

Databricks Runtime version: 13.2 ML (includes Apache Spark 3.4.0, Scala 2.12)

Unity Catalog is also configured in our workspace.

Following the instructions here, I am trying to run the code with both sparklyr and SparkR.

sparklyr

> library(sparklyr)
> sc <- spark_connect(method = "databricks")

However, I receive the following error:

Error in value[[3L]](cond) : 
  Failed to start sparklyr backend: java.util.concurrent.ExecutionException: org.apache.spark.SparkException: There is no Credential Scope. 
    at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
    at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
    at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
    at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
    at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2344)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
    at com.google.common.cache.Loc...
In addition: Warning messages:
1: In file.create(to[okay]) :
  cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.2-2.11.jar', reason 'Permission denied'
2: In file.create(to[okay]) :
  cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.1-2.11.jar', reason 'Permission denied'
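
I am not sure whether the 'Permission denied' warnings at the end are related. If sparklyr needs a writable library to copy its jars into, I assume something like the following would be the workaround (just a sketch, with a guessed user-library path); I have not been able to verify that it changes anything:

> dir.create("~/R/library", recursive = TRUE, showWarnings = FALSE)
> .libPaths(c("~/R/library", .libPaths()))
> install.packages("sparklyr")  # reinstall sparklyr into the writable user library
> library(sparklyr)
> sc <- spark_connect(method = "databricks")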

SparkR

> library(SparkR) 
> sparkR.session() 
Java ref type org.apache.spark.sql.SparkSession id 1
> df <- SparkR::sql("SELECT * FROM default.diamonds LIMIT 2")

Error Traceback:

Error in handleErrors(returnStatus, conn) : 
  org.apache.spark.sql.AnalysisException: There is no Credential Scope. ; line 1 pos 14
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:69)
    at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:172)
    at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:94)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:219)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:219)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:372)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scal

I am not an R developer, so I cannot really experiment with different configurations. I tried setting up a personal access token, with no luck.
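
For reference, this is roughly how I tried to pass the token (I am assuming the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are what gets picked up; the host URL and token below are placeholders):

> Sys.setenv(DATABRICKS_HOST  = "https://<my-workspace>.cloud.databricks.com",
+            DATABRICKS_TOKEN = "<personal-access-token>")
> library(sparklyr)
> sc <- spark_connect(method = "databricks")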

Any help would be appreciated. Thanks in advance :)
