[KYUUBI #6876] Fix hadoopConf for autoCreateFileUploadPath
### Why are the changes needed?

This change fixes two issues:

1. `KyuubiHadoopUtils.newHadoopConf` should `loadDefaults`, otherwise `core-site.xml` and `hdfs-site.xml` won't take effect.
2. To make it aware of Hadoop conf hot reload, we should use `KyuubiServer.getHadoopConf()`.

### How was this patch tested?

Manual test that `core-site.xml` takes effect, which it previously did not.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7102 from pan3793/6876-followup.

Closes #6876

24989d688 [Cheng Pan] [KYUUBI #6876] Fix hadoopConf for autoCreateFileUploadPath

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
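The first fix hinges on what the `loadDefaults` constructor flag does: when it is `false`, settings shipped in the default resource files (`core-site.xml`, `hdfs-site.xml`) are never read, so only explicitly set properties exist. A minimal, hypothetical sketch of that semantics (in Java, modeling the behavior rather than using Hadoop's actual `Configuration` class; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for Hadoop's Configuration, showing only the
// loadDefaults semantics relevant to this fix. Not Kyuubi/Hadoop code.
class MiniConf {
    // Stand-in for default resources Hadoop reads from the classpath
    // (core-site.xml, hdfs-site.xml, ...).
    static final Map<String, String> DEFAULT_RESOURCES =
        Map.of("fs.defaultFS", "hdfs://namenode:8020");

    private final Map<String, String> props = new HashMap<>();

    MiniConf(boolean loadDefaults) {
        if (loadDefaults) {
            // Only when the flag is true do site-file settings become visible.
            props.putAll(DEFAULT_RESOURCES);
        }
    }

    String get(String key, String fallback) {
        return props.getOrDefault(key, fallback);
    }
}

public class LoadDefaultsDemo {
    public static void main(String[] args) {
        MiniConf withDefaults = new MiniConf(true);
        MiniConf withoutDefaults = new MiniConf(false);
        // The loadDefaults=false conf falls back to the hardcoded default,
        // which is exactly the bug: core-site.xml appeared to be ignored.
        System.out.println(withDefaults.get("fs.defaultFS", "file:///"));
        System.out.println(withoutDefaults.get("fs.defaultFS", "file:///"));
    }
}
```

This mirrors why the old `newHadoopConf(conf, loadDefaults = false)` call made `core-site.xml` appear to have no effect.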
This commit is contained in: parent 4e40f9457d, commit efe9f5552c
```diff
@@ -41,7 +41,8 @@ import org.apache.kyuubi.ha.HighAvailabilityConf
 import org.apache.kyuubi.ha.HighAvailabilityConf.HA_ZK_ENGINE_AUTH_TYPE
 import org.apache.kyuubi.ha.client.AuthTypes
 import org.apache.kyuubi.operation.log.OperationLog
-import org.apache.kyuubi.util.{JavaUtils, KubernetesUtils, KyuubiHadoopUtils, Validator}
+import org.apache.kyuubi.server.KyuubiServer
+import org.apache.kyuubi.util.{JavaUtils, KubernetesUtils, Validator}
 import org.apache.kyuubi.util.command.CommandLineUtils._

 class SparkProcessBuilder(
@@ -287,7 +288,7 @@ class SparkProcessBuilder(
     // Create the `uploadPath` using permission 777, otherwise, spark just creates the
     // `$uploadPath/spark-upload-$uuid` using default permission 511, which might prevent
     // other users from creating the staging dir under `uploadPath` later.
-    val hadoopConf = KyuubiHadoopUtils.newHadoopConf(conf, loadDefaults = false)
+    val hadoopConf = KyuubiServer.getHadoopConf()
     val path = new Path(uploadPath)
     var fs: FileSystem = null
     try {
```
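The second fix, switching to `KyuubiServer.getHadoopConf()`, buys hot reload because call sites stop caching their own conf and instead fetch the current one through a shared accessor. A hypothetical sketch of that pattern (in Java; the names `ConfHolder`, `getConf`, and `reload` are illustrative, not Kyuubi's API):

```java
import java.util.Map;

// Illustrative hot-reload pattern: callers always read through an accessor
// backed by a volatile reference, so a background reloader can swap in a
// fresh configuration and every subsequent call observes it immediately.
class ConfHolder {
    private static volatile Map<String, String> current =
        Map.of("fs.defaultFS", "hdfs://nn-old:8020");

    // What call sites should use instead of building their own conf once.
    static Map<String, String> getConf() {
        return current;
    }

    // Invoked by a watcher when the site files change on disk.
    static void reload(Map<String, String> fresh) {
        current = fresh; // volatile write publishes the new snapshot
    }
}
```

By contrast, the old code built a fresh, defaults-free conf at each call site, so it neither saw the site files nor any reloaded values.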