# 🔍 Description

## Issue References 🔗

This pull request fixes #6437

## Describe Your Solution 🔧

Use `org.apache.hadoop.fs.Path` instead of `java.nio.file.Paths` to avoid an unexpected scheme change in `OPERATION_RESULT_SAVE_TO_FILE_DIR`.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

With `kyuubi.operation.result.saveToFile.dir=jfs://datalake/tmp` configured, the Spark job failed to start with the error `java.io.IOException: JuiceFS initialized failed for jfs:///`. A dir such as `hdfs://xxx:port/tmp` may hit similar errors.

#### Behavior With This Pull Request 🎉

Users can set an HDFS directory as `kyuubi.operation.result.saveToFile.dir` without error.

#### Related Unit Tests

It seems no test suites were added in #5591 and #5986; I'll try to build a dist and test with our internal cluster.

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6444 from camper42/save-to-hdfs.

Closes #6437

990f0a728 [camper42] [Kyuubi #6437] Fix Spark engine query result save to HDFS

Authored-by: camper42 <camper.xlii@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
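To illustrate why the change matters, here is a minimal sketch of the scheme-mangling behavior, using only `java.nio` so it runs without Hadoop on the classpath. The `SchemePathDemo` class and the `engine-result` child name are illustrative, not taken from the Kyuubi code:

```java
import java.nio.file.Paths;

public class SchemePathDemo {
    public static void main(String[] args) {
        // The configured result dir carries a filesystem scheme (JuiceFS here).
        String resultDir = "jfs://datalake/tmp";

        // java.nio.file.Paths treats the whole string as a local path and
        // normalizes consecutive slashes, silently breaking the "scheme://" prefix.
        String nioJoined = Paths.get(resultDir, "engine-result").toString();
        System.out.println(nioJoined); // on Linux: "jfs:/datalake/tmp/engine-result"

        // org.apache.hadoop.fs.Path, by contrast, parses the string as a URI and
        // preserves the scheme and authority when resolving a child:
        //   new org.apache.hadoop.fs.Path(resultDir, "engine-result")
        //     -> "jfs://datalake/tmp/engine-result"
        // (left as a comment to keep this snippet dependency-free)
    }
}
```

Because the mangled path no longer carries a valid URI, the filesystem client falls back to an empty authority, which matches the `jfs:///` seen in the reported `IOException`.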