master
836 Commits
**b31663f569** [KYUUBI #7163][SPARK] Check whether engine context stopped in engine terminating checker

### Why are the changes needed?

To close #7163: this PR makes the engine terminating checker also check whether the engine's Spark context has stopped.

1. The Spark context stopped due to an OOM in `spark-listener-group-shared`, which called `tryOrStopSparkContext`:

   ```
   25/08/03 19:08:06 ERROR Utils: uncaught error in thread spark-listener-group-shared, stopping SparkContext
   java.lang.OutOfMemoryError: GC overhead limit exceeded
   25/08/03 19:08:06 INFO OperationAuditLogger: operation=a7f134b9-373b-402d-a82b-2d42df568807 opType=ExecuteStatement state=INITIALIZED user=b_hrvst session=6a90d01c-7627-4ae6-a506-7ba826355489
   ...
   25/08/03 19:08:23 INFO SparkSQLSessionManager: Opening session for b_hrvst10.147.254.115
   25/08/03 19:08:23 ERROR SparkTBinaryFrontendService: Error opening session:
   org.apache.kyuubi.KyuubiSQLException: Cannot call methods on a stopped SparkContext.
   This stopped SparkContext was created at:
   org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:951)
   org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:337)
   org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:415)
   org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
   sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   java.lang.reflect.Method.invoke(Method.java:498)
   org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)

   The currently active SparkContext was created at:
   org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:951)
   org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:337)
   org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:415)
   org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
   sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   java.lang.reflect.Method.invoke(Method.java:498)
   org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)
     at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
     at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:73)
   ```

2. The Kyuubi engine stopped after 12 hours:

   ```
   25/08/04 07:13:25 ERROR ZookeeperDiscoveryClient: Zookeeper client connection state changed to: LOST, but failed to reconnect in 3 seconds. Give up retry and stop gracefully.
   25/08/04 07:13:25 INFO ClientCnxn: Session establishment complete on server zeus-slc-zk-3.vip.hadoop.ebay.com/10.147.141.240:2181, sessionid = 0x3939e22c983032e, negotiated timeout = 40000
   25/08/04 07:13:25 INFO ConnectionStateManager: State change: RECONNECTED
   25/08/04 07:13:25 INFO ZookeeperDiscoveryClient: Zookeeper client connection state changed to: RECONNECTED
   25/08/04 07:13:25 INFO SparkSQLEngine: Service: [SparkTBinaryFrontend] is stopping.
   25/08/04 07:13:25 INFO SparkTBinaryFrontendService: Service: [EngineServiceDiscovery] is stopping.
   25/08/04 07:13:25 WARN EngineServiceDiscovery: The Zookeeper ensemble is LOST
   25/08/04 07:13:25 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is stopped.
   25/08/04 07:13:25 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is stopped.
   25/08/04 07:13:25 INFO SparkTBinaryFrontendService: SparkTBinaryFrontend has stopped
   25/08/04 07:13:25 INFO SparkSQLEngine: Service: [SparkSQLBackendService] is stopping.
   25/08/04 07:13:25 INFO SparkSQLBackendService: Service: [SparkSQLSessionManager] is stopping.
   25/08/04 07:13:25 INFO SparkSQLSessionManager: Service: [SparkSQLOperationManager] is stopping.
   25/08/04 07:13:45 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is stopped.
   25/08/04 07:13:45 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is stopped.
   ```

3. It seems the shutdown hook does not work in such a case.
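The gist of the fix can be sketched as follows (a Python sketch with hypothetical names — the real checker lives in the Scala engine code): the periodic terminating checker should also consult the context's stopped flag, so that an engine whose SparkContext died (e.g., from the OOM above) terminates even when the shutdown hook never runs.

```python
class EngineTerminatingChecker:
    """Minimal sketch, hypothetical names: besides the existing
    deadline/idleness checks, also stop the engine as soon as the
    SparkContext reports it has stopped, so an OOM-killed context
    no longer leaves a zombie engine process."""

    def __init__(self, is_context_stopped):
        # () -> bool, standing in for SparkContext.isStopped
        self.is_context_stopped = is_context_stopped
        self.engine_stopped = False

    def check(self):
        # The check this PR adds (sketched): a stopped context can never
        # serve sessions again, so terminate the engine immediately.
        if self.is_context_stopped():
            self.engine_stopped = True
        return self.engine_stopped

# A context that died (e.g. OOM in a listener thread) triggers shutdown:
checker = EngineTerminatingChecker(is_context_stopped=lambda: True)
assert checker.check() is True
```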
---

**9a0c49e791** [KYUUBI #7138] Respect `kyuubi.session.engine.spark.initialize.sql` set by client in shared engine mode

### Why are the changes needed?

**Current behavior:** When `kyuubi.engine.share.level = USER/GROUP/SERVER`, the first client (Client A) calling `openSession` creates a Kyuubi Spark SQL engine (Spark driver), where the initialization SQL configured in `kyuubi.session.engine.spark.initialize.sql` takes effect. Subsequent clients (e.g., Client B) connecting via `openSession` reuse the engine created in step 1, where the initialization SQL configured in `kyuubi.session.engine.spark.initialize.sql` has no effect.

**Why this capability is needed:** Currently, `kyuubi.session.engine.spark.initialize.sql` only applies to the first `openSession` client; all subsequent SQL operations inherit the initialization SQL configuration from that first client (this appears to be a bug). Client A may need to set `USE dbA` in its current SQL context, while Client B may need `USE dbB` in its own context; such scenarios should be supported.

### How was this patch tested?

Tested on a local Kyuubi/Spark cluster. No existing unit tests cover this scenario; please point me to any relevant tests so I can add them.

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #7138 from 1358035421/lc/spark_session_init_sql.

Closes #7138

```
338d8aace [Cheng Pan] remove dash
1beecc456 [Cheng Pan] fix
6c7f9a13e [liangzhaoyuan] update migration-guide.md
492adb6c4 [liangzhaoyuan] fix review comments
f0e9320be [1358035421] Merge branch 'master' into lc/spark_session_init_sql
021455322 [liangzhaoyuan] update migration-guide.md
b4e61cf89 [liangzhaoyuan] ut
ca4c71253 [Cheng Pan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/session/SparkSQLSessionManager.scala
da92544f1 [liangzhaoyuan] fix
c1a38d584 [liangzhaoyuan] Support executing kyuubi.session.engine.spark.initialize.sql on session initialization

Lead-authored-by: liangzhaoyuan <lwlzyl19940916@gmail.com>
Co-authored-by: Cheng Pan <chengpan@apache.org>
Co-authored-by: 1358035421 <13588035421@163.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
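The behavior change can be sketched in Python (hypothetical names and plumbing — the real implementation is Scala in the Spark engine's session manager): the initialize SQL is resolved from the *session's* conf, with the connecting client's value overriding the engine default, and it runs on every session open rather than only at engine bootstrap.

```python
# Engine-level default, as baked in at bootstrap (hypothetical value):
ENGINE_CONF = {"kyuubi.session.engine.spark.initialize.sql": "USE dbA"}

def open_session(client_conf, execute):
    """Sketch: run the session-level initialize SQL on *every* session
    open, preferring the conf the connecting client supplied."""
    conf = {**ENGINE_CONF, **client_conf}  # client conf overrides defaults
    sql = conf.get("kyuubi.session.engine.spark.initialize.sql", "")
    for stmt in sql.split(";"):
        if stmt.strip():
            execute(stmt.strip())

# Client B passes its own initialize SQL and gets its own context:
ran_b = []
open_session({"kyuubi.session.engine.spark.initialize.sql": "USE dbB"},
             ran_b.append)
assert ran_b == ["USE dbB"]
```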
---

**9a50bfa814** [KYUUBI #7158] Spark engine respects session-level idle timeout threshold

### Why are the changes needed?

Fixes the same class of issue as https://github.com/apache/kyuubi/pull/7138.

Previously, `sessionIdleTimeoutThreshold` was initialized only once during session creation using `sessionManager.getConf`, preventing dynamic updates when clients pass new configurations during connection. We now:

- Allow clients to set a session-specific `kyuubi.session.idle.timeout` during connection
- Dynamically adjust the idle timeout per session
- Prevent connection pile-up by recycling idle sessions in a timely manner

Closes #7158 from 1358035421/lc/sessio_idle_timeout_threshold.

Closes #7158

```
abe513eed [liangzhaoyuan] fix review comments
3face844a [liangzhaoyuan] Use per-session idle timeout threshold instead of global sessionManager's value

Authored-by: liangzhaoyuan <lwlzyl19940916@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
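The resolution order can be sketched in Python (hypothetical plumbing; the real code is Scala, and the 6-hour default below is an assumption for illustration): a client-supplied value wins for its own session, otherwise the session manager's global value applies.

```python
def idle_timeout_ms(session_conf, manager_conf):
    """Per-session idle timeout: the value the client passed at connect
    time wins; otherwise fall back to the session manager's global value."""
    key = "kyuubi.session.idle.timeout"
    return int(session_conf.get(key, manager_conf[key]))

# Assumed global default of 6 hours, for illustration only:
manager_conf = {"kyuubi.session.idle.timeout": 21_600_000}

# Before this change every session used the manager's value; now a
# client-supplied value takes effect for that session alone.
assert idle_timeout_ms({}, manager_conf) == 21_600_000
assert idle_timeout_ms({"kyuubi.session.idle.timeout": 60_000},
                       manager_conf) == 60_000
```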
---

**e366b0950f** [KYUUBI #6920][FOLLOWUP] Spark SQL engine supports Spark 4.0

### Why are the changes needed?

There were some breaking changes after we fixed compatibility for Spark 4.0.0 RC1 in #6920, but Spark has now reached 4.0.0 RC6, which is less likely to receive further breaking changes.

### How was this patch tested?

The changes are extracted from https://github.com/apache/kyuubi/pull/6928, which passed CI with Spark 4.0.0 RC6.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7061 from pan3793/6920-followup.

Closes #6920

```
17a1bd9e5 [Cheng Pan] [KYUUBI #6920][FOLLOWUP] Spark SQL engine supports Spark 4.0

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**a54ee39ab3** [KYUUBI #6984] Fix ValueError when rendering MapType data

[[KYUUBI #6984] Fix ValueError when rendering MapType data](https://github.com/apache/kyuubi/issues/6984)

### Why are the changes needed?

The issue was caused by incorrect iteration of MapType data in the `%table` magic command. When iterating over a `MapType` column, the code used `for k, v in m` directly, which leads to a `ValueError` because raw `Map` entries may not be properly unpacked.

### How was this patch tested?

- [x] Manual testing: executed a query with a `MapType` column and confirmed that the `%table` command now renders it without errors.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import MapType, StringType, IntegerType

spark = SparkSession.builder \
    .appName("MapFieldExample") \
    .getOrCreate()

data = [
    (1, {"a": "1", "b": "2"}),
    (2, {"x": "10"}),
    (3, {"key": "value"})
]
schema = "id INT, map_col MAP<STRING, STRING>"
df = spark.createDataFrame(data, schema=schema)
df.printSchema()
df2 = df.collect()
```

Render the table using `%table`:

```
%table df2
```

Result:

```
{'application/vnd.livy.table.v1+json': {'headers': [{'name': 'id', 'type': 'INT_TYPE'}, {'name': 'map_col', 'type': 'MAP_TYPE'}], 'data': [[1, {'a': '1', 'b': '2'}], [2, {'x': '10'}], [3, {'key': 'value'}]]}}
```

### Was this patch authored or co-authored using generative AI tooling?

No

**Notice:** this PR was co-authored by DeepSeek-R1.

Closes #6985 from JustFeng/patch-1.

Closes #6984

```
e0911ba94 [Reese Feng] Update PySparkTests for magic cmd
bc3ce1a49 [Reese Feng] Update PySparkTests for magic cmd
200d7ad9b [Reese Feng] Fix syntax error in dict iteration in magic_table_convert_map

Authored-by: Reese Feng <10377945+JustFeng@users.noreply.github.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
```
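The underlying Python pitfall is easy to reproduce without Spark (a plain-Python illustration, not the Kyuubi code itself): iterating a dict yields its keys, so tuple-unpacking each entry fails or, worse, silently yields wrong values.

```python
m = {"a": "1", "b": "2"}

# Buggy pattern (what the magic command effectively did): iterating a
# dict yields its *keys*, so `for k, v in m` tuple-unpacks each key
# string. A 1-character key raises ValueError; a 2-character key would
# silently "succeed" with wrong values.
err = None
try:
    for k, v in m:
        pass
except ValueError as exc:
    err = exc
assert err is not None

# Fixed pattern: iterate over the map's entries explicitly.
pairs = [(k, v) for k, v in m.items()]
assert pairs == [("a", "1"), ("b", "2")]
```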
---

**cc9e11ce59** [KYUUBI #6920] Spark SQL engine supports Spark 4.0

### Why are the changes needed?

Spark 4.0 has continued to receive breaking changes since 4.0.0-preview2, and 4.0.0 RC1 is scheduled for 2025-02-15. This PR fixes all compatibility issues with the latest Spark 4.0.0-SNAPSHOT for the Spark SQL engine.

### How was this patch tested?

Pass GHA with `spark-master`.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #6920 from pan3793/spark4.

Closes #6920

```
170430e5e [Cheng Pan] Revert "ci"
c6d889350 [Cheng Pan] fix
86ff7ea2e [Cheng Pan] fix
75d0bf563 [Cheng Pan] ci
9d88c8630 [Cheng Pan] fix spark 4.0 compatibility

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**d49c6314d0** [KYUUBI #6915] Fix ClickHouse integration tests

### Why are the changes needed?

I observed a ClickHouse integration test failure in GHA; after some investigation, the root cause is https://github.com/testcontainers/testcontainers-java/pull/9942

```
/entrypoint.sh: neither CLICKHOUSE_USER nor CLICKHOUSE_PASSWORD is set, disabling network access for user 'default'
```

In short, recent ClickHouse docker images do not allow the `default` user to connect without a password. Unfortunately, `testcontainers-scala-clickhouse` does not expose an API to set `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD`, so as a workaround I pin `clickhouse-server:24.3.15` (the latest version without this restriction) until a fixed version of Testcontainers is available.

This PR also switches the `clickhouse-jdbc` classifier from `http` to `shaded`. The reason is that `http` does not ship ApacheHttpClient5; previously it happened to work because `iceberg-runtime-spark3.5_2.12` packaged un-relocated ApacheHttpClient5 classes, but that was fixed in Iceberg 1.8.0, so `clickhouse-jdbc:http` stopped working:

```
java.lang.NoClassDefFoundError: org/apache/hc/core5/http/HttpRequest
```

Additionally, this PR bumps `clickhouse-jdbc` from 0.6.0 to 0.6.5.

### How was this patch tested?

Pass GHA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #6915 from pan3793/fix-ch-test.

Closes #6915

```
996f095e0 [Cheng Pan] Pin clickhouse-server:24.3.15
d633df07c [Cheng Pan] Bump clickhouse-jdbc 0.6.5
214c8a227 [Cheng Pan] Fix ClickHouse integration tests

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**622190197d** [KYUUBI #6843][FOLLOWUP] Fix 'query-timeout-thread' thread leak

### Why are the changes needed?

If the session manager's ThreadPoolExecutor refuses to execute the async operation, then we need to shut down the query-timeout thread in the catch block. This should also be done in the JDBC and Chat engines.

### How was this patch tested?

### Was this patch authored or co-authored using generative AI tooling?

Closes #6873 from lsm1/branch-followup-6843.

Closes #6843

```
aed9088c8 [senmiaoliu] fix query timeout checker leak in chat engine and jdbc engine

Authored-by: senmiaoliu <senmiaoliu@trip.com>
Signed-off-by: senmiaoliu <senmiaoliu@trip.com>
```
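The leak-and-fix pattern can be sketched in Python (hypothetical names; the engines are Scala/JVM code where the rejection is a `RejectedExecutionException`): the timeout checker is started before submitting the async operation, so when the pool rejects the submission, the checker must be cancelled in the catch block or its thread is leaked.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def launch_with_timeout_checker(pool, operation, timeout_s=60.0):
    """Sketch of the fix: start the query-timeout checker, and if the
    session manager's pool rejects the async operation, shut the checker
    down in the except/catch block instead of leaking its thread."""
    checker = threading.Timer(timeout_s, lambda: None)  # "query-timeout-thread"
    checker.daemon = True
    checker.start()
    try:
        return pool.submit(operation)
    except RuntimeError:   # Python's analogue of RejectedExecutionException
        checker.cancel()   # <- the fix: no leaked timeout thread
        raise

pool = ThreadPoolExecutor(max_workers=1)
pool.shutdown()            # a shut-down pool rejects new work

rejected = False
try:
    launch_with_timeout_checker(pool, lambda: 1)
except RuntimeError:
    rejected = True
assert rejected
```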
---

**a051253774** [KYUUBI #6843] Fix 'query-timeout-thread' thread leak

### Why are the changes needed?

See https://github.com/apache/kyuubi/issues/6843. If the session manager's ThreadPoolExecutor refuses to execute the async operation, then we need to shut down the query-timeout thread in the catch block.

### How was this patch tested?

1. Used jstack to view the threads on the long-lived engine side.
2. Waited for all SQL statements in the engine to finish executing, then used jstack to check the number of query-timeout-thread threads, which should be zero.

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #6844 from ASiegeLion/master.

Closes #6843

```
9107a300e [liupeiyue] [KYUUBI #6843] FIX 'query-timeout-thread' thread leak
4b3417f21 [liupeiyue] [KYUUBI #6843] FIX 'query-timeout-thread' thread leak
ef1f66bb5 [liupeiyue] [KYUUBI #6843] FIX 'query-timeout-thread' thread leak
9e1a015f6 [liupeiyue] [KYUUBI #6843] FIX 'query-timeout-thread' thread leak
78a9fde09 [liupeiyue] [KYUUBI #6843] FIX 'query-timeout-thread' thread leak

Authored-by: liupeiyue <liupeiyue@yy.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**f844afa51b** [KYUUBI #6859] Exclude log4j12 from hive engine module classpath

### Why are the changes needed?

Kyuubi uses log4j2 as its logging framework, but I found that the Hive SQL engine module still unexpectedly pulls log4j 1.2 onto the classpath; we should exclude it to avoid potential issues.

```
build/mvn dependency:tree -pl :kyuubi-hive-sql-engine_2.12
```

```
...
[INFO] +- org.apache.hive:hive-service:jar:3.1.3:provided
[INFO] |  +- org.apache.hive:hive-exec:jar:3.1.3:provided
[INFO] |  |  +- org.apache.zookeeper:zookeeper:jar:3.4.6:provided
[INFO] |  |  |  +- log4j:log4j:jar:1.2.16:provided
...
```

### How was this patch tested?

Checked that `build/mvn dependency:tree | grep 'log4j:log4j:jar:1.2'` returns nothing, and passed GHA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #6859 from pan3793/exclude-log4j1.

Closes #6859

```
287cf78af [Cheng Pan] Exclude log4j12 from hive engine module classpath

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**62c06ad019** [KYUUBI #6726][FOLLOWUP] Fix compilation on scala-2.13

# Description

Currently, Kyuubi supports Trino progress with Scala 2.12 but fails to compile with Scala 2.13.

* Make Trino progress available in scala-2.13

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6841 from naive-zhang/trino-scala13.

Closes #6726

```
0693d281a [Cheng Pan] Update externals/kyuubi-trino-engine/pom.xml
c20c49a3e [native-zhang] modify scala code style
6cfe4453e [native-zhang] unify scala grammar
c0cd92e47 [native-zhang] change total stage num back into 3 in TrinoOperationProgressSuite
852398ddc [naive-zhang] Merge branch 'apache:master' into trino-scala13
6a038ec2b [native-zhang] move trino progress monitor available in profile scala-2.13
f434a2142 [native-zhang] move trino progress monitor into scala-2.12 impl and change state num in some test case
8d291f513 [native-zhang] add scala version diff in pom for trino module

Lead-authored-by: native-zhang <xinsen.zhang.0571@gmail.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Co-authored-by: naive-zhang <xinsen.zhang.0571@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**eb1b5996c9** [KYUUBI #6815] JDBC Engine supports Oracle

# Description

Currently, Kyuubi supports JDBC engines with a limited set of dialects; this PR extends the dialects to support Oracle.

* Introduce Oracle support in the JDBC engine
* Add dialects and tests for Oracle

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Add the test suites `OperationWithOracleEngineSuite`, `OracleOperationSuite`, `OracleSessionSuite` and `OracleStatementSuite`.

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6815 from naive-zhang/jdbc-oracle.

Closes #6815

```
0ffad5b6b [native-zhang] add some brief comments on the caller side for the implementation of Oracle JDBC engine
6f469a135 [naive-zhang] Merge branch 'apache:master' into jdbc-oracle
ae70710e6 [Cheng Pan] Update externals/kyuubi-jdbc-engine/src/main/scala/org/apache/kyuubi/engine/jdbc/dialect/OracleSQLDialect.scala
171d06b9e [native-zhang] use another implementation of transform decimal into int, in engine instead of KyuubiBaseResultSet
7cb74d28e [naive-zhang] Merge branch 'apache:master' into jdbc-oracle
ccd7cae8b [naive-zhang] remove redundant override methods in OracleSQLDialect.scala
a7da4a646 [naive-zhang] remove redundant impl of getTableTypesOperation in OracleSQLDialect.scala
70b49fcba [naive-zhang] Use the single line string if SQL fits in one line, otherwise write it in a pretty style
e58348460 [naive-zhang] Update externals/kyuubi-jdbc-engine/src/main/scala/org/apache/kyuubi/engine/jdbc/dialect/OracleSQLDialect.scala
b33e97a08 [naive-zhang] remove redundant testcontainers-scala-oracle-xe dependency in pom.xml
4c967b98e [naive-zhang] use gvenzl/oracle-free:23.5-slim with docker-compose for test case
0215e6d49 [naive-zhang] Merge branch 'apache:master' into jdbc-oracle
d688b4706 [naive-zhang] change oracle image into gvenzl/oracle-free:23.5-slim
abf983727 [naive-zhang] fix code style checking error in KyuubiConf.scala
d1e82edb1 [naive-zhang] fix code style checking error in settings.md
aa2e2e9ba [naive-zhang] adjust wired space in OracleSQLDialect
b43cea421 [naive-zhang] add oracle configuration for kyuubi.engine.jdbc.connection.provider
397c1cfec [naive-zhang] Merge branch 'apache:master' into jdbc-oracle
2f1b5ed0b [naive-zhang] add jdbc support for Oracle

Lead-authored-by: naive-zhang <xinsen.zhang.0571@gmail.com>
Co-authored-by: native-zhang <xinsen.zhang.0571@gmail.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
```
---

**160bf58042** [KYUUBI #6726] Support trino stage progress

# 🔍 Description

## Issue References 🔗

This pull request fixes https://github.com/apache/kyuubi/issues/6726

## Describe Your Solution 🔧

Add Trino statement progress.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6759 from taylor12805/trino-progress.

Closes #6726

```
6646c9511 [taylor.fan] [KYUUBI #6726] update test case result
d84904e82 [taylor.fan] [KYUUBI #6726] reformat code
2b1c776e1 [taylor.fan] [KYUUBI #6726] reformat code
f635b38de [taylor.fan] [KYUUBI #6726] add test case
7c29ba6f3 [taylor.fan] [KYUUBI #6726] Support trino stage progress

Authored-by: taylor.fan <taylor.fan@vipshop.com>
Signed-off-by: Kent Yao <yao@apache.org>
```
---

**d3520ddbce** [KYUUBI #6769][RELEASE] Bump 1.11.0-SNAPSHOT

# 🔍 Description

## Issue References 🔗

This pull request fixes #

## Describe Your Solution 🔧

Prepare v1.11.0-SNAPSHOT after the branch-1.10 cut:

```shell
build/mvn versions:set -DgenerateBackupPoms=false -DnewVersion="1.11.0-SNAPSHOT"
(cd kyuubi-server/web-ui && npm version "1.11.0-SNAPSHOT")
```

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

---

# Checklist 📝

- [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6769 from bowenliang123/bump-1.11.

Closes #6769

```
6db219d28 [Bowen Liang] get latest_branch by sorting version in branch name
465276204 [Bowen Liang] update package.json
81f2865e5 [Bowen Liang] bump

Authored-by: Bowen Liang <liangbowen@gf.com.cn>
Signed-off-by: Bowen Liang <liangbowen@gf.com.cn>
```
---

**1e9d68b000** [KYUUBI #6368] Flink engine supports user impersonation

# 🔍 Description

## Issue References 🔗

This pull request fixes #6368

## Describe Your Solution 🔧

Support impersonation mode for the Flink SQL engine.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

Tested in a hadoop-testing env.

Connection:

```
beeline -u "jdbc:hive2://hadoop-master1.orb.local:10009/default;hive.server2.proxy.user=spark;principal=kyuubi/_HOSTTEST.ORG?kyuubi.engine.type=FLINK_SQL;flink.execution.target=yarn-application;kyuubi.engine.share.level=CONNECTION;kyuubi.engine.flink.doAs.enabled=true;"
```

SQL:

```
select 1;
```

Result: (screenshot attached in the PR)

Launch engine command:

```
2024-06-12 03:22:10.242 INFO KyuubiSessionManager-exec-pool: Thread-62 org.apache.kyuubi.engine.EngineRef: Launching engine:
/opt/flink-1.18.1/bin/flink run-application \
  -t yarn-application \
  -Dyarn.ship-files=/opt/flink/opt/flink-sql-client-1.18.1.jar;/opt/flink/opt/flink-sql-gateway-1.18.1.jar;/etc/hive/conf/hive-site.xml \
  -Dyarn.application.name=kyuubi_CONNECTION_FLINK_SQL_spark_6170b9aa-c690-4b50-938f-d59cca9aa2d6 \
  -Dyarn.tags=KYUUBI,6170b9aa-c690-4b50-938f-d59cca9aa2d6 \
  -Dcontainerized.master.env.FLINK_CONF_DIR=. \
  -Dcontainerized.master.env.HIVE_CONF_DIR=. \
  -Dyarn.security.appmaster.delegation.token.services=kyuubi \
  -Dsecurity.delegation.token.provider.HiveServer2.enabled=false \
  -Dsecurity.delegation.token.provider.hbase.enabled=false \
  -Dexecution.target=yarn-application \
  -Dsecurity.module.factory.classes=org.apache.flink.runtime.security.modules.JaasModuleFactory;org.apache.flink.runtime.security.modules.ZookeeperModuleFactory \
  -Dsecurity.delegation.token.provider.hadoopfs.enabled=false \
  -c org.apache.kyuubi.engine.flink.FlinkSQLEngine /opt/apache-kyuubi-1.10.0-SNAPSHOT-bin/externals/engines/flink/kyuubi-flink-sql-engine_2.12-1.10.0-SNAPSHOT.jar \
  --conf kyuubi.session.user=spark \
  --conf kyuubi.client.ipAddress=172.20.0.5 \
  --conf kyuubi.engine.credentials=SERUUwACJnRocmlmdDovL2hhZG9vcC1tYXN0ZXIxLm9yYi5sb2NhbDo5MDgzRQAFc3BhcmsEaGl2ZShreXV1YmkvaGFkb29wLW1hc3RlcjEub3JiLmxvY2FsQFRFU1QuT1JHigGQCneevIoBkC6EIrwWDxSg03pnAB8dA295wh+Dim7Fx4FNxhVISVZFX0RFTEVHQVRJT05fVE9LRU4ADzE3Mi4yMC4wLjU6ODAyMEEABXNwYXJrAChreXV1YmkvaGFkb29wLW1hc3RlcjEub3JiLmxvY2FsQFRFU1QuT1JHigGQCneekIoBkC6EIpBHHBSket0SQnlXT5EIMN0U2fUKFRIVvBVIREZTX0RFTEVHQVRJT05fVE9LRU4PMTcyLjIwLjAuNTo4MDIwAA== \
  --conf kyuubi.engine.flink.doAs.enabled=true \
  --conf kyuubi.engine.hive.extra.classpath=/opt/hadoop/share/hadoop/client/*:/opt/hadoop/share/hadoop/mapreduce/* \
  --conf kyuubi.engine.share.level=CONNECTION \
  --conf kyuubi.engine.submit.time=1718162530017 \
  --conf kyuubi.engine.type=FLINK_SQL \
  --conf kyuubi.frontend.protocols=THRIFT_BINARY,REST \
  --conf kyuubi.ha.addresses=hadoop-master1.orb.local:2181 \
  --conf kyuubi.ha.engine.ref.id=6170b9aa-c690-4b50-938f-d59cca9aa2d6 \
  --conf kyuubi.ha.namespace=/kyuubi_1.10.0-SNAPSHOT_CONNECTION_FLINK_SQL/spark/6170b9aa-c690-4b50-938f-d59cca9aa2d6 \
  --conf kyuubi.server.ipAddress=172.20.0.5 \
  --conf kyuubi.session.connection.url=hadoop-master1.orb.local:10009 \
  --conf kyuubi.session.engine.startup.waitCompletion=false \
  --conf kyuubi.session.real.user=spark
```

Launch engine log: (screenshot attached in the PR)

jobmanager log:

```
2024-06-12 03:22:26,400 INFO  org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Loading delegation token providers
2024-06-12 03:22:26,992 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenProvider [] - Renew delegation token with engine credentials: SERUUwACJnRocmlmdDovL2hhZG9vcC1tYXN0ZXIxLm9yYi5sb2NhbDo5MDgzRQAFc3BhcmsEaGl2ZShreXV1YmkvaGFkb29wLW1hc3RlcjEub3JiLmxvY2FsQFRFU1QuT1JHigGQCneevIoBkC6EIrwWDxSg03pnAB8dA295wh+Dim7Fx4FNxhVISVZFX0RFTEVHQVRJT05fVE9LRU4ADzE3Mi4yMC4wLjU6ODAyMEEABXNwYXJrAChreXV1YmkvaGFkb29wLW1hc3RlcjEub3JiLmxvY2FsQFRFU1QuT1JHigGQCneekIoBkC6EIpBHHBSket0SQnlXT5EIMN0U2fUKFRIVvBVIREZTX0RFTEVHQVRJT05fVE9LRU4PMTcyLjIwLjAuNTo4MDIwAA==
2024-06-12 03:22:27,100 INFO  org.apache.kyuubi.engine.flink.FlinkEngineUtils [] - Add new unknown token Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 05 73 70 61 72 6b 04 68 69 76 65 28 6b 79 75 75 62 69 2f 68 61 64 6f 6f 70 2d 6d 61 73 74 65 72 31 2e 6f 72 62 2e 6c 6f 63 61 6c 40 54 45 53 54 2e 4f 52 47 8a 01 90 0a 77 9e bc 8a 01 90 2e 84 22 bc 16 0f
2024-06-12 03:22:27,104 WARN  org.apache.kyuubi.engine.flink.FlinkEngineUtils [] - Ignore token with earlier issue date: Kind: HDFS_DELEGATION_TOKEN, Service: 172.20.0.5:8020, Ident: (token for spark: HDFS_DELEGATION_TOKEN owner=spark, renewer=, realUser=kyuubi/hadoop-master1.orb.localTEST.ORG, issueDate=1718162529936, maxDate=1718767329936, sequenceNumber=71, masterKeyId=28)
2024-06-12 03:22:27,104 INFO  org.apache.kyuubi.engine.flink.FlinkEngineUtils [] - Update delegation tokens. The number of tokens sent by the server is 2. The actual number of updated tokens is 1.
......
2024-06-12 03:22:29,414 INFO  org.apache.flink.runtime.security.token.DefaultDelegationTokenManager [] - Starting tokens update task
2024-06-12 03:22:29,415 INFO  org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - New delegation tokens arrived, sending them to receivers
2024-06-12 03:22:29,422 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Updating delegation tokens for current user
2024-06-12 03:22:29,422 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service: Identifier:[10, 13, 10, 9, 8, 10, 16, -78, -36, -49, -17, -5, 49, 16, 1, 16, -100, -112, -60, -127, -8, -1, -1, -1, -1, 1]
2024-06-12 03:22:29,422 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service: Identifier:[0, 5, 115, 112, 97, 114, 107, 4, 104, 105, 118, 101, 40, 107, 121, 117, 117, 98, 105, 47, 104, 97, 100, 111, 111, 112, 45, 109, 97, 115, 116, 101, 114, 49, 46, 111, 114, 98, 46, 108, 111, 99, 97, 108, 64, 84, 69, 83, 84, 46, 79, 82, 71, -118, 1, -112, 10, 119, -98, -68, -118, 1, -112, 46, -124, 34, -68, 22, 15]
2024-06-12 03:22:29,422 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service:172.20.0.5:8020 Identifier:[0, 5, 115, 112, 97, 114, 107, 0, 40, 107, 121, 117, 117, 98, 105, 47, 104, 97, 100, 111, 111, 112, 45, 109, 97, 115, 116, 101, 114, 49, 46, 111, 114, 98, 46, 108, 111, 99, 97, 108, 64, 84, 69, 83, 84, 46, 79, 82, 71, -118, 1, -112, 10, 119, -98, -112, -118, 1, -112, 46, -124, 34, -112, 71, 28]
2024-06-12 03:22:29,422 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Updated delegation tokens for current user successfully
```

taskmanager log:

```
2024-06-12 03:45:06,622 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Receive initial delegation tokens from resource manager
2024-06-12 03:45:06,627 INFO  org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - New delegation tokens arrived, sending them to receivers
2024-06-12 03:45:06,628 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Updating delegation tokens for current user
2024-06-12 03:45:06,629 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service: Identifier:[10, 13, 10, 9, 8, 10, 16, -78, -36, -49, -17, -5, 49, 16, 1, 16, -100, -112, -60, -127, -8, -1, -1, -1, -1, 1]
2024-06-12 03:45:06,630 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service: Identifier:[0, 5, 115, 112, 97, 114, 107, 4, 104, 105, 118, 101, 40, 107, 121, 117, 117, 98, 105, 47, 104, 97, 100, 111, 111, 112, 45, 109, 97, 115, 116, 101, 114, 49, 46, 111, 114, 98, 46, 108, 111, 99, 97, 108, 64, 84, 69, 83, 84, 46, 79, 82, 71, -118, 1, -112, 10, 119, -98, -68, -118, 1, -112, 46, -124, 34, -68, 22, 15]
2024-06-12 03:45:06,630 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Token Service:172.20.0.5:8020 Identifier:[0, 5, 115, 112, 97, 114, 107, 0, 40, 107, 121, 117, 117, 98, 105, 47, 104, 97, 100, 111, 111, 112, 45, 109, 97, 115, 116, 101, 114, 49, 46, 111, 114, 98, 46, 108, 111, 99, 97, 108, 64, 84, 69, 83, 84, 46, 79, 82, 71, -118, 1, -112, 10, 119, -98, -112, -118, 1, -112, 46, -124, 34, -112, 71, 28]
2024-06-12 03:45:06,636 INFO  org.apache.kyuubi.engine.flink.security.token.KyuubiDelegationTokenReceiver [] - Updated delegation tokens for current user successfully
2024-06-12 03:45:06,636 INFO  org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository [] - Delegation tokens sent to receivers
```

#### Related Unit Tests

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6383 from wForget/KYUUBI-6368.

Closes #6368

```
47df43ef0 [wforget] remove doAsEnabled
984b96c74 [wforget] update settings.md
c7f8d474e [wforget] make generateTokenFile conf to internal
8632176b1 [wforget] address comments
2ec270e8a [wforget] licenses
ed0e22f4e [wforget] separate kyuubi-flink-token-provider module
b66b855b6 [wforget] address comment
d4fc2bd1d [wforget] fix
1a3dc4643 [wforget] fix style
825e2a7a0 [wforget] address comments
a679ba1c2 [wforget] revert remove renewer
cdd499b95 [wforget] fix and comment
19caec6c0 [wforget] pass token to submit process
b2991d419 [wforget] fix
7c3bdde1b [wforget] remove security.delegation.tokens.enabled check
8987c9176 [wforget] fix
5bd8cfe7c [wforget] fix
08992642d [wforget] Implement KyuubiDelegationToken Provider/Receiver
fa16d7def [wforget] enable delegation token manager
e50db7497 [wforget] [KYUUBI #6368] Support impersonation mode for flink sql engine

Authored-by: wforget <643348094@qq.com>
Signed-off-by: Bowen Liang <liangbowen@gf.com.cn>
```
||
|
|
da2401c171 |
[KYUUBI #6688] [SPARK] Avoid trigger execution when getting result schema
# 🔍 Description ## Issue References 🔗 `DataFrame.isEmpty` may trigger execution again; we should avoid it. ## Describe Your Solution 🔧 ## Types of changes 🔖 - [X] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [X] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6688 from wForget/planonly_schema. Closes #6688 265f0ec26 [wforget] fix style d71cc4aa9 [wforget] refactor resultSchema for spark operation 0c36b3d25 [wforget] Avoid trigger execution when getting result schema Authored-by: wforget <643348094@qq.com> Signed-off-by: Bowen Liang <liangbowen@gf.com.cn> |
||
|
|
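The fix above replaces a `DataFrame.isEmpty` call with schema information taken from the plan, so asking for the result schema no longer runs the job a second time. The Kyuubi change itself is Scala; this is a minimal Python sketch of the principle, with `Query`, `result_schema`, and `execute` as hypothetical stand-ins for Spark's DataFrame machinery:

```python
# Hypothetical sketch of "don't re-execute to answer a metadata question".
# `Query`, `result_schema`, and `execute` are illustrative stand-ins for
# Spark's DataFrame/queryExecution, not real Kyuubi or Spark APIs.
class Query:
    def __init__(self, columns):
        self.columns = columns      # known from the analyzed plan
        self.execute_count = 0      # how many times the job actually ran

    def execute(self):              # expensive: launches a job
        self.execute_count += 1
        return [dict.fromkeys(self.columns)]

    def result_schema(self):        # cheap: reads plan metadata only
        return list(self.columns)

q = Query(["id", "name"])
assert q.result_schema() == ["id", "name"]
assert q.execute_count == 0         # fetching the schema ran no job
```

The design point: anything derivable from the analyzed plan (column names, types) should be read from the plan, never recomputed by executing the query.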
1bfc8c5840
|
[KYUUBI #6699] Bump Spark 4.0.0-preview2
# 🔍 Description Spark 4.0.0-preview2 RC1 passed the vote https://lists.apache.org/thread/4ctj2mlgs4q2yb4hdw2jy4z34p5yw2b1 ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 Pass GHA. --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6699 from pan3793/spark-4.0.0-preview2. Closes #6699 2db1f645d [Cheng Pan] 4.0.0-preview2 42055bb1e [Cheng Pan] fix d29c0ef83 [Cheng Pan] disable delta test 98d323b95 [Cheng Pan] fix 2e782c00b [Cheng Pan] log4j-slf4j2-impl fde4bb6ba [Cheng Pan] spark-4.0.0-preview2 Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
8056235ec1 |
[KYUUBI #6696] Fix Trino Status Printer to Prevent Thread Leak
# 🔍 Description ## Issue References 🔗 ## Describe Your Solution 🔧 - use `newDaemonSingleThreadScheduledExecutor` to avoid leaking the `timer` thread - reduce repeated identical status output ## Types of changes 🔖 - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6696 from lsm1/branch-fix-trino-printer. Closes #6696 01f917cb7 [senmiaoliu] fix style 0d20fd1f9 [senmiaoliu] fix trino info printer thread leak Authored-by: senmiaoliu <senmiaoliu@trip.com> Signed-off-by: senmiaoliu <senmiaoliu@trip.com> |
||
|
|
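The two parts of this fix, a single daemon thread instead of a `java.util.Timer` and suppression of repeated identical status lines, can be sketched in Python (the real fix is Scala/Java against the Trino client; all class and method names here are illustrative):

```python
import threading

class StatusPrinter:
    """Illustrative sketch (not the Trino client API): periodic status
    output on a single daemon thread, skipping repeated statuses."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.last_status = None
        self.printed = []
        self._stop = threading.Event()
        # daemon=True mirrors newDaemonSingleThreadScheduledExecutor:
        # the thread can never keep the process (or JVM) alive.
        self._thread = threading.Thread(
            target=self._run, name="trino-status-printer", daemon=True)

    def _run(self):
        while not self._stop.wait(self.interval):
            self.report("RUNNING")

    def report(self, status):
        if status != self.last_status:   # suppress duplicate status lines
            self.printed.append(status)
            self.last_status = status

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

p = StatusPrinter()
p.start()
p.stop()
assert p._thread.daemon               # a daemon thread cannot leak past shutdown

p2 = StatusPrinter()
for s in ["QUEUED", "RUNNING", "RUNNING", "RUNNING", "FINISHED"]:
    p2.report(s)
assert p2.printed == ["QUEUED", "RUNNING", "FINISHED"]
```

A non-daemon timer thread that is never cancelled keeps the process alive forever; marking the worker as a daemon makes a forgotten printer harmless, and explicit `stop()` remains the clean path.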
edbe3f3fef |
[KYUUBI #6681] Log the delete batch request in batch operation log
# 🔍 Description ## Issue References 🔗 As title, log the delete batch request in operation log. ## Describe Your Solution 🔧 Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6681 from turboFei/audit_kill. Closes #6681 8550868a6 [Wang, Fei] withOperationLog Authored-by: Wang, Fei <fwang12@ebay.com> Signed-off-by: Wang, Fei <fwang12@ebay.com> |
||
|
|
db57e9365d |
[KYUUBI #6587] Periodically expire temp files and operation logs on server to avoid memory leak by Files.deleteOnExit
# 🔍 Description ## Issue References 🔗 - ## Describe Your Solution 🔧 Fix the memory leak on server caused by `Files.deleteOnExit`. For long-running Kyuubi server instances, some operation log files and batch job upload files are marked for deletion at exit using `Files.deleteOnExit`. However, the `files` list within the `DeleteOnExitHook` by `Files.deleteOnExit` method continuously accumulates file paths without being cleaned up, leading to a memory leak issue. This PR fix this issue by: 1. introduce a new util `FileExpirationUtils` for similar use of `Files.deleteOnExit`, with exposed method for evict file path from the list to prevent accumulative path list 2. adding a service `TempFileService ` in server module, periodical clean-up the files for operation logging path, uploaded resources and etc. And it evict the paths in `TempFileCleanupUtils` instance after cleanup. 3. add the new config `kyuubi.server.tempFile.expireTime` with a default value of 7 days, to control How often to trigger a file expiration clean-up for stale files ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6587 from bowenliang123/file-expiration. 
Closes #6587 e23b72e08 [liangbowen] change to P14D acaf370e7 [liangbowen] change config name to kyuubi.server.tempFile.expireTime 6c7ddd527 [liangbowen] import ed1e4d76f [liangbowen] comment: ConcurrentHashMap.newKeySet fbf73ccb4 [liangbowen] update 34d3fc71c [liangbowen] add guava to common module's dep 49c10e5ef [Bowen Liang] file expiration Lead-authored-by: Bowen Liang <liangbowen@gf.com.cn> Co-authored-by: liangbowen <liangbowen@gf.com.cn> Co-authored-by: Bowen Liang <liangbowen@gf.com.cn> Signed-off-by: liangbowen <liangbowen@gf.com.cn> |
||
|
|
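A sketch of the tracking-and-eviction idea behind this fix, assuming a hypothetical `TempFileService` in Python (the real service is Scala and is driven by a periodic scheduler; only the config name `kyuubi.server.tempFile.expireTime` and the 7-day default come from the PR):

```python
import os
import tempfile
import time

class TempFileService:
    """Illustrative sketch (not the actual Scala service): track temp
    files explicitly and delete them once they exceed an expiration age,
    instead of registering them with Files.deleteOnExit, whose internal
    path list only ever grows in a long-running server."""

    def __init__(self, expire_seconds):
        self.expire_seconds = expire_seconds  # cf. kyuubi.server.tempFile.expireTime
        self.tracked = {}                     # path -> registration timestamp

    def track(self, path):
        self.tracked[path] = time.time()

    def cleanup(self, now=None):
        """Would be invoked periodically by a scheduled daemon thread."""
        now = time.time() if now is None else now
        for path, registered_at in list(self.tracked.items()):
            if now - registered_at >= self.expire_seconds:
                if os.path.exists(path):
                    os.remove(path)
                # Evict the path so the tracking set cannot leak memory.
                del self.tracked[path]

svc = TempFileService(expire_seconds=7 * 24 * 3600)   # default: 7 days
fd, path = tempfile.mkstemp(prefix="kyuubi-op-log-")
os.close(fd)
svc.track(path)

svc.cleanup()                                 # too young: file survives
assert os.path.exists(path)

svc.cleanup(now=time.time() + 8 * 24 * 3600)  # pretend 8 days passed
assert not os.path.exists(path)
assert path not in svc.tracked
```

The key difference from `Files.deleteOnExit` is the eviction step: once a path is cleaned up, it leaves the tracking structure, so memory use stays bounded no matter how long the server runs.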
705bb2ae2c |
[KYUUBI #6583] Support to cancel Spark python operation
# 🔍 Description ## Issue References 🔗 This pull request fixes #6583 ## Background and Goals Currently, Kyuubi cannot perform operation-level interrupts when executing Python code. When it is necessary to cancel an operation that has been running for a long time, the entire session needs to be interrupted, and the execution context will be lost, which is very unfriendly to users. Therefore, it is necessary to support operation-level interrupts so that the execution context is not lost when the user cancels an operation. ## Describe Your Solution 🔧 Following the implementation of Jupyter Notebook, let the Python process listen for the SIGINT signal; when SIGINT is received, interrupt the currently executing code and catch the resulting KeyboardInterrupt to treat the operation as cancelled. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6612 from yoock/features/support-operation-cancel. Closes #6583 bf6334d8c [Wang, Fei] log error to do not break the cleanup process ae7ad3f3c [Wang, Fei] comments 509627e65 [王龙] PySpark support operation cancel Lead-authored-by: Wang, Fei <fwang12@ebay.com> Co-authored-by: 王龙 <wanglong16@xiaomi.com> Signed-off-by: Wang, Fei <fwang12@ebay.com> |
||
|
|
afc21d3928
|
[KYUUBI #6598] Flink engine module supports building with Scala 2.13
# 🔍 Description This PR makes `kyuubi-flink-sql-engine` compile success with Scala 2.13. Note: As of Flink 1.20, it does not support Scala 2.13, so won't expect the Flink engine to work with Scala 2.13 for now. It would be helpful in the future after Flink removes Scala dependencies(planed in 2.0) then we can use any version of Scala. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 ``` $ build/mvn clean install -DskipTests -Pscala-2.13 ... [INFO] Reactor Summary for Kyuubi Project Parent 1.10.0-SNAPSHOT: [INFO] [INFO] Kyuubi Project Parent .............................. SUCCESS [ 9.031 s] [INFO] Kyuubi Project Util ................................ SUCCESS [ 3.998 s] [INFO] Kyuubi Project Util Scala .......................... SUCCESS [ 8.579 s] [INFO] Kyuubi Project Common .............................. SUCCESS [ 26.006 s] [INFO] Kyuubi Project Embedded Zookeeper .................. SUCCESS [ 6.573 s] [INFO] Kyuubi Project High Availability ................... SUCCESS [ 12.360 s] [INFO] Kyuubi Project Rest Client ......................... SUCCESS [ 5.799 s] [INFO] Kyuubi Project Control ............................. SUCCESS [ 12.345 s] [INFO] Kyuubi Project Events .............................. SUCCESS [ 11.447 s] [INFO] Kyuubi Dev Spark Lineage Extension ................. SUCCESS [ 13.327 s] [INFO] Kyuubi Project Metrics ............................. SUCCESS [ 7.326 s] [INFO] Kyuubi Project Hive JDBC Client .................... SUCCESS [ 7.492 s] [INFO] Kyuubi Project Server Plugin ....................... SUCCESS [ 1.428 s] [INFO] Kyuubi Project Download Externals .................. SUCCESS [ 7.132 s] [INFO] Kyuubi Project Engine Spark SQL .................... SUCCESS [01:11 min] [INFO] Kyuubi Project Server .............................. 
SUCCESS [ 35.930 s] [INFO] Kyuubi Project Hive Beeline ........................ SUCCESS [ 6.833 s] [INFO] Kyuubi Spark Connector Common ...................... SUCCESS [ 10.399 s] [INFO] Kyuubi Spark TPC-DS Connector ...................... SUCCESS [ 13.854 s] [INFO] Kyuubi Spark TPC-H Connector ....................... SUCCESS [ 10.407 s] [INFO] Kyuubi Dev Code Coverage ........................... SUCCESS [ 1.701 s] [INFO] Kyuubi Spark JDBC Dialect plugin ................... SUCCESS [ 6.881 s] [INFO] Kyuubi Dev Spark Authorization Extension ........... SUCCESS [ 21.508 s] [INFO] Kyuubi Dev Spark Authorization Extension Shaded .... SUCCESS [ 0.627 s] [INFO] Kyuubi Project Engine Chat ......................... SUCCESS [ 10.577 s] [INFO] Kyuubi Project Engine Flink SQL .................... SUCCESS [ 22.605 s] [INFO] Kyuubi Project Engine Hive SQL ..................... SUCCESS [ 17.021 s] [INFO] Kyuubi Project Engine JDBC ......................... SUCCESS [ 13.871 s] [INFO] Kyuubi Project Engine Trino ........................ SUCCESS [ 14.339 s] [INFO] Kyuubi Test Integration Tests ...................... SUCCESS [ 0.122 s] [INFO] Kyuubi Test Flink SQL IT ........................... SUCCESS [ 4.563 s] [INFO] Kyuubi Test Hive IT ................................ SUCCESS [ 4.123 s] [INFO] Kyuubi Test Trino IT ............................... SUCCESS [ 3.992 s] [INFO] Kyuubi Project Hive JDBC Shaded Client ............. SUCCESS [ 10.347 s] [INFO] Kyuubi Test Jdbc IT ................................ SUCCESS [ 4.597 s] [INFO] Kyuubi Test Zookeeper IT ........................... SUCCESS [ 2.907 s] [INFO] Kyuubi Project Assembly ............................ 
SUCCESS [ 1.346 s] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 07:03 min [INFO] Finished at: 2024-08-09T11:38:13+08:00 [INFO] ------------------------------------------------------------------------ ``` --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6598 from pan3793/flink-scala-213. Closes #6598 78fcd2076 [Cheng Pan] Flink engine module supports building with Scala 2.13 Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
ae467c2b4e
|
[KYUUBI #6557] Support Flink 1.20
# 🔍 Description Flink 1.20 is out, [Release Note](https://nightlies.apache.org/flink/flink-docs-release-1.20/release-notes/flink-1.20/) ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 CI is updated to cover Flink 1.20 --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6557 from pan3793/flink-1.20. Closes #6557 a414094b9 [Cheng Pan] remove rc 8ee4cf5cd [Cheng Pan] fix url fbaf66071 [Cheng Pan] docs ddbd10ffe [Cheng Pan] Support Flink 1.20 Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
93285f1fdb
|
[KYUUBI #6574] Skip eagerly execute command in PlanOnly mode of Spark Engine
# 🔍 Description ## Issue References 🔗 This pull request fixes #6574 ## Describe Your Solution 🔧 Skip eagerly execute command in physical and execution plan only mode ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [X] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Related Unit Tests added unit test --- # Checklist 📝 - [X] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6575 from wForget/KYUUBI-6574. Closes #6574 6f79228c6 [wforget] fix 9aff4a803 [wforget] fix 839ea4a4f [wforget] fix 8a08c9fa7 [wforget] [KYUUBI #6574] Skip eagerly execute command in PlanOnly mode of Spark Engine Authored-by: wforget <643348094@qq.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
edff97d3e9
|
[KYUUBI #6549] Correctly handle empty Java options for engines
[KYUUBI #6549] Fix 'Could not find or load main class when launching engine' # 🔍 Description ## Issue References 🔗 This pull request fixes #6549 ## Describe Your Solution 🔧 When obtaining configuration items, return `None` if the value is null or empty ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6556 from LiJie20190102/launch_engine. Closes #6549 c57a08aff [lijie0203] [KYUUBI #6549] Fix 'Could not find or load main class when launching engine' 642d807e2 [lijie0203] [KYUUBI #6549] Fix 'Could not find or load main class when launching engine' 67926094c [lijie0203] [KYUUBI #6549] Fix 'Could not find or load main class when launching engine' 4ba9fb587 [lijie0203] [KYUUBI #6549] Fix 'Could not find or load main class when launching engine' Authored-by: lijie0203 <lijie@qishudi.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
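A Python sketch of the null-or-blank check, assuming a hypothetical `java_opts` helper and config key (the Kyuubi fix is Scala): the point is that a blank option must become "absent" rather than an empty command-line argument, which the JVM would otherwise parse as an empty main-class token.

```python
def java_opts(conf, key):
    """Hypothetical helper mirroring the fix: treat missing, null, or
    blank config values as absent (None) so that an empty string is
    never appended to the engine launch command, where it would produce
    'Could not find or load main class'."""
    value = conf.get(key)
    if value is None or value.strip() == "":
        return None
    return value.strip()

conf = {
    "kyuubi.engine.extraJavaOptions": "   ",   # blank: must be dropped
    "spark.driver.memory": "2g",
}
cmd = ["java"]
for key in ("kyuubi.engine.extraJavaOptions", "spark.driver.memory"):
    opt = java_opts(conf, key)
    if opt is not None:
        cmd.append(opt)
assert cmd == ["java", "2g"]          # no empty token in the command
assert java_opts(conf, "missing") is None
```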
063a192c7a
|
[KYUUBI #6545] Deprecate and remove building support for Spark 3.2
# 🔍 Description This pull request aims to remove building support for Spark 3.2, while still keeping the engine support for Spark 3.2. Mailing list discussion: https://lists.apache.org/thread/l74n5zl1w7s0bmr5ovxmxq58yqy8hqzc - Remove Maven profile `spark-3.2`, and references on docs, release scripts, etc. - Keep the cross-version verification to ensure that the Spark SQL engine built on the default Spark version (3.5) still works well on Spark 3.2 runtime. - Merge `kyuubi-extension-spark-common` into `kyuubi-extension-spark-3-3` - Remove `log4j.properties` as Spark moves to Log4j2 since 3.3 (SPARK-37814) ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [x] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 Pass GHA. --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6545 from pan3793/deprecate-spark-3.2. Closes #6545 54c172528 [Cheng Pan] fix f4602e805 [Cheng Pan] Deprecate and remove building support for Spark 3.2 2e083f89f [Cheng Pan] fix style 458a92c53 [Cheng Pan] nit 929e1df36 [Cheng Pan] Deprecate and remove building support for Spark 3.2 Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
8e84c2a72f |
[KYUUBI #6531] Fix SPARK-EngineTab stop/gracefulstop not work
# 🔍 Description ## Issue References 🔗 This pull request fixes #6531 ## Describe Your Solution 🔧 Fix SPARK-EngineTab stop/gracefulstop not work ## Types of changes 🔖 - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6533 from promising-forever/patch-1. Closes #6531 66be0fdfd [promising-forever] [KYUUBI #6531] Fix SPARK-EngineTab stop/gracefulstop not work Authored-by: promising-forever <79634622+promising-forever@users.noreply.github.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
e14d22f13b |
[KYUUBI #6446] Add tests for Spark saveToFile function
# 🔍 Description ## Issue References 🔗 This pull request fixes #6446 ## Describe Your Solution 🔧 Add tests for Spark saveToFile function ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6467 from lsm1/branch-kyuubi-6446. Closes #6446 433f78cb4 [senmiaoliu] fix style f16821c97 [senmiaoliu] add ut for spark engine save result Authored-by: senmiaoliu <senmiaoliu@trip.com> Signed-off-by: senmiaoliu <senmiaoliu@trip.com> |
||
|
|
dfbb6069ca
|
[KYUUBI #6518] Support extracting URL for Spark 4 on YARN
# 🔍 Description ## Issue References 🔗 SPARK-48238 replaced YARN AmIpFilter with a forked implementation, the code should be changed too. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 Review. --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6518 from pan3793/spark-4-url. Closes #6518 c5026500b [Cheng Pan] Support extracting URL for Spark 4 on YARN Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
ef943ecb3b
|
[KYUUBI #6524] Trino engine supports insecure configuration
# 🔍 Description ## Issue References 🔗 This pull request fixes #6524 ## Describe Your Solution 🔧 Trino engine supports insecure configuration, just as trino client supports --insecure parameter ## Types of changes 🔖 - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6525 from jiaoqingbo/6524. Closes #6524 b414b2e05 [jiaoqingbo] update settings.md 129d40742 [jiaoqingbo] [KYUUBI #6524] Trino engine supports insecure configuration 24f374b38 [jiaoqingbo] Merge branch 'master' of https://github.com/jiaoqingbo/incubator-kyuubi e89268e4b [jiaoqingbo] [KYUUBI #6508] Add the key-value pairs in optimizedConf to session conf Authored-by: jiaoqingbo <1178404354@qq.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
66b971f647
|
[KYUUBI #6516] Fix KyuubiSparkUtil.buildURI
# 🔍 Description ## Issue References 🔗 This pull request fixes #6516 ## Describe Your Solution 🔧 1. Using `buildStaticChecked` instead of `build` for static method `fromUri` 2. Using `Array.empty[Object]` instead of empty argument when invoking the build method ## Types of changes 🔖 - [X] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ Throw error like when running the unit test ``` java.lang.RuntimeException at org.apache.kyuubi.util.reflect.DynMethods$UnboundMethod.invoke(DynMethods.java:80) at org.apache.kyuubi.engine.spark.KyuubiSparkUtil$.buildURI(KyuubiSparkUtil.scala:143) at org.apache.kyuubi.engine.spark.KyuubiSparkUtilSuite.$anonfun$new$1(KyuubiSparkUtilSuite.scala:33) at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85) at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83) at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) at org.scalatest.Transformer.apply(Transformer.scala:22) at org.scalatest.Transformer.apply(Transformer.scala:20) at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226) at org.scalatest.TestSuite.withFixture(TestSuite.scala:196) at org.scalatest.TestSuite.withFixture$(TestSuite.scala:195) at org.scalatest.funsuite.AnyFunSuite.withFixture(AnyFunSuite.scala:1564) at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224) at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236) at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306) at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236) at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218) at org.scalatest.funsuite.AnyFunSuite.runTest(AnyFunSuite.scala:1564) at 
org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269) at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413) at scala.collection.immutable.List.foreach(List.scala:431) at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401) at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396) at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475) at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269) at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268) at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564) at org.scalatest.Suite.run(Suite.scala:1114) at org.scalatest.Suite.run$(Suite.scala:1096) at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564) at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273) at org.scalatest.SuperEngine.runImpl(Engine.scala:535) at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273) at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272) at org.scalatest.funsuite.AnyFunSuite.run(AnyFunSuite.scala:1564) at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:47) at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1321) at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1315) at scala.collection.immutable.List.foreach(List.scala:431) at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1315) at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:992) at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:970) at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1481) at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:970) at org.scalatest.tools.Runner$.run(Runner.scala:798) at 
org.scalatest.tools.Runner.run(Runner.scala) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2or3(ScalaTestRunner.java:43) at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:26) ``` #### Behavior With This Pull Request 🎉 Test passed #### Related Unit Tests --- # Checklist 📝 - [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6517 from jeanlyn/issue-6516. Closes #6516 c6bf8b622 [jeanlyn] adjust e50e58531 [jeanlyn] adjust 8afdc0c97 [jeanlyn] fix a61ae7012 [jeanlyn] fix buildURI error and adding unit tests Authored-by: jeanlyn <me@jeanlyn.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
ab273c8ba3
|
[KYUUBI #6008] RESTful API supports killing engine forcibly
# 🔍 Description ## Issue References 🔗 ## Describe Your Solution 🔧 I'd like to introduce the feature that allows users to forcibly kill an engine through API. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [x] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6008 from zhaohehuhu/dev-0123. Closes #6008 00c208a26 [Cheng Pan] fix 8721a2d2a [Cheng Pan] log efc7587f7 [Cheng Pan] client cd5129db3 [Cheng Pan] fix ut 5e1b6a161 [Cheng Pan] Update kyuubi-server/src/test/scala/org/apache/kyuubi/server/api/v1/AdminResourceSuite.scala 72d7df357 [Cheng Pan] Update kyuubi-server/src/main/scala/org/apache/kyuubi/engine/ApplicationOperation.scala 6d5d08710 [Cheng Pan] Update kyuubi-server/src/main/scala/org/apache/kyuubi/engine/ApplicationOperation.scala b013194d1 [zhaohehuhu] move the position of log 0cdeede7a [zhaohehuhu] restore ENGINE_SPARK_REGISTER_ATTRIBUTES f826d0515 [zhaohehuhu] reformat a13466e37 [zhaohehuhu] update doc and log string encoded 3a2f5970a [zhaohehuhu] refactor ae24ea74d [zhaohehuhu] refactor UT 936a54e27 [Wang, Fei] register app mgr info 9bacc2c8b [hezhao2] fix UTs 11106d75b [Wang, Fei] comments ba57c2c3f [hezhao2] refactor code to delete the node and then kill application 634ceb677 [hezhao2] reformat ab31382ee [hezhao2] reformat 513bcdc57 [hezhao2] fix UT 506220654 [hezhao2] get refId by user, sharelevel and subdomain 3ad9577df [hezhao2] rename params to support multiple engines 632c56b88 [hezhao2] fix unused import bd7bb45f0 [hezhao2] refactor fb9b25176 [hezhao2] add default value for forceKill param 070aad06f [hezhao2] refactor 
51827ecde [hezhao2] fix UT f11e7657e [hezhao2] add an UT 8a65cf113 [hezhao2] refactor code d6f82ff9a [hezhao2] refactor code f3ab9c546 [hezhao2] new parameter added to decide whether to kill forcefully handle the result of killApplication 5faa5b54f [hezhao2] kill engine forcibly Lead-authored-by: hezhao2 <hezhao2@cisco.com> Co-authored-by: zhaohehuhu <luoyedeyi459@163.com> Co-authored-by: Cheng Pan <chengpan@apache.org> Co-authored-by: Cheng Pan <pan3793@gmail.com> Co-authored-by: Wang, Fei <fwang12@ebay.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
315adda353
|
[KYUUBI #6499] Rewrite some utility methods in Java
# 🔍 Description This PR rewrites some utility methods in Java, specifically, ``` Utils.isWindows Utils.isMac Utils.findLocalInetAddress ``` and moves them from `kyuubi-common`'s `Utils` to the `kyuubi-util`'s `JavaUtils`, so that they could be used in other modules that do not depend on `kyuubi-common`. ## Types of changes 🔖 - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 Pass GHA. --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6499 from pan3793/javautils. Closes #6499 565936def [Cheng Pan] fix f06a85e9f [Cheng Pan] Move some untiliy methods in Java Authored-by: Cheng Pan <chengpan@apache.org> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
0a53415d92
|
[KYUUBI #6469] Lazily initialize RecordReaderIterator to avoid driver oom when fetching big result set
# 🔍 Description ## Issue References 🔗 This pull request fixes https://github.com/apache/kyuubi/issues/6469 ## Describe Your Solution 🔧 Instead of initializing every RecordReaderIterator when creating the OrcFileIterator, we can lazily initialize each RecordReaderIterator, so that only the one reading the file currently being fetched by the client is held in driver memory. ## Types of changes 🔖 - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) Closes #6470 from Z1Wu/bugfix-fetch-big-resultset-lazily. Closes #6469 83208018c [吴梓溢] update 56284e68e [吴梓溢] update Authored-by: 吴梓溢 <wuziyi02@corp.netease.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
||
|
|
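The lazy-initialization idea maps naturally onto generators; `RecordReader`, the file contents, and the counters below are illustrative stand-ins for the ORC reader (the real change is Scala against Spark's APIs), not actual Kyuubi code:

```python
class RecordReader:
    """Stand-in for an ORC RecordReader: constructing one represents
    opening a file and buffering rows in driver memory (illustrative,
    not the real Spark/ORC API)."""
    opened = 0

    def __init__(self, rows):
        RecordReader.opened += 1
        self.rows = rows

    def __iter__(self):
        return iter(self.rows)

def eager_iterator(files):
    readers = [RecordReader(f) for f in files]  # every reader opened up front
    for reader in readers:
        yield from reader

def lazy_iterator(files):
    for f in files:                             # open one reader at a time
        yield from RecordReader(f)

files = [[1, 2], [3], [4, 5]]

RecordReader.opened = 0
eager = eager_iterator(files)
next(eager)
assert RecordReader.opened == 3   # all readers live at once: OOM risk

RecordReader.opened = 0
lazy = lazy_iterator(files)
assert next(lazy) == 1
assert RecordReader.opened == 1   # only the reader being fetched exists
assert list(lazy) == [2, 3, 4, 5]
assert RecordReader.opened == 3   # opened one by one as the client fetched
```

With many large result files, the eager variant holds every reader's buffers in the driver at once; the lazy variant bounds driver memory to a single open reader, which is the fix's intent.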
c6f2ca273c
|
[KYUUBI #6476] Fix incomplete app events deserialization in SHS
# 🔍 Description ## Issue References 🔗 This pull request fixes #6476 : spark historyserver -> Show incomplete applications -> kyuubi query engine ui error(java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long). The reason: it's related to https://github.com/FasterXML/jackson-module-scala/issues/62 ## Describe Your Solution 🔧 add JsonDeserialize(contentAs = classOf[java.lang.Long]) annotation ## Types of changes 🔖 - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to change) ## Test Plan 🧪 #### Behavior Without This Pull Request ⚰️ #### Behavior With This Pull Request 🎉 #### Related Unit Tests --- # Checklist 📝 - [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html) **Be nice. Be informative.** Closes #6479 from felixzh2020/issues/6476. Closes #6476 034bfe53c [felixzh] [KYUUBI apache#6476] spark historyserver Show incomplete applications kyuubi query engine ui error b7b0db278 [felixzh] [KYUUBI apache#6476] spark historyserver Show incomplete applications kyuubi query engine ui error a66163a5a [felixzh] [KYUUBI apache#6476] spark historyserver Show incomplete applications kyuubi query engine ui error Authored-by: felixzh <felixzh2020@126.com> Signed-off-by: Cheng Pan <chengpan@apache.org> |
**efdb67ff39** [KYUUBI #6302][FOLLOWUP] Skip spark job group cancellation on incremental collect mode
# 🔍 Description

## Issue References 🔗

This pull request fixes https://github.com/apache/kyuubi/pull/6473#discussion_r1642652411

## Describe Your Solution 🔧

Add a configuration to control whether to skip job group cancellation for incremental collect queries, skipping by default for safety.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6482 from XorSum/features/skip-cancel-incremental.

Closes #6302

440311f07 [xorsum] reformat
edbc37868 [bkhan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/operation/ExecuteStatement.scala
d6c99366c [xorsum] one line
9f40405c7 [xorsum] update configuration
b1526319e [xorsum] skip job group cancellation on incremental collect mode

Lead-authored-by: xorsum <xorsum@outlook.com>
Co-authored-by: bkhan <bkhan@trip.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**7de6371d9a** [KYUUBI #6302] Call cancelJobGroup immediately after statement execution finished
# 🔍 Description

## Issue References 🔗

This pull request fixes #6302: when SQL is submitted via the Kyuubi REST API (the same Kyuubi path as BEELINE), the leftover job is not cancelled, see https://github.com/apache/kyuubi/issues/6302#issuecomment-2160572624.

The reason: a Beeline session calls `cancelJobGroup` in `SparkOperation#cleanup`, but a REST API session never calls `SparkOperation#cleanup`, so leftover jobs submitted via the REST API are not cancelled.

## Describe Your Solution 🔧

The modification: call `SparkOperation#cleanup` in the finally clause of `ExecuteStatement#executeStatement`.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6473 from XorSum/features/cancel-after-execute-stmt.

Closes #6302

16dd508e4 [xorsum] operation executeStatement cancel group

Authored-by: xorsum <xorsum@outlook.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
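The shape of the fix is simply "put the cleanup in a finally so every entry point pays for it". A minimal sketch with hypothetical names (this is not the actual Kyuubi code, which cancels a Spark job group):

```java
import java.util.concurrent.Callable;

// Sketch: cleanup runs whether the statement is driven by the beeline path
// or the REST path, because it sits in the execution method's finally block.
class StatementRunner {
    boolean jobGroupCancelled = false;

    // Stand-in for SparkContext.cancelJobGroup(...)
    void cancelJobGroup() { jobGroupCancelled = true; }

    int executeStatement(Callable<Integer> body) throws Exception {
        try {
            return body.call();
        } finally {
            // Previously only the session-close path performed this cleanup,
            // so REST-submitted statements leaked running jobs.
            cancelJobGroup();
        }
    }
}
```

Because the cancellation lives inside `executeStatement` itself, it fires on success, failure, and cancellation alike, regardless of which frontend drove the call.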
**017d8ccd7e** [KYUUBI #6458] Remove commons-logging from binary release
# 🔍 Description

[`jcl-over-slf4j`](https://www.slf4j.org/legacy.html#jcl-over-slf4j) is a drop-in replacement for `commons-logging`; the latter should not be present in the final classpath, otherwise there are potential class conflict issues. The current dependency check is also problematic, so this PR changes it to always perform "install" to fix the false negative report.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Simply delete `commons-logging-1.1.3.jar` from `apache-kyuubi-1.9.1-bin.tgz` and everything works.

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6458 from pan3793/commons-logging.

Closes #6458

114ec766a [Cheng Pan] fix
79d4121a1 [Cheng Pan] fix
6633e83ee [Cheng Pan] fix
21127ed0b [Cheng Pan] always perform install on dep check
98b13dfcf [Cheng Pan] Remove commons-logging from binary release

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**fe5377e0fa** [KYUUBI #5957] Flink engine should not load kyuubi-defaults.conf
# 🔍 Description
This is the root cause of #5957, which was accidentally introduced in
**a586cb4452** [KYUUBI #6353] Catch exception for closing flink internal session
# 🔍 Description

## Issue References 🔗

This pull request fixes #6353

## Describe Your Solution 🔧

Catch exceptions when closing the Flink internal session.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6354 from wForget/KYUUBI-6353.

Closes #6353

32fc9afd9 [wforget] [KYUUBI #6353] Catch exception for closing flink internal session

Authored-by: wforget <643348094@qq.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**a07c57f064** [KYUUBI #6427] Extract data lake artifact names as maven properties
# 🔍 Description
Improve data lake dependency management by extracting the following Maven properties:
- `delta.artifact`
- `hudi.artifact`
- `iceberg.artifact`
- `paimon.artifact`
It often takes a while for downstream data lakes to support new Spark versions; extracting these properties makes them easy to override in a new profile in the Kyuubi project's `pom.xml`, as a workaround until the data lake jars are available.
One use case is
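Such a property extraction might look like the following `pom.xml` fragment. This is an illustrative sketch only, not the actual Kyuubi build files; the coordinates and profile id are assumptions for demonstration:

```xml
<!-- Illustrative only: the artifact name becomes a property... -->
<properties>
  <iceberg.artifact>iceberg-spark-runtime-3.5_${scala.binary.version}</iceberg.artifact>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>${iceberg.artifact}</artifactId>
    <version>${iceberg.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<profiles>
  <!-- ...so a new Spark profile can keep pointing at the latest runtime jar
       that actually exists, until the matching artifact is published. -->
  <profile>
    <id>spark-next</id>
    <properties>
      <iceberg.artifact>iceberg-spark-runtime-3.5_${scala.binary.version}</iceberg.artifact>
    </properties>
  </profile>
</profiles>
```

The point is that only one `<properties>` entry needs to change per profile, rather than editing every dependency declaration that references the data lake runtime.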
**1fb1f854eb** [KYUUBI #6439] kyuubi-util-scala test jar leaked to compile scope
# 🔍 Description

The `kyuubi-util-scala_2.12-<version>-tests.jar` accidentally leaked into the compile scope but should be in the test scope.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Run `build/dist` and check `dist/jars`.

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6439 from pan3793/util-scala-test.

Closes #6439

0576248f5 [Cheng Pan] fix
2bf2408f5 [Cheng Pan] fix
f7151dfc6 [Cheng Pan] kyuubi-util-scala test jar leaked to compile scope

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**71649daedc** [KYUUBI #6437] Fix Spark engine query result save to HDFS
# 🔍 Description

## Issue References 🔗

This pull request fixes #6437

## Describe Your Solution 🔧

Use `org.apache.hadoop.fs.Path` instead of `java.nio.file.Paths` to avoid an unexpected change of the `OPERATION_RESULT_SAVE_TO_FILE_DIR` scheme.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

Spark jobs failed to start with the error `java.io.IOException: JuiceFS initialized failed for jfs:///` under the conf `kyuubi.operation.result.saveToFile.dir=jfs://datalake/tmp`; `hdfs://xxx:port/tmp` may hit similar errors.

#### Behavior With This Pull Request 🎉

Users can use an HDFS dir as `kyuubi.operation.result.saveToFile.dir` without error.

#### Related Unit Tests

It seems no test suites were added in #5591 and #5986; I'll try to build a dist and test with our internal cluster.

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6444 from camper42/save-to-hdfs.

Closes #6437

990f0a728 [camper42] [Kyuubi #6437] Fix Spark engine query result save to HDFS

Authored-by: camper42 <camper.xlii@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
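The root cause is easy to reproduce in isolation: `java.nio.file.Paths` treats the whole string as a local filesystem path, and POSIX path normalization collapses the `//` after the scheme, mangling `jfs://datalake/tmp` into something no Hadoop filesystem recognizes. A URI-aware API keeps the scheme and authority intact (plain `java.net.URI` is used below to show the contrast; the actual fix uses `org.apache.hadoop.fs.Path`):

```java
import java.net.URI;
import java.nio.file.Paths;

class SchemeDemo {
    // java.nio.file.Paths normalizes the string as a local path,
    // collapsing the double slash after the scheme.
    static String viaNioPaths(String dir) {
        return Paths.get(dir, "result-file").toString();
    }

    // A URI-aware API preserves scheme ("jfs") and authority ("datalake").
    static String viaUri(String dir) {
        return URI.create(dir + "/result-file").toString();
    }
}
```

On a POSIX platform, `viaNioPaths("jfs://datalake/tmp")` no longer starts with `jfs://`, which is exactly the "scheme unexpected change" the PR describes.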
**1e08064123** [KYUUBI #6425] Fix tests in spark engine and kyuubi server modules with Spark 4.0
# 🔍 Description

This PR fixes tests in the Spark engine and Kyuubi server modules with Spark 4.0.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Since Spark 4.0.0-preview1 is still in the voting phase, this PR does not add CI; the change was tested in https://github.com/apache/kyuubi/pull/6407 with Spark 4.0.0-preview1 RC1.

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6425 from pan3793/spark-4.

Closes #6425

101986416 [Cheng Pan] Fix tests in spark engine and kyuubi server modules with Spark 4.0

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**4cbecdc12f** [KYUUBI #6367] Flink SQL engine supports RenewDelegationToken
# 🔍 Description

## Issue References 🔗

This pull request fixes #6367

## Describe Your Solution 🔧

- Implement the `RenewDelegationToken` method in `FlinkTBinaryFrontendService`.
- Pass the `kyuubi.engine.credentials` configuration when starting the Flink engine.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Test connection:

```
"jdbc:hive2://hadoop-master1.orb.local:10009/default;hive.server2.proxy.user=spark;principal=kyuubi/_HOST@TEST.ORG?kyuubi.engine.type=FLINK_SQL;flink.execution.target=yarn-application"
```

Flink engine builder command: (screenshot)

JobManager log:

```
2024-05-22 07:46:46,545 INFO org.apache.kyuubi.engine.flink.FlinkTBinaryFrontendService [] - Add new unknown token Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 05 73 70 61 72 6b 04 68 69 76 65 28 6b 79 75 75 62 69 2f 68 61 64 6f 6f 70 2d 6d 61 73 74 65 72 31 2e 6f 72 62 2e 6c 6f 63 61 6c 40 54 45 53 54 2e 4f 52 47 8a 01 8f 9f 3f d5 4c 8a 01 8f c3 4c 59 4c 0b 06
2024-05-22 07:46:46,547 WARN org.apache.kyuubi.engine.flink.FlinkTBinaryFrontendService [] - Ignore token with earlier issue date: Kind: HDFS_DELEGATION_TOKEN, Service: 172.20.0.5:8020, Ident: (token for spark: HDFS_DELEGATION_TOKEN owner=spark, renewer=spark, realUser=kyuubi/hadoop-master1.orb.local@TEST.ORG, issueDate=1716363711750, maxDate=1716968511750, sequenceNumber=15, masterKeyId=7)
2024-05-22 07:46:46,548 INFO org.apache.kyuubi.engine.flink.FlinkTBinaryFrontendService [] - Update delegation tokens. The number of tokens sent by the server is 2. The actual number of updated tokens is 1.
```

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6371 from wForget/KYUUBI-6367.

Closes #6367

83b402aa0 [wforget] Revert "change Base64 encoder/decoder"
f5c08eb45 [wforget] change Base64 encoder/decoder
e8c66dfc5 [wforget] fix test
e59820b3e [wforget] [KYUUBI #6367] Support RenewDelegationToken for flink sql engine

Authored-by: wforget <643348094@qq.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**5b592d07ca** [KYUUBI #6404] Fix HiveResult.toHiveString compatibility for Spark 4.0
# 🔍 Description

SPARK-47911 introduced breaking changes to `HiveResult.toHiveString`; here we use reflection to fix the compatibility.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

```
build/mvn clean install -Pscala-2.13 -Pspark-master \
  -pl externals/kyuubi-spark-sql-engine -am \
  -Dtest=none -DwildcardSuites=org.apache.kyuubi.engine.spark.schema.RowSetSuite
```

Before, a compilation error:

```
[INFO] --- scala-maven-plugin:4.8.0:compile (scala-compile-first) @ kyuubi-spark-sql-engine_2.13 ---
...
[ERROR] [Error] /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/schema/RowSet.scala:30: not enough arguments for method toHiveString: (a: (Any, org.apache.spark.sql.types.DataType), nested: Boolean, formatters: org.apache.spark.sql.execution.HiveResult.TimeFormatters, binaryFormatter: org.apache.spark.sql.execution.HiveResult.BinaryFormatter): String. Unspecified value parameter binaryFormatter.
```

After, the UT passes:

```
[INFO] --- scalatest-maven-plugin:2.2.0:test (test) @ kyuubi-spark-sql-engine_2.13 ---
[INFO] ScalaTest report directory: /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/target/surefire-reports
Discovery starting.
Discovery completed in 1 second, 959 milliseconds.
Run starting. Expected test count is: 3
RowSetSuite:
- column based set
- row based set
- to row set
Run completed in 2 seconds, 712 milliseconds.
Total number of tests run: 3
Suites: completed 2, aborted 0
Tests: succeeded 3, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6404 from pan3793/hive-string.

Closes #6404

6b3c743eb [Cheng Pan] fix breaking change of HiveResult.toHiveString caused by SPARK-47911

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
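The reflection approach can be sketched generically: look up whichever overload exists on the target class once, and supply the extra argument only when the newer signature is present. The two stand-in classes below model an API whose method grew a parameter between versions (they are illustrative, not `HiveResult` itself):

```java
import java.lang.reflect.Method;

// Stand-ins for two library generations: the "new" version of the
// method grew an extra parameter, breaking source compatibility.
class OldApi { static String fmt(String v) { return "old:" + v; } }
class NewApi { static String fmt(String v, boolean flag) { return "new:" + v + ":" + flag; } }

class CompatShim {
    // Resolve whichever signature exists on the given class and call it.
    static String callFmt(Class<?> cls, String v) throws Exception {
        try {
            Method m = cls.getDeclaredMethod("fmt", String.class, boolean.class);
            return (String) m.invoke(null, v, true);   // newer signature
        } catch (NoSuchMethodException e) {
            Method m = cls.getDeclaredMethod("fmt", String.class);
            return (String) m.invoke(null, v);          // fall back to the old one
        }
    }
}
```

In production code the `Method` lookup would typically be cached in a `val`/static field rather than repeated per call, since reflective lookup is the expensive part.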
**9c1b779b10** [KYUUBI #6405] Spark engine supports both javax and jakarta ws.rs namespaces
# 🔍 Description

Spark 4.0 upgraded Jersey from 2 to 3 and also migrated from `javax.ws.rs` to `jakarta.ws.rs` in SPARK-47118; this breaks the Spark SQL engine compilation with Spark 4.0.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

```
build/mvn clean install -Pscala-2.13 -Pspark-master \
  -pl externals/kyuubi-spark-sql-engine -am -DskipTests
```

Before:

```
[INFO] --- scala-maven-plugin:4.8.0:compile (scala-compile-first) @ kyuubi-spark-sql-engine_2.13 ---
[INFO] Compiler bridge file: /home/kyuubi/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.13-1.8.0-bin_2.13.8__61.0-1.8.0_20221110T195421.jar
[INFO] compiler plugin: BasicArtifact(com.github.ghik,silencer-plugin_2.13.8,1.7.13,null)
[INFO] compiling 61 Scala sources to /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/target/scala-2.13/classes ...
[ERROR] [Error] /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/operation/ExecutePython.scala:27: object ws is not a member of package javax
[ERROR] [Error] /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/operation/ExecutePython.scala:307: not found: value UriBuilder
[ERROR] [Error] /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/operation/ExecutePython.scala:320: not found: value UriBuilder
```

After:

```
[INFO] --- scala-maven-plugin:4.8.0:compile (scala-compile-first) @ kyuubi-spark-sql-engine_2.13 ---
[INFO] Compiler bridge file: /home/kyuubi/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.13-1.8.0-bin_2.13.8__61.0-1.8.0_20221110T195421.jar
[INFO] compiler plugin: BasicArtifact(com.github.ghik,silencer-plugin_2.13.8,1.7.13,null)
[INFO] compiling 61 Scala sources to /home/kyuubi/apache-kyuubi/externals/kyuubi-spark-sql-engine/target/scala-2.13/classes ...
[INFO] compile in 19.2 s
```

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6405 from pan3793/jersey.

Closes #6405

6cce23b01 [Cheng Pan] SPARK-47118 Jersey

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
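Supporting both namespaces from a single binary usually reduces to a try-load with fallback: attempt the preferred class name, fall back to the legacy one. The sketch below uses two always-present stdlib class names in the test as stand-ins for `jakarta.ws.rs.core.UriBuilder` and its `javax` twin (this is the general pattern, not the actual Kyuubi code):

```java
class NamespaceLoader {
    // Try each candidate class name in order; return the first one
    // present on the classpath.
    static Class<?> loadFirstAvailable(String... names) {
        for (String n : names) {
            try {
                return Class.forName(n);
            } catch (ClassNotFoundException ignored) {
                // not on this classpath -- try the next namespace
            }
        }
        throw new IllegalStateException("none of the candidate classes found");
    }
}
```

Once the `Class<?>` is in hand, the caller invokes the needed factory methods reflectively, so the same jar runs against both Jersey 2 (`javax.ws.rs`) and Jersey 3 (`jakarta.ws.rs`) runtimes.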
**586f6008bd** [KYUUBI #6399] Spark Kyuubi UI supports both javax and jakarta servlet namespaces
# 🔍 Description

Spark 4.0 migrated from `javax.servlet` to `jakarta.servlet` in SPARK-47118, which breaks the binary compatibility of `SparkUITab` and `WebUIPage` that Kyuubi uses, thus breaking the previous assumption of the Kyuubi Spark SQL engine: a single jar built with the default Spark version, compatible with all supported versions of the Spark runtime.

## Describe Your Solution 🔧

This PR uses bytebuddy to dynamically generate classes, and Java reflection to find and dispatch method invocations at runtime, to recover the existing compatibility of the Kyuubi Spark SQL engine.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Build with Spark 3.5:

```
build/dist --tgz --web-ui --spark-provided --flink-provided --hive-provided -Pspark-3.5
```

It produces both Scala 2.12 and 2.13 Spark SQL engine jars:

- `kyuubi-spark-sql-engine_2.12-1.10.0-SNAPSHOT.jar`
- `kyuubi-spark-sql-engine_2.13-1.10.0-SNAPSHOT.jar`

Run with Spark 3.4 Scala 2.12:
<img width="1639" alt="image" src="https://github.com/apache/kyuubi/assets/26535726/caeef30d-7467-4942-a56a-88a7c93ef7cc">

Run with Spark 3.5 Scala 2.13:
<img width="1639" alt="image" src="https://github.com/apache/kyuubi/assets/26535726/c339c1e9-c07f-4952-9a57-098b832c889f">

Run with Spark 4.0.0-preview1 Scala 2.13:
<img width="1639" alt="image" src="https://github.com/apache/kyuubi/assets/26535726/a3fb6e77-b27e-4634-8acf-245a26b39d2b">

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6399 from pan3793/ui-4.0.

Closes #6399

e0104f6df [Cheng Pan] nit
a2f9df4fa [Cheng Pan] nit
c369ab2e3 [Cheng Pan] nit
ec1c45f66 [Cheng Pan] nit
3e05744d6 [Cheng Pan] fix
a7e38cc1e [Cheng Pan] nit
fa14a0d98 [Cheng Pan] refactor
9d0ce6111 [Cheng Pan] A work version
fc78b58e4 [Cheng Pan] fix startup
d74c1c0fe [Cheng Pan] fix
50066f563 [Cheng Pan] nit
f5ad4c760 [Cheng Pan] Kyuubi UI supports Spark 4.0

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
**b810bcc7ea** Revert "[KYUUBI #6390] Temporarily disable UI Tab for Spark 4.0 and above"
This reverts commit
**97aa4f7e1f** [KYUUBI #6400] Fix memory leak when using saveToFile
# 🔍 Description

## Issue References 🔗

Fix a memory leak when using saveToFile mode.

FYI: https://stackoverflow.com/questions/45649044/scala-stream-iterate-and-memory-management

Scala's `Stream` is iterable again (memoized), which means it keeps all the elements you iterate through in case you want to see them again.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

Closes #6400 from turboFei/memory_leak.

Closes #6400

cdea358d6 [Wang, Fei] fix memory leak

Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
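The leak mechanism deserves a sketch: Scala's `Stream` memoizes every cell it forces, so as long as code holds the head while iterating, every row fetched so far stays reachable. The minimal memoizing cons-stream below models that mechanism in Java (it models the behavior only, not the Kyuubi or Scala library code): after one traversal, nothing is re-evaluated on the second pass because every cell was cached, and all of them remain pinned via the head reference.

```java
import java.util.function.Supplier;

// Minimal memoizing cons-stream, modeling Scala's Stream: once a cell's
// tail is forced, the result is cached and stays reachable from the head.
class MemoStream<T> {
    final T head;
    private Supplier<MemoStream<T>> thunk;
    private MemoStream<T> memoTail;

    MemoStream(T head, Supplier<MemoStream<T>> tail) {
        this.head = head;
        this.thunk = tail;
    }

    MemoStream<T> tail() {
        if (thunk != null) {   // force at most once...
            memoTail = thunk.get();
            thunk = null;      // ...then cache: the cell is now pinned
        }
        return memoTail;
    }

    static int evaluations = 0; // counts cell constructions for the demo

    static MemoStream<Integer> range(int from, int to) {
        if (from >= to) return null;
        evaluations++;
        return new MemoStream<>(from, () -> range(from + 1, to));
    }
}
```

An `Iterator`, by contrast, hands each element over exactly once and keeps no reference to what it already produced, which is why switching away from head-retaining `Stream` traversal fixes this class of leak.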