Compare commits

...

10 Commits

Author SHA1 Message Date
wuziyi
f7e10e65d3
[KYUUBI #7153] Share JAAS configuration for Zookeeper client to avoid server OOM
### Why are the changes needed?

Share the JAAS configuration for Zookeeper clients that use the same keytab and principal, to avoid server OOM caused by nested JAAS configurations.

Fixes https://github.com/apache/kyuubi/issues/7153
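The fix can be pictured as a cache keyed by keytab and principal, so repeated Zookeeper client creations reuse one JAAS configuration instead of wrapping a new one around the old each time. The sketch below is illustrative only — `JaasConf` and the cache shape are stand-ins, not Kyuubi's actual API:

```scala
import java.util.concurrent.ConcurrentHashMap

// Illustrative stand-in for a JAAS configuration object.
case class JaasConf(keytab: String, principal: String)

object JaasConfCache {
  private val cache = new ConcurrentHashMap[(String, String), JaasConf]()

  // Reuse one configuration per (keytab, principal) instead of building a
  // new (possibly nested) one for every Zookeeper client, which is what
  // eventually exhausted server memory.
  def getOrCreate(keytab: String, principal: String): JaasConf =
    cache.computeIfAbsent((keytab, principal), _ => JaasConf(keytab, principal))
}
```

Two clients with the same keytab and principal then observe the same cached instance rather than a deepening chain of configurations.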

### How was this patch tested?

Unit tests.

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #7154 from Z1Wu/fix/comm_reuse_zk_jass.

Closes #7153

3b0169a00 [Cheng Pan] Update kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/client/zookeeper/ZookeeperClientProvider.scala
5873d12f3 [Cheng Pan] Update kyuubi-ha/src/main/scala/org/apache/kyuubi/ha/client/zookeeper/ZookeeperClientProvider.scala
0d8a18a4e [wuziyi] nit
ffa7d29fc [wuziyi] [fix] share jaas configuration for zookeeper client with same keytab and principal to avoid server oom due to recursive jaas configuration.

Lead-authored-by: wuziyi <wuziyi02@corp.netease.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-15 14:21:46 +08:00
chenliang.lu
fb2b7ef9c6 [KYUUBI #7126][LINEAGE] Support merge into syntax in row level catalog
### Why are the changes needed?

In catalogs that support the row-level operation interface (Iceberg, etc.), MERGE INTO is rewritten into WriteDelta or ReplaceData by an optimizer rule. We should support extracting lineage relationships for these plan types.
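For context, a statement of the kind this change covers might look like the following (catalog, database, and table names are illustrative):

```scala
// Against a row-level catalog such as Iceberg, Spark rewrites this MERGE INTO
// into a WriteDelta or ReplaceData plan, which the lineage parser must now
// recognize when walking the analyzed plan.
spark.sql("""
  MERGE INTO iceberg_cat.db.target t
  USING iceberg_cat.db.source s
  ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET t.value = s.value
  WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value)
""")
```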

### How was this patch tested?

add new tests for row-level catalog

### Was this patch authored or co-authored using generative AI tooling?

no

Closes #7127 from yabola/master-listener.

Closes #7126

e39d0d93b [chenliang.lu] add for notMatchedInstructions
1530d7c6c [chenliang.lu] add notMatchedInstructions
00660f83b [chenliang.lu] update spark version
db7d9ca85 [chenliang.lu] update spark version && optimize some code
85de5b069 [chenliang.lu] remove useless logger && fix ut
79d2f9bfa [chenliang.lu] fix check style
2f5e62cdd [chenliang.lu] [KYUUBI #7126][LINEAGE] Support merge into syntax in row level catalog

Authored-by: chenliang.lu <chenlianglu@tencent.com>
Signed-off-by: wforget <643348094@qq.com>
2025-08-13 17:35:41 +08:00
wforget
4c1412bdd0 [KYUUBI #7168] Adapt PermanentViewMarker introduced by authz plugin in lineage plugin
### Why are the changes needed?

Fix the issue that the lineage plugin cannot capture the lineage of a view after integrating the authz plugin.

closes #7168

### How was this patch tested?

added unit test

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #7169 from wForget/KYUUBI-7168.

Closes #7168

42ac01639 [wforget] fix test
208550a3e [wforget] [KYUUBI-7168] Adapt PermanentViewMarker introduced by authz plugin in lineage plugin

Authored-by: wforget <643348094@qq.com>
Signed-off-by: wforget <643348094@qq.com>
2025-08-13 16:58:44 +08:00
Cheng Pan
6eb24bc47a
[KYUUBI #7164] Bump Byte Buddy 1.17.6
### Why are the changes needed?

Support JDK 25

Full Release Notes: https://github.com/raphw/byte-buddy/releases

### How was this patch tested?

Pass GHA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7164 from pan3793/bytebuddy-1.17.6.

Closes #7164

9d4f45a8c [Cheng Pan] Revert "Bump maven shade plugin 3.6.0"
d3dc66862 [Cheng Pan] upgrade asm for maven-shade-plugin
9afe01915 [Cheng Pan] Bump maven shade plugin 3.6.0
1b8a99d71 [Cheng Pan] Bump Byte Buddy 1.17.6

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-08 01:22:29 +08:00
Wang, Fei
b31663f569 [KYUUBI #7163][SPARK] Check whether engine context stopped in engine terminating checker
### Why are the changes needed?

To close #7163, this PR checks whether the engine context has stopped in the engine terminating checker.
1. The Spark context stopped due to OOM in `spark-listener-group-shared`, which called `tryOrStopSparkContext`.

```
25/08/03 19:08:06 ERROR Utils: uncaught error in thread spark-listener-group-shared, stopping SparkContext
java.lang.OutOfMemoryError: GC overhead limit exceeded
25/08/03 19:08:06 INFO OperationAuditLogger: operation=a7f134b9-373b-402d-a82b-2d42df568807 opType=ExecuteStatement state=INITIALIZED   user=b_hrvst    session=6a90d01c-7627-4ae6-a506-7ba826355489
...
25/08/03 19:08:23 INFO SparkSQLSessionManager: Opening session for b_hrvst10.147.254.115
25/08/03 19:08:23 ERROR SparkTBinaryFrontendService: Error opening session:
org.apache.kyuubi.KyuubiSQLException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:951)
org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:337)
org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:415)
org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)
The currently active SparkContext was created at:
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:951)
org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:337)
org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:415)
org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)

    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:73)
```

2. The Kyuubi engine stopped after 12 hours.
```
25/08/04 07:13:25 ERROR ZookeeperDiscoveryClient: Zookeeper client connection state changed to: LOST, but failed to reconnect in 3 seconds. Give up retry and stop gracefully .
25/08/04 07:13:25 INFO ClientCnxn: Session establishment complete on server zeus-slc-zk-3.vip.hadoop.ebay.com/10.147.141.240:2181, sessionid = 0x3939e22c983032e, negotiated timeout = 40000
25/08/04 07:13:25 INFO ConnectionStateManager: State change: RECONNECTED
25/08/04 07:13:25 INFO ZookeeperDiscoveryClient: Zookeeper client connection state changed to: RECONNECTED
25/08/04 07:13:25 INFO SparkSQLEngine: Service: [SparkTBinaryFrontend] is stopping.
25/08/04 07:13:25 INFO SparkTBinaryFrontendService: Service: [EngineServiceDiscovery] is stopping.
25/08/04 07:13:25 WARN EngineServiceDiscovery: The Zookeeper ensemble is LOST
25/08/04 07:13:25 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is stopped.
25/08/04 07:13:25 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is stopped.
25/08/04 07:13:25 INFO SparkTBinaryFrontendService: SparkTBinaryFrontend has stopped
25/08/04 07:13:25 INFO SparkSQLEngine: Service: [SparkSQLBackendService] is stopping.
25/08/04 07:13:25 INFO SparkSQLBackendService: Service: [SparkSQLSessionManager] is stopping.
25/08/04 07:13:25 INFO SparkSQLSessionManager: Service: [SparkSQLOperationManager] is stopping.
25/08/04 07:13:45 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is stopped.
25/08/04 07:13:45 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is stopped.
```

3. It seems the shutdown hook does not work in this case:
9a0c49e791/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/SparkSQLEngine.scala (L375-L376)

4. `SparkSQLEngineListener` did not receive the `ApplicationEnd` message, maybe because the `spark-listener-group-shared` thread hit OOM? I do not have a jstack for that, so I cannot check whether the thread was alive.
9a0c49e791/externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/spark/kyuubi/SparkSQLEngineListener.scala (L55-L63)
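The change amounts to having the periodic terminating checker also treat an already-stopped SparkContext as a terminal condition, since neither the shutdown hook nor the listener fired in the scenario above. A minimal sketch (the surrounding checker logic and method name are illustrative, not the exact patch; `SparkContext.isStopped` is a real Spark API):

```scala
import org.apache.spark.SparkContext

// In the engine terminating checker: besides the existing deregistered /
// max-lifetime checks, also stop the engine when the underlying SparkContext
// has already stopped (e.g. via tryOrStopSparkContext after an OOM in a
// listener thread), so the engine does not linger as a zombie process.
def engineContextStopped(sc: SparkContext): Boolean = sc.isStopped
```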

### How was this patch tested?

Existing GA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7167 from turboFei/check_spark_stopped.

Closes #7163

835cb3dec [Wang, Fei] SparkContext
cd542decb [Wang, Fei] Revert "no hard code"
cf9e40ef6 [Wang, Fei] no hard code
ca551c23d [Wang, Fei] check engine context stopped

Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
2025-08-07 01:27:55 -07:00
Cheng Pan
2c64e4e5fc
[KYUUBI #7165] Remove banned action-shellcheck
### Why are the changes needed?

The plugin is banned by ASF.

> ludeeus/action-shellcheck@1.1.0 is not allowed to be used in apache/kyuubi.

https://github.com/apache/kyuubi/actions/runs/16745477309

### How was this patch tested?

Pass GHA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7165 from pan3793/rm-shellcheck.

Closes #7165

dfda2314c [Cheng Pan] Remove banned action-shellcheck

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-07 15:05:00 +08:00
dependabot[bot]
87f3f90120
[KYUUBI #7157] Bump form-data from 4.0.0 to 4.0.4 in /kyuubi-server/web-ui
Bumps [form-data](https://github.com/form-data/form-data) from 4.0.0 to 4.0.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/form-data/form-data/releases">form-data's releases</a>.</em></p>
<blockquote>
<h2>v4.0.1</h2>
<h3>Fixes</h3>
<ul>
<li>npmignore temporary build files (<a href="https://redirect.github.com/form-data/form-data/issues/532">#532</a>)</li>
<li>move util.isArray to Array.isArray (<a href="https://redirect.github.com/form-data/form-data/issues/564">#564</a>)</li>
</ul>
<h3>Tests</h3>
<ul>
<li>migrate from travis to GHA</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/form-data/form-data/blob/master/CHANGELOG.md">form-data's changelog</a>.</em></p>
<blockquote>
<h2><a href="https://github.com/form-data/form-data/compare/v4.0.3...v4.0.4">v4.0.4</a> - 2025-07-16</h2>
<h3>Commits</h3>
<ul>
<li>[meta] add <code>auto-changelog</code> <a href="811f68282f"><code>811f682</code></a></li>
<li>[Tests] handle predict-v8-randomness failures in node &lt; 17 and node &gt; 23 <a href="1d11a76434"><code>1d11a76</code></a></li>
<li>[Fix] Switch to using <code>crypto</code> random for boundary values <a href="3d1723080e"><code>3d17230</code></a></li>
<li>[Tests] fix linting errors <a href="5e340800b5"><code>5e34080</code></a></li>
<li>[meta] actually ensure the readme backup isn’t published <a href="316c82ba93"><code>316c82b</code></a></li>
<li>[Dev Deps] update <code>ljharb/eslint-config</code> <a href="58c25d7640"><code>58c25d7</code></a></li>
<li>[meta] fix readme capitalization <a href="2300ca1959"><code>2300ca1</code></a></li>
</ul>
<h2><a href="https://github.com/form-data/form-data/compare/v4.0.2...v4.0.3">v4.0.3</a> - 2025-06-05</h2>
<h3>Fixed</h3>
<ul>
<li>[Fix] <code>append</code>: avoid a crash on nullish values <a href="https://redirect.github.com/form-data/form-data/issues/577"><code>[#577](https://github.com/form-data/form-data/issues/577)</code></a></li>
</ul>
<h3>Commits</h3>
<ul>
<li>[eslint] use a shared config <a href="426ba9ac44"><code>426ba9a</code></a></li>
<li>[eslint] fix some spacing issues <a href="20941917f0"><code>2094191</code></a></li>
<li>[Refactor] use <code>hasown</code> <a href="81ab41b46f"><code>81ab41b</code></a></li>
<li>[Fix] validate boundary type in <code>setBoundary()</code> method <a href="8d8e469309"><code>8d8e469</code></a></li>
<li>[Tests] add tests to check the behavior of <code>getBoundary</code> with non-strings <a href="837b8a1f75"><code>837b8a1</code></a></li>
<li>[Dev Deps] remove unused deps <a href="870e4e6659"><code>870e4e6</code></a></li>
<li>[meta] remove local commit hooks <a href="e6e83ccb54"><code>e6e83cc</code></a></li>
<li>[Dev Deps] update <code>eslint</code> <a href="4066fd6f65"><code>4066fd6</code></a></li>
<li>[meta] fix scripts to use prepublishOnly <a href="c4bbb13c0e"><code>c4bbb13</code></a></li>
</ul>
<h2><a href="https://github.com/form-data/form-data/compare/v4.0.1...v4.0.2">v4.0.2</a> - 2025-02-14</h2>
<h3>Merged</h3>
<ul>
<li>[Fix] set <code>Symbol.toStringTag</code> when available <a href="https://redirect.github.com/form-data/form-data/pull/573"><code>[#573](https://github.com/form-data/form-data/issues/573)</code></a></li>
<li>[Fix] set <code>Symbol.toStringTag</code> when available <a href="https://redirect.github.com/form-data/form-data/pull/573"><code>[#573](https://github.com/form-data/form-data/issues/573)</code></a></li>
<li>fix (npmignore): ignore temporary build files <a href="https://redirect.github.com/form-data/form-data/pull/532"><code>[#532](https://github.com/form-data/form-data/issues/532)</code></a></li>
<li>fix (npmignore): ignore temporary build files <a href="https://redirect.github.com/form-data/form-data/pull/532"><code>[#532](https://github.com/form-data/form-data/issues/532)</code></a></li>
</ul>
<h3>Fixed</h3>
<ul>
<li>[Fix] set <code>Symbol.toStringTag</code> when available (<a href="https://redirect.github.com/form-data/form-data/issues/573">#573</a>) <a href="https://redirect.github.com/form-data/form-data/issues/396"><code>[#396](https://github.com/form-data/form-data/issues/396)</code></a></li>
<li>[Fix] set <code>Symbol.toStringTag</code> when available (<a href="https://redirect.github.com/form-data/form-data/issues/573">#573</a>) <a href="https://redirect.github.com/form-data/form-data/issues/396"><code>[#396](https://github.com/form-data/form-data/issues/396)</code></a></li>
<li>[Fix] set <code>Symbol.toStringTag</code> when available <a href="https://redirect.github.com/form-data/form-data/issues/396"><code>[#396](https://github.com/form-data/form-data/issues/396)</code></a></li>
</ul>
<h3>Commits</h3>
<ul>
<li>Merge tags v2.5.3 and v3.0.3 <a href="92613b9208"><code>92613b9</code></a></li>
<li>[Tests] migrate from travis to GHA <a href="806eda7774"><code>806eda7</code></a></li>
<li>[Tests] migrate from travis to GHA <a href="8fdb3bc6b5"><code>8fdb3bc</code></a></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="41996f5ac7"><code>41996f5</code></a> v4.0.4</li>
<li><a href="316c82ba93"><code>316c82b</code></a> [meta] actually ensure the readme backup isn’t published</li>
<li><a href="2300ca1959"><code>2300ca1</code></a> [meta] fix readme capitalization</li>
<li><a href="811f68282f"><code>811f682</code></a> [meta] add <code>auto-changelog</code></li>
<li><a href="5e340800b5"><code>5e34080</code></a> [Tests] fix linting errors</li>
<li><a href="1d11a76434"><code>1d11a76</code></a> [Tests] handle predict-v8-randomness failures in node &lt; 17 and node &gt; 23</li>
<li><a href="58c25d7640"><code>58c25d7</code></a> [Dev Deps] update <code>ljharb/eslint-config</code></li>
<li><a href="3d1723080e"><code>3d17230</code></a> [Fix] Switch to using <code>crypto</code> random for boundary values</li>
<li><a href="d8d67dc8ac"><code>d8d67dc</code></a> v4.0.3</li>
<li><a href="e6e83ccb54"><code>e6e83cc</code></a> [meta] remove local commit hooks</li>
<li>Additional commits viewable in <a href="https://github.com/form-data/form-data/compare/v4.0.0...v4.0.4">compare view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a href="https://www.npmjs.com/~ljharb">ljharb</a>, a new releaser for form-data since your current version.</p>
</details>
<br />

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=form-data&package-manager=npm_and_yarn&previous-version=4.0.0&new-version=4.0.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `dependabot rebase` will rebase this PR
- `dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `dependabot merge` will merge this PR after your CI passes on it
- `dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `dependabot cancel merge` will cancel a previously requested merge and block automerging
- `dependabot reopen` will reopen this PR if it is closed
- `dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/apache/kyuubi/network/alerts).

</details>

Closes #7157 from dependabot[bot]/dependabot/npm_and_yarn/kyuubi-server/web-ui/form-data-4.0.4.

Closes #7157

4d754d973 [dependabot[bot]] Bump form-data from 4.0.0 to 4.0.4 in /kyuubi-server/web-ui

Authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-07 15:04:00 +08:00
Cheng Pan
dc9c75b8e6
[KYUUBI #7162] Bump kafka-clients 3.9.1
### Why are the changes needed?

https://kafka.apache.org/cve-list.html

### How was this patch tested?

Pass GHA.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7162 from pan3793/kafka-3.9.1.

Closes #7162

108e5690b [Cheng Pan] Bump kafka-clients 3.9.1

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-05 16:58:59 +08:00
liangzhaoyuan
9a0c49e791
[KYUUBI #7138] Respect kyuubi.session.engine.spark.initialize.sql set by client in shared engine mode
### Why are the changes needed?
<img width="1860" height="908" alt="image" src="https://github.com/user-attachments/assets/ec445237-be62-405f-992e-56e10156407f" />
**Current Behavior:**

When "kyuubi.engine.share.level = USER/GROUP/SERVER", the first client (Client A) calling openSession creates a Kyuubi-Spark-SQL-Engine (Spark Driver), where the initialization SQL configured in "kyuubi.session.engine.spark.initialize.sql" takes effect.

Subsequent clients (e.g., Client B) connecting via openSession will reuse the existing Kyuubi-Spark-SQL-Engine (Spark Driver) created in step 1, where the initialization SQL configured in "kyuubi.session.engine.spark.initialize.sql" becomes ineffective.

**Why This Capability Is Needed:**

Currently, kyuubi.session.engine.spark.initialize.sql only applies to the first openSession client. All subsequent SQL operations inherit the initialization SQL configuration from the first client (this appears to be a potential bug).

Client A may need to set "USE dbA" in its current SQL context, while Client B may need "USE dbB" in its own context - such scenarios should be supported.
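With this change, each client can carry its own initialize SQL as a session configuration at connect time, even when both sessions land on the same shared engine. The JDBC URL shape below is illustrative — host, port, and credentials are placeholders, and the exact session-conf syntax should be checked against the Kyuubi documentation:

```scala
import java.sql.DriverManager

// Client A initializes its session in dbA; Client B could connect with
// "USE dbB" instead, and each session gets its own initialize SQL even
// under the USER/GROUP/SERVER share levels.
val url =
  "jdbc:hive2://kyuubi-host:10009/;" +
    "#kyuubi.session.engine.spark.initialize.sql=USE dbA"
val conn = DriverManager.getConnection(url, "clientA", "")
```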

### How was this patch tested?
Tested on a local Kyuubi/Spark cluster. No existing unit tests cover this scenario; please point me to any relevant tests so I can add them.

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #7138 from 1358035421/lc/spark_session_init_sql.

Closes #7138

338d8aace [Cheng Pan] remove dash
1beecc456 [Cheng Pan] fix
6c7f9a13e [liangzhaoyuan] update migration-guide.md
492adb6c4 [liangzhaoyuan] fix review comments
f0e9320be [1358035421] Merge branch 'master' into lc/spark_session_init_sql
021455322 [liangzhaoyuan] update migration-guide.md
b4e61cf89 [liangzhaoyuan] ut
ca4c71253 [Cheng Pan] Update externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/session/SparkSQLSessionManager.scala
da92544f1 [liangzhaoyuan] fix
c1a38d584 [liangzhaoyuan] Support executing kyuubi.session.engine.spark.initialize.sql on session initialization

Lead-authored-by: liangzhaoyuan <lwlzyl19940916@gmail.com>
Co-authored-by: Cheng Pan <chengpan@apache.org>
Co-authored-by: 1358035421 <13588035421@163.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-04 14:46:05 +08:00
liangzhaoyuan
9a50bfa814
[KYUUBI #7158] Spark engine respects session-level idle timeout threshold
### Why are the changes needed?
Fixes the same class of issue as
https://github.com/apache/kyuubi/pull/7138

Previously, `sessionIdleTimeoutThreshold` was initialized only once during session creation using `sessionManager.getConf`, preventing dynamic updates when clients pass new configurations during connection.

We now:
- Allow clients to set a session-specific `kyuubi.session.idle.timeout` during connection
- Dynamically adjust the idle timeout per session
- Prevent connection pile-up by recycling idle sessions in a timely manner
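The change can be summarized as resolving the threshold from the session's own configuration, falling back to the server-side value only when the client did not set one. This is a sketch under assumed names — the actual field lives in Kyuubi's session classes:

```scala
// Per-session resolution: prefer the config the client sent at openSession,
// fall back to the server-side default (previously the only value used,
// read once from sessionManager.getConf at session creation).
def idleTimeoutMs(sessionConf: Map[String, String], serverDefault: Long): Long =
  sessionConf.get("kyuubi.session.idle.timeout")
    .map(_.toLong)
    .getOrElse(serverDefault)
```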

Closes #7158 from 1358035421/lc/sessio_idle_timeout_threshold.

Closes #7158

abe513eed [liangzhaoyuan] fix review comments
3face844a [liangzhaoyuan] Use per-session idle timeout threshold instead of global sessionManager's value

Authored-by: liangzhaoyuan <lwlzyl19940916@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-08-04 11:28:10 +08:00
17 changed files with 860 additions and 129 deletions

View File

@ -132,9 +132,3 @@ jobs:
with:
name: super-linter-log
path: super-linter.log
- name: check bin directory
uses: ludeeus/action-shellcheck@1.1.0
with:
# TODO: enable for all folders step by step
scandir: './bin'
severity: error

View File

@ -97,7 +97,7 @@ jetty-util-ajax/9.4.57.v20241219//jetty-util-ajax-9.4.57.v20241219.jar
jetty-util/9.4.57.v20241219//jetty-util-9.4.57.v20241219.jar
jline/2.14.6//jline-2.14.6.jar
jul-to-slf4j/1.7.36//jul-to-slf4j-1.7.36.jar
kafka-clients/3.5.2//kafka-clients-3.5.2.jar
kafka-clients/3.9.1//kafka-clients-3.9.1.jar
kubernetes-client-api/6.13.5//kubernetes-client-api-6.13.5.jar
kubernetes-client/6.13.5//kubernetes-client-6.13.5.jar
kubernetes-httpclient-okhttp/6.13.5//kubernetes-httpclient-okhttp-6.13.5.jar

View File

@ -23,6 +23,8 @@
* Since Kyuubi 1.11, if the engine is running in cluster mode, Kyuubi will respect the `kyuubi.session.engine.startup.waitCompletion` config to determine whether to wait for the engine completion or not. If the engine is running in client mode, Kyuubi will always wait for the engine completion. And for Spark engine, Kyuubi will append the `spark.yarn.submit.waitAppCompletion` and `spark.kubernetes.submission.waitAppCompletion` configs to the engine conf based on the value of `kyuubi.session.engine.startup.waitCompletion`.
* Since Kyuubi 1.11, the configuration `kyuubi.session.engine.spark.initialize.sql` set by the client (via session configuration) is now correctly applied to every session in shared engines (USER, GROUP, SERVER). Previously, only the value set on the server side was applied and only for the first session when the engine started. Now, session-level settings provided by each client are respected.
## Upgrading from Kyuubi 1.9 to 1.10
* Since Kyuubi 1.10, `beeline` is deprecated and will be removed in the future, please use `kyuubi-beeline` instead.

View File

@ -391,6 +391,12 @@
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kyuubi</groupId>
<artifactId>kyuubi-spark-lineage_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>

View File

@ -37,6 +37,8 @@ import org.scalatest.BeforeAndAfterAll
import org.scalatest.funsuite.AnyFunSuite
import org.apache.kyuubi.Utils
import org.apache.kyuubi.plugin.lineage.Lineage
import org.apache.kyuubi.plugin.lineage.helper.SparkSQLLineageParseHelper
import org.apache.kyuubi.plugin.spark.authz.{AccessControlException, SparkSessionProvider}
import org.apache.kyuubi.plugin.spark.authz.MysqlContainerEnv
import org.apache.kyuubi.plugin.spark.authz.RangerTestNamespace._
@ -1513,4 +1515,31 @@ class HiveCatalogRangerSparkExtensionSuite extends RangerSparkExtensionSuite {
}
}
}
test("Test view lineage") {
def extractLineage(sql: String): Lineage = {
val parsed = spark.sessionState.sqlParser.parsePlan(sql)
val qe = spark.sessionState.executePlan(parsed)
val analyzed = qe.analyzed
SparkSQLLineageParseHelper(spark).transformToLineage(0, analyzed).get
}
val db1 = defaultDb
val table1 = "table1"
val view1 = "view1"
withSingleCallEnabled {
withCleanTmpResources(Seq((s"$db1.$table1", "table"), (s"$db1.$view1", "view"))) {
doAs(admin, sql(s"CREATE TABLE IF NOT EXISTS $db1.$table1 (id int, scope int)"))
doAs(admin, sql(s"CREATE VIEW $db1.$view1 AS SELECT * FROM $db1.$table1"))
val lineage = doAs(
admin,
extractLineage(s"SELECT id FROM $db1.$view1 WHERE id > 1"))
assert(lineage.inputTables.size == 1)
assert(lineage.inputTables.head === s"spark_catalog.$db1.$table1")
assert(lineage.columnLineage.size == 1)
assert(lineage.columnLineage.head.originalColumns.head === s"spark_catalog.$db1.$table1.id")
}
}
}
}

View File

@ -26,7 +26,7 @@
## Build
```shell
build/mvn clean package -DskipTests -pl :kyuubi-spark-lineage_2.12 -am -Dspark.version=3.2.1
build/mvn clean package -DskipTests -pl :kyuubi-spark-lineage_2.12 -am -Dspark.version=3.5.1
```
### Supported Apache Spark Versions
@ -37,6 +37,4 @@ build/mvn clean package -DskipTests -pl :kyuubi-spark-lineage_2.12 -am -Dspark.v
- [x] 3.5.x (default)
- [x] 3.4.x
- [x] 3.3.x
- [x] 3.2.x
- [x] 3.1.x

View File

@ -46,6 +46,9 @@ trait LineageParser {
val SUBQUERY_COLUMN_IDENTIFIER = "__subquery__"
val AGGREGATE_COUNT_COLUMN_IDENTIFIER = "__count__"
val LOCAL_TABLE_IDENTIFIER = "__local__"
val METADATA_COL_ATTR_KEY = "__metadata_col"
val ORIGINAL_ROW_ID_VALUE_PREFIX: String = "__original_row_id_"
val OPERATION_COLUMN: String = "__row_operation"
type AttributeMap[A] = ListMap[Attribute, A]
@ -307,7 +310,37 @@ trait LineageParser {
extractColumnsLineage(getQuery(plan), parentColumnsLineage).map { case (k, v) =>
k.withName(s"$table.${k.name}") -> v
}
case p if p.nodeName == "MergeRows" =>
val instructionsOutputs =
getField[Seq[Expression]](p, "matchedInstructions")
.map(extractInstructionOutputs) ++
getField[Seq[Expression]](p, "notMatchedInstructions")
.map(extractInstructionOutputs) ++
getField[Seq[Expression]](p, "notMatchedBySourceInstructions")
.map(extractInstructionOutputs)
val nextColumnsLineage = ListMap(p.output.indices.map { index =>
val keyAttr = p.output(index)
val instructionOutputs = instructionsOutputs.map(_(index))
(keyAttr, instructionOutputs)
}.collect {
case (keyAttr: Attribute, instructionsOutput)
if instructionsOutput
.exists(_.references.nonEmpty) =>
val attributeSet = AttributeSet.apply(instructionsOutput)
keyAttr -> attributeSet
}: _*)
p.children.map(
extractColumnsLineage(_, nextColumnsLineage)).reduce(mergeColumnsLineage)
case p if p.nodeName == "WriteDelta" || p.nodeName == "ReplaceData" =>
val table = getV2TableName(getField[NamedRelation](plan, "table"))
val query = getQuery(plan)
val columnsLineage = extractColumnsLineage(query, parentColumnsLineage)
columnsLineage
.filter { case (k, _) => !isMetadataAttr(k) }
.map { case (k, v) =>
k.withName(s"$table.${k.name}") -> v
}
case p if p.nodeName == "MergeIntoTable" =>
val matchedActions = getField[Seq[MergeAction]](plan, "matchedActions")
val notMatchedActions = getField[Seq[MergeAction]](plan, "notMatchedActions")
@ -448,6 +481,13 @@ trait LineageParser {
})
}
// PermanentViewMarker is introduced by kyuubi authz plugin, which is a wrapper of View,
// so we just extract the columns lineage from its inner children (original view)
case pvm if pvm.nodeName == "PermanentViewMarker" =>
pvm.innerChildren.asInstanceOf[Seq[LogicalPlan]]
.map(extractColumnsLineage(_, parentColumnsLineage))
.reduce(mergeColumnsLineage)
case p: View =>
if (!p.isTempView && SparkContextHelper.getConf(
LineageConf.SKIP_PARSING_PERMANENT_VIEW_ENABLED)) {
@ -507,6 +547,19 @@ trait LineageParser {
case _ => qualifiedName
}
}
private def isMetadataAttr(attr: Attribute): Boolean = {
attr.metadata.contains(METADATA_COL_ATTR_KEY) ||
attr.name.startsWith(ORIGINAL_ROW_ID_VALUE_PREFIX) ||
attr.name.startsWith(OPERATION_COLUMN)
}
private def extractInstructionOutputs(instruction: Expression): Seq[Expression] = {
instruction match {
case p if p.nodeName == "Split" => getField[Seq[Expression]](p, "otherOutput")
case p => getField[Seq[Expression]](p, "output")
}
}
}
case class SparkSQLLineageParseHelper(sparkSession: SparkSession) extends LineageParser


@ -0,0 +1,221 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kyuubi.plugin.lineage.helper
import org.apache.kyuubi.plugin.lineage.Lineage
import org.apache.kyuubi.plugin.lineage.helper.SparkListenerHelper.SPARK_RUNTIME_VERSION
class RowLevelCatalogLineageParserSuite extends SparkSQLLineageParserHelperSuite {
override def catalogName: String = {
"org.apache.spark.sql.connector.catalog.InMemoryRowLevelOperationTableCatalog"
}
test("columns lineage extract - WriteDelta") {
assume(
SPARK_RUNTIME_VERSION >= "3.5",
"WriteDelta is only supported in SPARK_RUNTIME_VERSION >= 3.5")
val ddls =
"""
|create table v2_catalog.db.target_t(pk int not null, name string, price float)
| TBLPROPERTIES ('supports-deltas'='true');
|create table v2_catalog.db.source_t(pk int not null, name string, price float)
| TBLPROPERTIES ('supports-deltas'='true');
|create table v2_catalog.db.pivot_t(pk int not null, price float)
| TBLPROPERTIES ('supports-deltas'='true')
|""".stripMargin
ddls.split(";").filter(_.nonEmpty).foreach(spark.sql(_).collect())
withTable("v2_catalog.db.target_t", "v2_catalog.db.source_t", "v2_catalog.db.pivot_t") { _ =>
val ret0 = extractLineageWithoutExecuting(
"MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.pk = source.pk " +
"WHEN MATCHED THEN " +
" UPDATE SET target.name = source.name, target.price = source.price " +
"WHEN NOT MATCHED THEN " +
" INSERT (pk, name, price) VALUES (cast(source.pk as int), source.name, source.price)" +
"WHEN NOT MATCHED BY SOURCE THEN UPDATE SET target.name = 'abc' ")
assert(ret0 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.target_t"),
List("v2_catalog.db.target_t"),
List(
(
"v2_catalog.db.target_t.pk",
Set("v2_catalog.db.source_t.pk", "v2_catalog.db.target_t.pk")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
(
"v2_catalog.db.target_t.price",
Set("v2_catalog.db.source_t.price", "v2_catalog.db.target_t.price")))))
val ret1 = extractLineageWithoutExecuting(
"MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.pk = source.pk " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret1 == Lineage(
List("v2_catalog.db.source_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.pk", Set("v2_catalog.db.source_t.pk")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.source_t.price")))))
val ret2 = extractLineageWithoutExecuting(
"MERGE INTO v2_catalog.db.target_t AS target " +
"USING (select a.pk, a.name, b.price " +
"from v2_catalog.db.source_t a join " +
"v2_catalog.db.pivot_t b) AS source " +
"ON target.pk = source.pk " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret2 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.pivot_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.pk", Set("v2_catalog.db.source_t.pk")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.pivot_t.price")))))
val ret3 = extractLineageWithoutExecuting(
"update v2_catalog.db.target_t AS set name='abc' where price < 10 ")
assert(ret3 == Lineage(
List("v2_catalog.db.target_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.pk", Set("v2_catalog.db.target_t.pk")),
("v2_catalog.db.target_t.name", Set()),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.target_t.price")))))
}
}
test("columns lineage extract - ReplaceData") {
assume(
SPARK_RUNTIME_VERSION >= "3.5",
"ReplaceData[SPARK-43963] for merge into is supported in SPARK_RUNTIME_VERSION >= 3.5")
val ddls =
"""
|create table v2_catalog.db.target_t(id int, name string, price float)
|create table v2_catalog.db.source_t(id int, name string, price float)
|create table v2_catalog.db.pivot_t(id int, price float)
|""".stripMargin
ddls.split("\n").filter(_.nonEmpty).foreach(spark.sql(_).collect())
withTable("v2_catalog.db.target_t", "v2_catalog.db.source_t", "v2_catalog.db.pivot_t") { _ =>
val ret0 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET target.name = source.name, target.price = source.price " +
"WHEN NOT MATCHED THEN " +
" INSERT (id, name, price) VALUES (cast(source.id as int), source.name, source.price)")
/**
* The ReplaceData operation requires that target records which are read but do not match
* any of the MATCHED or NOT MATCHED BY SOURCE clauses also be copied.
* (refer to [[RewriteMergeIntoTable#buildReplaceDataMergeRowsPlan]])
*/
assert(ret0 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.target_t"),
List("v2_catalog.db.target_t"),
List(
(
"v2_catalog.db.target_t.id",
Set("v2_catalog.db.source_t.id", "v2_catalog.db.target_t.id")),
(
"v2_catalog.db.target_t.name",
Set("v2_catalog.db.source_t.name", "v2_catalog.db.target_t.name")),
(
"v2_catalog.db.target_t.price",
Set("v2_catalog.db.source_t.price", "v2_catalog.db.target_t.price")))))
val ret1 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret1 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.target_t"),
List("v2_catalog.db.target_t"),
List(
(
"v2_catalog.db.target_t.id",
Set("v2_catalog.db.source_t.id", "v2_catalog.db.target_t.id")),
(
"v2_catalog.db.target_t.name",
Set("v2_catalog.db.source_t.name", "v2_catalog.db.target_t.name")),
(
"v2_catalog.db.target_t.price",
Set("v2_catalog.db.source_t.price", "v2_catalog.db.target_t.price")))))
val ret2 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING (select a.id, a.name, b.price " +
"from v2_catalog.db.source_t a join v2_catalog.db.pivot_t b) AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret2 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.target_t", "v2_catalog.db.pivot_t"),
List("v2_catalog.db.target_t"),
List(
(
"v2_catalog.db.target_t.id",
Set("v2_catalog.db.source_t.id", "v2_catalog.db.target_t.id")),
(
"v2_catalog.db.target_t.name",
Set("v2_catalog.db.source_t.name", "v2_catalog.db.target_t.name")),
(
"v2_catalog.db.target_t.price",
Set("v2_catalog.db.pivot_t.price", "v2_catalog.db.target_t.price")))))
val ret3 = extractLineageWithoutExecuting(
"update v2_catalog.db.target_t AS set name='abc' where price < 10 ")
// For tables that do not support row-level deletion,
// duplicate data of the same group may be included when writing.
// plan is:
// ReplaceData
// +- Project [if ((price#1160 < cast(10 as float))) id#1158 else id#1158 AS id#1163,
// if ((price#1160 < cast(10 as float))) abc else name#1159 AS name#1164,
// if ((price#1160 < cast(10 as float))) price#1160 else price#1160 AS price#1165,
// _partition#1162]
// +- RelationV2[id#1158, name#1159, price#1160, _partition#1162]
// v2_catalog.db.target_t v2_catalog.db.target_t
assert(ret3 == Lineage(
List("v2_catalog.db.target_t"),
List("v2_catalog.db.target_t"),
List(
(
"v2_catalog.db.target_t.id",
Set("v2_catalog.db.target_t.price", "v2_catalog.db.target_t.id")),
(
"v2_catalog.db.target_t.name",
Set("v2_catalog.db.target_t.price", "v2_catalog.db.target_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.target_t.price")))))
}
}
}


@ -31,12 +31,13 @@ import org.apache.kyuubi.KyuubiFunSuite
import org.apache.kyuubi.plugin.lineage.Lineage
import org.apache.kyuubi.plugin.lineage.helper.SparkListenerHelper.SPARK_RUNTIME_VERSION
class SparkSQLLineageParserHelperSuite extends KyuubiFunSuite
abstract class SparkSQLLineageParserHelperSuite extends KyuubiFunSuite
with SparkListenerExtensionTest {
val catalogName =
if (SPARK_RUNTIME_VERSION <= "3.1") "org.apache.spark.sql.connector.InMemoryTableCatalog"
def catalogName: String = {
if (SPARK_RUNTIME_VERSION <= "3.3") "org.apache.spark.sql.connector.InMemoryTableCatalog"
else "org.apache.spark.sql.connector.catalog.InMemoryTableCatalog"
}
val DEFAULT_CATALOG = LineageConf.DEFAULT_CATALOG
override protected val catalogImpl: String = "hive"
@ -169,65 +170,6 @@ class SparkSQLLineageParserHelperSuite extends KyuubiFunSuite
}
}
test("columns lineage extract - MergeIntoTable") {
val ddls =
"""
|create table v2_catalog.db.target_t(id int, name string, price float)
|create table v2_catalog.db.source_t(id int, name string, price float)
|create table v2_catalog.db.pivot_t(id int, price float)
|""".stripMargin
ddls.split("\n").filter(_.nonEmpty).foreach(spark.sql(_).collect())
withTable("v2_catalog.db.target_t", "v2_catalog.db.source_t") { _ =>
val ret0 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET target.name = source.name, target.price = source.price " +
"WHEN NOT MATCHED THEN " +
" INSERT (id, name, price) VALUES (cast(source.id as int), source.name, source.price)")
assert(ret0 == Lineage(
List("v2_catalog.db.source_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.source_t.price")))))
val ret1 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret1 == Lineage(
List("v2_catalog.db.source_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.source_t.price")))))
val ret2 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING (select a.id, a.name, b.price " +
"from v2_catalog.db.source_t a join v2_catalog.db.pivot_t b) AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret2 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.pivot_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.pivot_t.price")))))
}
}
test("columns lineage extract - CreateViewCommand") {
withView("createviewcommand", "createviewcommand1", "createviewcommand2") { _ =>
val ret0 = extractLineage(
@ -1451,32 +1393,36 @@ class SparkSQLLineageParserHelperSuite extends KyuubiFunSuite
test("test directory to table") {
val inputFile = getClass.getResource("/").getPath + "input_file"
val sourceFile = File(inputFile).createFile()
spark.sql(
s"""
|CREATE OR REPLACE TEMPORARY VIEW temp_view (
| `a` STRING COMMENT '',
| `b` STRING COMMENT ''
|) USING csv OPTIONS(
| sep='\t',
| path='${sourceFile.path}'
|);
|""".stripMargin).collect()
withView("temp_view") { _ =>
{
spark.sql(
s"""
|CREATE OR REPLACE TEMPORARY VIEW temp_view (
| `a` STRING COMMENT '',
| `b` STRING COMMENT ''
|) USING csv OPTIONS(
| sep='\t',
| path='${sourceFile.path}'
|);
|""".stripMargin).collect()
val ret0 = extractLineageWithoutExecuting(
s"""
|INSERT OVERWRITE TABLE test_db.test_table_from_dir
|SELECT `a`, `b` FROM temp_view
|""".stripMargin)
val ret0 = extractLineageWithoutExecuting(
s"""
|INSERT OVERWRITE TABLE test_db.test_table_from_dir
|SELECT `a`, `b` FROM temp_view
|""".stripMargin)
assert(ret0 == Lineage(
List(),
List(s"spark_catalog.test_db.test_table_from_dir"),
List(
(s"spark_catalog.test_db.test_table_from_dir.a0", Set()),
(s"spark_catalog.test_db.test_table_from_dir.b0", Set()))))
assert(ret0 == Lineage(
List(),
List(s"spark_catalog.test_db.test_table_from_dir"),
List(
(s"spark_catalog.test_db.test_table_from_dir.a0", Set()),
(s"spark_catalog.test_db.test_table_from_dir.b0", Set()))))
}
}
}
private def extractLineageWithoutExecuting(sql: String): Lineage = {
protected def extractLineageWithoutExecuting(sql: String): Lineage = {
val parsed = spark.sessionState.sqlParser.parsePlan(sql)
val analyzed = spark.sessionState.analyzer.execute(parsed)
spark.sessionState.analyzer.checkAnalysis(analyzed)


@ -0,0 +1,86 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kyuubi.plugin.lineage.helper
import org.apache.kyuubi.plugin.lineage.Lineage
class TableCatalogLineageParserSuite extends SparkSQLLineageParserHelperSuite {
override def catalogName: String = {
"org.apache.spark.sql.connector.catalog.InMemoryTableCatalog"
}
test("columns lineage extract - MergeIntoTable") {
val ddls =
"""
|create table v2_catalog.db.target_t(id int, name string, price float)
|create table v2_catalog.db.source_t(id int, name string, price float)
|create table v2_catalog.db.pivot_t(id int, price float)
|""".stripMargin
ddls.split("\n").filter(_.nonEmpty).foreach(spark.sql(_).collect())
withTable("v2_catalog.db.target_t", "v2_catalog.db.source_t", "v2_catalog.db.pivot_t") { _ =>
val ret0 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET target.name = source.name, target.price = source.price " +
"WHEN NOT MATCHED THEN " +
" INSERT (id, name, price) VALUES (cast(source.id as int), source.name, source.price)")
assert(ret0 == Lineage(
List("v2_catalog.db.source_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.source_t.price")))))
val ret1 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING v2_catalog.db.source_t AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret1 == Lineage(
List("v2_catalog.db.source_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.source_t.price")))))
val ret2 = extractLineageWithoutExecuting("MERGE INTO v2_catalog.db.target_t AS target " +
"USING (select a.id, a.name, b.price " +
"from v2_catalog.db.source_t a join v2_catalog.db.pivot_t b) AS source " +
"ON target.id = source.id " +
"WHEN MATCHED THEN " +
" UPDATE SET * " +
"WHEN NOT MATCHED THEN " +
" INSERT *")
assert(ret2 == Lineage(
List("v2_catalog.db.source_t", "v2_catalog.db.pivot_t"),
List("v2_catalog.db.target_t"),
List(
("v2_catalog.db.target_t.id", Set("v2_catalog.db.source_t.id")),
("v2_catalog.db.target_t.name", Set("v2_catalog.db.source_t.name")),
("v2_catalog.db.target_t.price", Set("v2_catalog.db.pivot_t.price")))))
}
}
}


@ -105,7 +105,7 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
userIsolatedSparkSessionThread.foreach(_.shutdown())
}
private def getOrNewSparkSession(user: String): SparkSession = {
private def getOrNewSparkSession(user: String, sessionConf: Map[String, String]): SparkSession = {
if (singleSparkSession) {
spark
} else {
@ -113,8 +113,8 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
// it's unnecessary to create a new spark session in connection share level
// since there is only one session
case CONNECTION => spark
case USER => newSparkSession(spark)
case GROUP | SERVER if userIsolatedSparkSession => newSparkSession(spark)
case USER => newSparkSession(spark, sessionConf)
case GROUP | SERVER if userIsolatedSparkSession => newSparkSession(spark, sessionConf)
case GROUP | SERVER =>
userIsolatedCacheLock.synchronized {
if (userIsolatedCache.containsKey(user)) {
@ -123,7 +123,7 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
userIsolatedCache.get(user)
} else {
userIsolatedCacheCount.put(user, (1, System.currentTimeMillis()))
val newSession = newSparkSession(spark)
val newSession = newSparkSession(spark, sessionConf)
userIsolatedCache.put(user, newSession)
newSession
}
@ -132,11 +132,16 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
}
}
private def newSparkSession(rootSparkSession: SparkSession): SparkSession = {
private def newSparkSession(
rootSparkSession: SparkSession,
sessionConf: Map[String, String]): SparkSession = {
val newSparkSession = rootSparkSession.newSession()
KyuubiSparkUtil.initializeSparkSession(
newSparkSession,
conf.get(ENGINE_SESSION_SPARK_INITIALIZE_SQL))
sessionConf.get(ENGINE_SESSION_SPARK_INITIALIZE_SQL.key)
.filter(_.nonEmpty)
.map(_.split(";").toSeq)
.getOrElse(conf.get(ENGINE_SESSION_SPARK_INITIALIZE_SQL)))
newSparkSession
}
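The change above lets a per-session conf override the engine-level initialize SQL. A standalone sketch of that resolution order (the key literal here is illustrative, not the actual Kyuubi constant):

```scala
// Session-level initialize SQL wins over the engine default; the session
// value is a single string split on ';', mirroring newSparkSession above.
object InitSqlResolver {
  val InitSqlKey = "spark.session.initialize.sql" // hypothetical key name

  def resolve(sessionConf: Map[String, String], engineDefault: Seq[String]): Seq[String] =
    sessionConf.get(InitSqlKey)
      .filter(_.nonEmpty)
      .map(_.split(";").toSeq)
      .getOrElse(engineDefault)
}
```

An empty or absent session value falls back to the engine default, so sessions that do not set the key behave exactly as before.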
@ -150,7 +155,7 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
getSessionOption).getOrElse {
val sparkSession =
try {
getOrNewSparkSession(user)
getOrNewSparkSession(user, conf)
} catch {
case e: Exception => throw KyuubiSQLException(e)
}
@ -222,4 +227,8 @@ class SparkSQLSessionManager private (name: String, spark: SparkSession)
opHandle: OperationHandle): Path = {
new Path(getSessionResultSavePath(sessionHandle), opHandle.identifier.toString)
}
override private[kyuubi] def isEngineContextStopped = {
spark.sparkContext.isStopped
}
}


@ -24,6 +24,7 @@ import org.apache.spark.sql.{AnalysisException, SparkSession}
import org.apache.spark.ui.SparkUIUtils.formatDuration
import org.apache.kyuubi.KyuubiSQLException
import org.apache.kyuubi.config.KyuubiConf.SESSION_IDLE_TIMEOUT
import org.apache.kyuubi.config.KyuubiReservedKeys.KYUUBI_SESSION_HANDLE_KEY
import org.apache.kyuubi.engine.spark.events.SessionEvent
import org.apache.kyuubi.engine.spark.operation.SparkSQLOperationManager
@ -57,6 +58,13 @@ class SparkSessionImpl(
}
}
override val sessionIdleTimeoutThreshold: Long = {
conf.get(SESSION_IDLE_TIMEOUT.key)
.map(_.toLong)
.getOrElse(
sessionManager.getConf.get(SESSION_IDLE_TIMEOUT))
}
private val sessionEvent = SessionEvent(this)
override def open(): Unit = {


@ -0,0 +1,66 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kyuubi.engine.spark.session
import scala.jdk.CollectionConverters._
import org.apache.kyuubi.config.KyuubiConf._
import org.apache.kyuubi.engine.spark.WithSparkSQLEngine
import org.apache.kyuubi.operation.HiveJDBCTestHelper
import org.apache.kyuubi.shaded.hive.service.rpc.thrift.{TExecuteStatementReq, TFetchResultsReq, TOpenSessionReq}
class MultiSessionSuiteInitSQLSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
override def withKyuubiConf: Map[String, String] = {
Map(
ENGINE_SHARE_LEVEL.key -> "SERVER",
ENGINE_SINGLE_SPARK_SESSION.key -> "false")
}
override protected def jdbcUrl: String =
s"jdbc:hive2://${engine.frontendServices.head.connectionUrl}/;#spark.ui.enabled=false"
test("isolated user spark session") {
Seq("abc", "xyz").foreach { value =>
withThriftClient(Some(user)) { client =>
val req = new TOpenSessionReq()
req.setUsername("user")
req.setPassword("anonymous")
req.setConfiguration(Map(
ENGINE_SHARE_LEVEL.key -> "SERVER",
ENGINE_SINGLE_SPARK_SESSION.key -> "false",
ENGINE_SESSION_SPARK_INITIALIZE_SQL.key -> s"SET varA=$value").asJava)
val tOpenSessionResp = client.OpenSession(req)
val tExecuteStatementReq = new TExecuteStatementReq()
tExecuteStatementReq.setSessionHandle(tOpenSessionResp.getSessionHandle)
tExecuteStatementReq.setStatement("SELECT '${varA}'")
tExecuteStatementReq.setRunAsync(false)
val tExecuteStatementResp = client.ExecuteStatement(tExecuteStatementReq)
val operationHandle = tExecuteStatementResp.getOperationHandle
val tFetchResultsReq = new TFetchResultsReq()
tFetchResultsReq.setOperationHandle(operationHandle)
tFetchResultsReq.setFetchType(0)
tFetchResultsReq.setMaxRows(1)
val tFetchResultsResp = client.FetchResults(tFetchResultsReq)
val ret = tFetchResultsResp.getResults.getColumns.get(0).getStringVal.getValues.get(0)
assert(ret === value)
}
}
}
}


@ -345,10 +345,15 @@ abstract class SessionManager(name: String) extends CompositeService(name) {
if (idleTimeout > 0) {
val checkTask = new Runnable {
override def run(): Unit = {
if (!shutdown && System.currentTimeMillis() - latestLogoutTime > idleTimeout &&
getActiveUserSessionCount <= 0) {
info(s"Idled for more than $idleTimeout ms, terminating")
stop()
if (!shutdown) {
if (System.currentTimeMillis() - latestLogoutTime > idleTimeout &&
getActiveUserSessionCount <= 0) {
info(s"Idled for more than $idleTimeout ms, terminating")
stop()
} else if (isEngineContextStopped) {
error(s"Engine's SparkContext is stopped, terminating")
stop()
}
}
}
}
@ -360,4 +365,6 @@ abstract class SessionManager(name: String) extends CompositeService(name) {
TimeUnit.MILLISECONDS)
}
}
private[kyuubi] def isEngineContextStopped: Boolean = false
}
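The widened check above reduces to a pure predicate: the watchdog stops the engine either on idle timeout with no active sessions, or as soon as the engine context is gone, and never while already shutting down. A sketch:

```scala
// Decision logic of the timeout checker, with isEngineContextStopped folded
// in: either branch triggers stop(), but only while not shutting down.
object IdleCheckSketch {
  def shouldStop(
      shutdown: Boolean,
      idleMs: Long,
      idleTimeoutMs: Long,
      activeSessions: Int,
      engineContextStopped: Boolean): Boolean =
    !shutdown &&
      ((idleMs > idleTimeoutMs && activeSessions <= 0) || engineContextStopped)
}
```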


@ -19,6 +19,7 @@ package org.apache.kyuubi.ha.client.zookeeper
import java.io.{File, IOException}
import java.nio.charset.StandardCharsets
import java.util.concurrent.ConcurrentHashMap
import javax.security.auth.login.Configuration
import scala.util.Random
@ -38,6 +39,13 @@ import org.apache.kyuubi.util.reflect.DynConstructors
object ZookeeperClientProvider extends Logging {
/**
* Share the JAAS configuration for ZooKeeper clients with the same keytab and principal to
* avoid server OOM: each new JAAS configuration holds a reference to the previous instance.
* See KYUUBI #7154 for more details.
*/
val jaasConfigurationCache = new ConcurrentHashMap[(String, String), Configuration]()
/**
* Create a [[CuratorFramework]] instance to be used as the ZooKeeper client
* Use the [[ZookeeperACLProvider]] to create appropriate ACLs
@ -113,22 +121,26 @@ object ZookeeperClientProvider extends Logging {
System.setProperty("zookeeper.server.principal", zkServerPrincipal)
}
val zkClientPrincipal = KyuubiHadoopUtils.getServerPrincipal(principal)
// HDFS-16591 makes breaking change on JaasConfiguration
val jaasConf = DynConstructors.builder()
.impl( // Hadoop 3.3.5 and above
"org.apache.hadoop.security.authentication.util.JaasConfiguration",
classOf[String],
classOf[String],
classOf[String])
.impl( // Hadoop 3.3.4 and previous
// scalastyle:off
"org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager$JaasConfiguration",
// scalastyle:on
classOf[String],
classOf[String],
classOf[String])
.build[Configuration]()
.newInstance("KyuubiZooKeeperClient", zkClientPrincipal, keytab)
val jaasConf = jaasConfigurationCache.computeIfAbsent(
(principal, keytab),
_ => {
// HDFS-16591 makes breaking change on JaasConfiguration
DynConstructors.builder()
.impl( // Hadoop 3.3.5 and above
"org.apache.hadoop.security.authentication.util.JaasConfiguration",
classOf[String],
classOf[String],
classOf[String])
.impl( // Hadoop 3.3.4 and previous
// scalastyle:off
"org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager$JaasConfiguration",
// scalastyle:on
classOf[String],
classOf[String],
classOf[String])
.build[Configuration]()
.newInstance("KyuubiZooKeeperClient", zkClientPrincipal, keytab)
})
Configuration.setConfiguration(jaasConf)
case _ =>
}
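The fix above hinges on `computeIfAbsent`: one JAAS `Configuration` per `(principal, keytab)` pair, so repeated client creation reuses a cached instance instead of building a new one whose chain of fallback references grows without bound. The pattern in isolation (a counter string stands in for the real `Configuration`):

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// Cache keyed by (principal, keytab): the builder runs at most once per key,
// and later lookups return the same cached value.
object JaasCacheSketch {
  private val cache = new ConcurrentHashMap[(String, String), String]()
  private val builds = new AtomicInteger(0)

  def getOrBuild(principal: String, keytab: String): String =
    cache.computeIfAbsent((principal, keytab), _ => s"conf-${builds.incrementAndGet()}")

  def buildCount: Int = builds.get()
}
```

`computeIfAbsent` also guarantees the mapping function runs atomically per key, so two concurrent client creations for the same principal/keytab cannot both build a configuration.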


@ -1598,6 +1598,18 @@
"node": ">=8"
}
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/callsites": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
@ -1948,6 +1960,19 @@
"node": ">=12"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/element-plus": {
"version": "2.2.13",
"resolved": "https://registry.npmjs.org/element-plus/-/element-plus-2.2.13.tgz",
@ -1985,6 +2010,47 @@
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/esbuild": {
"version": "0.18.20",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.18.20.tgz",
@ -2537,12 +2603,14 @@
}
},
"node_modules/form-data": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
"integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
@ -2569,6 +2637,14 @@
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/functional-red-black-tree": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/functional-red-black-tree/-/functional-red-black-tree-1.0.1.tgz",
@ -2584,6 +2660,41 @@
"node": "*"
}
},
"node_modules/get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"dependencies": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"dependencies": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/get-stdin": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-8.0.0.tgz",
@ -2662,6 +2773,17 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/grapheme-splitter": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/grapheme-splitter/-/grapheme-splitter-1.0.4.tgz",
@@ -2677,6 +2799,42 @@
"node": ">=8"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"dependencies": {
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/html-encoding-sniffer": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-3.0.0.tgz",
@@ -3090,6 +3248,14 @@
"semver": "bin/semver.js"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/md5-hex": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/md5-hex/-/md5-hex-3.0.1.tgz",
@@ -5667,6 +5833,15 @@
"integrity": "sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==",
"dev": true
},
"call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"requires": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
}
},
"callsites": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
@@ -5935,6 +6110,16 @@
"webidl-conversions": "^7.0.0"
}
},
"dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"requires": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
}
},
"element-plus": {
"version": "2.2.13",
"resolved": "https://registry.npmjs.org/element-plus/-/element-plus-2.2.13.tgz",
@@ -5963,6 +6148,35 @@
"integrity": "sha512-o4q/dYJlmyjP2zfnaWDUC6A3BQFmVTX+tZPezK7k0GLSU9QYCauscf5Y+qcEPzKL+EixVouYDgLQK5H9GrLpkg==",
"dev": true
},
"es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g=="
},
"es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw=="
},
"es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"requires": {
"es-errors": "^1.3.0"
}
},
"es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"requires": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
}
},
"esbuild": {
"version": "0.18.20",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.18.20.tgz",
@@ -6368,12 +6582,14 @@
"integrity": "sha512-wWN62YITEaOpSK584EZXJafH1AGpO8RVgElfkuXbTOrPX4fIfOyEpW/CsiNd8JdYrAoOvafRTOEnvsO++qCqFA=="
},
"form-data": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz",
"integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==",
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"requires": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
}
},
@@ -6390,6 +6606,11 @@
"dev": true,
"optional": true
},
"function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="
},
"functional-red-black-tree": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/functional-red-black-tree/-/functional-red-black-tree-1.0.1.tgz",
@@ -6402,6 +6623,32 @@
"integrity": "sha512-8vXOvuE167CtIc3OyItco7N/dpRtBbYOsPsXCz7X/PMnlGjYjSGuZJgM1Y7mmew7BKf9BqvLX2tnOVy1BBUsxQ==",
"dev": true
},
"get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"requires": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
}
},
"get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"requires": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
}
},
"get-stdin": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-8.0.0.tgz",
@@ -6453,6 +6700,11 @@
"slash": "^3.0.0"
}
},
"gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="
},
"grapheme-splitter": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/grapheme-splitter/-/grapheme-splitter-1.0.4.tgz",
@@ -6465,6 +6717,27 @@
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true
},
"has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="
},
"has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"requires": {
"has-symbols": "^1.0.3"
}
},
"hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"requires": {
"function-bind": "^1.1.2"
}
},
"html-encoding-sniffer": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-3.0.0.tgz",
@@ -6784,6 +7057,11 @@
}
}
},
"math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="
},
"md5-hex": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/md5-hex/-/md5-hex-3.0.1.tgz",

pom.xml

@@ -127,7 +127,7 @@
<antlr.st4.version>4.3.4</antlr.st4.version>
<apache.archive.dist>https://archive.apache.org/dist</apache.archive.dist>
<atlas.version>2.3.0</atlas.version>
<byte-buddy.version>1.14.15</byte-buddy.version>
<byte-buddy.version>1.17.6</byte-buddy.version>
<bouncycastle.version>1.78</bouncycastle.version>
<codahale.metrics.version>4.2.30</codahale.metrics.version>
<commons-cli.version>1.5.0</commons-cli.version>
@@ -172,7 +172,7 @@
<jetty.version>9.4.57.v20241219</jetty.version>
<jline.version>2.14.6</jline.version>
<junit.version>4.13.2</junit.version>
<kafka.version>3.5.2</kafka.version>
<kafka.version>3.9.1</kafka.version>
<kubernetes-client.version>6.13.5</kubernetes-client.version>
<kyuubi-relocated.version>0.6.0</kyuubi-relocated.version>
<kyuubi-relocated-zookeeper.artifacts>kyuubi-relocated-zookeeper-34</kyuubi-relocated-zookeeper.artifacts>
@@ -1572,6 +1572,22 @@
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>${maven.plugin.shade.version}</version>
<dependencies>
<!--
TODO: Remove ASM version management once upstream change released
https://github.com/apache/maven-shade-plugin/pull/744
-->
<dependency>
<groupId>org.ow2.asm</groupId>
<artifactId>asm</artifactId>
<version>9.8</version>
</dependency>
<dependency>
<groupId>org.ow2.asm</groupId>
<artifactId>asm-commons</artifactId>
<version>9.8</version>
</dependency>
</dependencies>
</plugin>
<plugin>