### Why are the changes needed?
The change is needed to be able to add additional labels to `PrometheusRule` similar to [podMonitor](523722788f/charts/kyuubi/values.yaml (L321-L330)) and [serviceMonitor](523722788f/charts/kyuubi/values.yaml (L333-L341)).
The PR also includes minor indentation fixes.
### How was this patch tested?
```shell
helm template kyuubi charts/kyuubi --set metrics.prometheusRule.enabled=true --set metrics.prometheusRule.labels.test-label=true -s templates/kyuubi-alert.yaml
---
# Source: kyuubi/templates/kyuubi-alert.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kyuubi
  labels:
    helm.sh/chart: kyuubi-0.1.0
    app.kubernetes.io/name: kyuubi
    app.kubernetes.io/instance: kyuubi
    app.kubernetes.io/version: "1.10.0"
    app.kubernetes.io/managed-by: Helm
    test-label: true
spec:
  groups:
    []
```
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7105 from dnskr/helm-prometheusRule-labels.
Closes #7105
234d99da3 [dnskr] [K8S][HELM] Support additional labels for PrometheusRule
Authored-by: dnskr <dnskrv88@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
A routine work.
### How was this patch tested?
Review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7076 from pan3793/minor.
Closes #7076
546fb5196 [Cheng Pan] Update known_translations
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Bugfix. Spark 3.5 is returning `None` for `response.results.columns`, while Spark 3.3 returned actual values.
The response here: https://github.com/apache/kyuubi/blob/master/python/pyhive/hive.py#L507
For a query that does nothing (mine was an `add jar s3://a/b/c.jar`), here are the responses I received.
Spark 3.3:
```
TFetchResultsResp(status=TStatus(statusCode=0, infoMessages=None, sqlState=None, errorCode=None, errorMessage=None), hasMoreRows=False, results=TRowSet(startRowOffset=0, rows=[], columns=[TColumn(boolVal=None, byteVal=None, i16Val=None, i32Val=None, i64Val=None, doubleVal=None, stringVal=TStringColumn(values=[], nulls=b'\x00'), binaryVal=None)], binaryColumns=None, columnCount=None))
```
Spark 3.5:
```
TFetchResultsResp(status=TStatus(statusCode=0, infoMessages=None, sqlState=None, errorCode=None, errorMessage=None), hasMoreRows=False, results=TRowSet(startRowOffset=0, rows=[], columns=None, binaryColumns=None, columnCount=None))
```
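A minimal Python sketch of the fix's idea: treat a missing `columns` field in the fetch-results response as an empty result set instead of assuming Spark always populates it. `extract_columns` and `FakeRowSet` are illustrative names, not the actual PyHive code.

```python
def extract_columns(results):
    """Return the column list from a TRowSet-like object, or [] if absent.

    Spark 3.5 may return columns=None where Spark 3.3 returned a list.
    """
    columns = getattr(results, "columns", None)
    if columns is None:
        return []
    return columns


class FakeRowSet:
    """Stand-in for the Thrift TRowSet shape used in the responses above."""

    def __init__(self, columns):
        self.columns = columns


print(extract_columns(FakeRowSet(columns=None)))    # Spark 3.5-style response
print(extract_columns(FakeRowSet(columns=["c1"])))  # Spark 3.3-style response
```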
### How was this patch tested?
I tested by applying it locally and running my query against Spark 3.5. I was not able to get any unit tests running, sorry!
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7107 from fbertsch/spark_3_5_fix.
Closes #7106
13d1440a8 [Frank Bertsch] Make response.results.columns optional
Authored-by: Frank Bertsch <fbertsch@netflix.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Support reassigning batches to an alternative Kyuubi instance in case a Kyuubi instance is lost.
https://github.com/apache/kyuubi/issues/6884
### How was this patch tested?
Unit Test
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7037 from George314159/6884.
Closes #6884
8565d4aaa [Wang, Fei] KYUUBI_SESSION_CONNECTION_URL_KEY
22d4539e2 [Wang, Fei] admin
075654cb3 [Wang, Fei] check admin
5654a99f4 [Wang, Fei] log and lock
a19e2edf5 [Wang, Fei] minor comments
a60f23ba3 [George314159] refine
760e10f89 [George314159] Update Based On Comments
75f1ee2a9 [Fei Wang] ping (#1)
f42bcaf9a [George314159] Update Based on Comments
1bea70ed6 [George314159] [KYUUBI-6884] Support to reassign the batches to alternative kyuubi instance in case kyuubi instance lost
Lead-authored-by: Wang, Fei <fwang12@ebay.com>
Co-authored-by: George314159 <hua16732@gmail.com>
Co-authored-by: Fei Wang <cn.feiwang@gmail.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
To prevent terminated app pods from leaking if events were missed during a Kyuubi server restart.
### How was this patch tested?
Manual test.
```
:2025-06-17 17:50:37.275 INFO [main] org.apache.kyuubi.engine.KubernetesApplicationOperation: [KubernetesInfo(Some(28),Some(dls-prod))] Found existing pod kyuubi-xb406fc5-7b0b-4fdf-8531-929ed2ae250d-8998-5b406fc5-7b0b-4fdf-8531-929ed2ae250d-8998-90c0b328-930f-11ed-a1eb-0242ac120002-0-20250423211008-grectg-stm-17da59fe-caf4-41e4-a12f-6c1ed9a293f9-driver with label: kyuubi-unique-tag=17da59fe-caf4-41e4-a12f-6c1ed9a293f9 in app state FINISHED, marking it as terminated
2025-06-17 17:50:37.278 INFO [main] org.apache.kyuubi.engine.KubernetesApplicationOperation: [KubernetesInfo(Some(28),Some(dls-prod))] Found existing pod kyuubi-xb406fc5-7b0b-4fdf-8531-929ed2ae250d-8998-5b406fc5-7b0b-4fdf-8531-929ed2ae250d-8998-90c0b328-930f-11ed-a1eb-0242ac120002-0-20250423212011-gpdtsi-stm-6a23000f-10be-4a42-ae62-4fa2da8fac07-driver with label: kyuubi-unique-tag=6a23000f-10be-4a42-ae62-4fa2da8fac07 in app state FINISHED, marking it as terminated
```
The pods are cleaned up eventually.
<img width="664" alt="image" src="https://github.com/user-attachments/assets/8cf58f61-065f-4fb0-9718-2e3c00e8d2e0" />
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7101 from turboFei/pod_cleanup.
Closes #7101
7f76cf57c [Wang, Fei] async
11c9db25d [Wang, Fei] cleanup
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
https://github.com/delta-io/delta/releases/tag/v4.0.0
### How was this patch tested?
GHA.
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7103 from pan3793/delta-4.0.
Closes #7103
febaa11ab [Cheng Pan] Bump Delta 4.0.0 and enable Delta tests for Spark 4.0
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Upgrade Maven to the latest version to speed up `build/mvn` downloading, as the previous versions are not available at https://dlcdn.apache.org/maven/maven-3/
### How was this patch tested?
Pass GHA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7104 from pan3793/maven-3.9.10.
Closes #7104
48aa9a232 [Cheng Pan] Bump Maven 3.9.10
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Support adding arbitrary annotations to Kyuubi pods and services - for example, those needed for annotation-based auto-discovery via [k8s-monitoring-helm](https://github.com/grafana/k8s-monitoring-helm/blob/main/charts/k8s-monitoring/docs/examples/features/annotation-autodiscovery/default/README.md)
### How was this patch tested?
Helm chart installs with and without annotations added
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7098 from jasonj/master.
Closes #7098
70f740d03 [jasonj] Add ability to annotate pods and headless service
Authored-by: jasonj <jason@interval.xyz>
Signed-off-by: Kent Yao <yao@apache.org>
### Why are the changes needed?
Respect the terminated app state when building batch info from metadata.
It is a follow-up for https://github.com/apache/kyuubi/pull/2911,
9e40e39c39/kyuubi-server/src/main/scala/org/apache/kyuubi/server/api/v1/BatchesResource.scala (L128-L142)
1. The Kyuubi instance is unreachable during a maintenance window.
2. The batch app state has terminated, and the app state was backfilled by another Kyuubi instance peer, see #2911.
3. The batch state in the metadata table is still PENDING/RUNNING.
4. In this case, return the terminated batch state instead of `PENDING` or `RUNNING`.
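The state-resolution rule above can be sketched in Python (an illustrative model, not the actual Scala implementation; the state sets are simplified):

```python
# Terminal application states and non-terminal batch states, simplified
# for illustration -- the real server tracks more states than these.
TERMINATED_APP_STATES = {"FINISHED", "FAILED", "KILLED"}
NON_TERMINAL_BATCH_STATES = {"PENDING", "RUNNING"}


def resolve_batch_state(metadata_state, app_state):
    """Prefer a terminated app state over stale PENDING/RUNNING metadata."""
    if metadata_state in NON_TERMINAL_BATCH_STATES and app_state in TERMINATED_APP_STATES:
        return app_state
    return metadata_state


print(resolve_batch_state("RUNNING", "FINISHED"))  # stale metadata is overridden
print(resolve_batch_state("RUNNING", "RUNNING"))   # non-terminal app state kept
```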
### How was this patch tested?
GA and IT.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7095 from turboFei/always_respect_appstate.
Closes #7095
ec72666c9 [Wang, Fei] rename
bc74a9c56 [Wang, Fei] if op not terminated
e786c8d9b [Wang, Fei] respect terminated app state when building batch info from metadata
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
MetricsSystem is only used by KyuubiServer, so all metrics config items are server-only.
### How was this patch tested?
GA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7094 from turboFei/serverOnly.
Closes #7094
8324419dd [Wang, Fei] Add server only flag for metrics conf
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Write sorted authors to release contributors file
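The case-insensitive ordering visible in the output below (lowercase names like `liaoyt` interleaved with capitalized ones) can be reproduced with a short Python sketch; the author list here is a small illustrative subset:

```python
# Deduplicate and sort contributor names case-insensitively, matching
# the ordering seen in the generated contributors file.
authors = ["Cheng Pan", "liaoyt", "Bowen Liang", "mrtisttt", "Bowen Liang"]
sorted_authors = sorted(set(authors), key=str.lower)
for name in sorted_authors:
    print(f"* {name}")
```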
### How was this patch tested?
```
(base) ➜ kyuubi git:(sort_author) ✗ RELEASE_TAG=v1.8.1 PREVIOUS_RELEASE_TAG=v1.8.0 ./build/release/pre_gen_release_notes.py
(base) ➜ kyuubi git:(sort_author) ✗ cat build/release/contributors-v1.8.1.txt
* Binjie Yang
* Bowen Liang
* Chao Chen
* Cheng Pan
* David Yuan
* Fei Wang
* Flyangz
* Gianluca Principini
* He Zhao
* Junjie Ma
* Kaifei Yi
* Kang Wang
* liaoyt
* Mingliang Zhu
* mrtisttt
* Ocean22
* Paul Lin
* Peiyue Liu
* Pengqi Li
* Senmiao Liu
* Shaoyun Chen
* SwordyZhao
* Tao Wang
* William Tong
* Xiao Liu
* Yi Zhu
* Yifan Zhou
* Yuwei Zhan
* Zeyu Wang
* Zhen Wang
* Zhiming She
```
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7097 from turboFei/sort_author.
Closes #7097
45dfb8f1e [Wang, Fei] Write sorted authors
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
To show how many metadata records were cleaned up.
### How was this patch tested?
```
(base) ➜ kyuubi git:(delete_metadata) grep 'Cleaned up' target/unit-tests.log
01:58:17.109 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.124 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.144 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.161 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.180 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.199 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.216 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.236 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.253 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.270 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.290 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.310 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.327 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.348 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.368 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.384 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.400 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.419 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.437 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.456 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.475 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.493 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.513 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.533 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.551 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.569 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.590 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.611 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.631 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.651 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.668 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.688 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.705 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.725 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.744 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.764 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.784 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.801 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.822 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.849 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.870 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.889 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.910 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.929 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.948 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.970 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:17.994 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.014 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.032 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.050 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.069 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.086 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 0 records older than 1000 ms from metadata.
01:58:18.108 ScalaTest-run-running-JDBCMetadataStoreSuite INFO JDBCMetadataStore: Cleaned up 1 records older than 1000 ms from metadata.
01:58:18.162 ScalaTest-run INFO JDBCMetadataStore: Cleaned up 0 records older than 0 ms from k8s_engine_info.
```
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7093 from turboFei/delete_metadata.
Closes #7093
e0cf300f8 [Wang, Fei] update
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
The metrics `kyuubi_operation_state_LaunchEngine_*` cannot reflect the state of the Semaphore after configuring the maximum engine startup limit through `kyuubi.server.limit.engine.startup`; this change adds metrics to expose the relevant permit state.
### How was this patch tested?
### Was this patch authored or co-authored using generative AI tooling?
Closes #7072 from LennonChin/engine_startup_metrics.
Closes #7072
d6bf3696a [Lennon Chin] Expose metrics of engine startup permit status
Authored-by: Lennon Chin <i@coderap.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
To enhance MaxScanStrategy in Spark's DSv2 to ensure it only applies to relations that support statistics reporting. This prevents Spark from returning a default value of Long.MaxValue, which leads to some queries failing or behaving unexpectedly.
### How was this patch tested?
It was tested locally.
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7077 from zhaohehuhu/dev-0527.
Closes #7077
64001c94e [zhaohehuhu] fix MaxScanStrategy for datasource v2
Authored-by: zhaohehuhu <luoyedeyi459@163.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
The Iceberg Ranger check now supports branch and tag DDL.
### How was this patch tested?
- [x] create branch
- [x] replace branch
- [x] drop branch
- [x] create tag
- [x] replace tag
- [x] drop tag
issue #7068
### Was this patch authored or co-authored using generative AI tooling?
Closes #7069 from davidyuan1223/iceberg_branch_check.
Closes #7068
d060a24e1 [davidyuan] update
1e05018d1 [davidyuan] Merge branch 'master' into iceberg_branch_check
be2684671 [davidyuan] update
231ed3356 [davidyuan] sort spi file
6d2a5bf20 [davidyuan] sort spi file
bc21310cc [davidyuan] update
52ca367f1 [davidyuan] update
Authored-by: davidyuan <yuanfuyuan@mafengwo.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Fix a typo of file name.
### How was this patch tested?
Review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7074 from pan3793/6870-f.
Closes #6870
45915d978 [Cheng Pan] [KYUUBI #6870][FOLLOWUP] Correct file name of grafana/README.md
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Test Spark 4.0.0 RC1
https://lists.apache.org/thread/3sx86qhnmot1p519lloyprxv9h7nt2xh
### How was this patch tested?
GHA.
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #6928 from pan3793/spark-4.0.0.
Closes #6928
a910169bd [Cheng Pan] Bump Spark 4.0.0
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Retry on deploying failure to overcome the transient issues.
### How was this patch tested?
Review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7073 from pan3793/deploy-retry.
Closes #7073
f42bd663b [Cheng Pan] Retry 3 times on deploying to nexus
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
https://kyuubi.apache.org/shaded-release/0.5.0.html
### How was this patch tested?
Pass GHA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7063 from pan3793/kyuubi-shaded-0.5.0.
Closes #7063
b202a7c83 [Cheng Pan] Update pom.xml
417914529 [Cheng Pan] Bump Kyuubi Shaded 0.5.0
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
#7066
### Why are the changes needed?
Iceberg is missing some checks; this PR tries to fix the add-partition-field check.
### How was this patch tested?
### Was this patch authored or co-authored using generative AI tooling?
Closes #7065 from davidyuan1223/icerberg_authz.
Closes #7065
be2684671 [davidyuan] update
231ed3356 [davidyuan] sort spi file
6d2a5bf20 [davidyuan] sort spi file
bc21310cc [davidyuan] update
52ca367f1 [davidyuan] update
Authored-by: davidyuan <yuanfuyuan@mafengwo.com>
Signed-off-by: Kent Yao <yao@apache.org>
### Why are the changes needed?
There were some breaking changes after we fixed compatibility for Spark 4.0.0 RC1 in #6920, but now Spark has reached 4.0.0 RC6, which has less chance to receive more breaking changes.
### How was this patch tested?
Changes are extracted from https://github.com/apache/kyuubi/pull/6928, which passed CI with Spark 4.0.0 RC6
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7061 from pan3793/6920-followup.
Closes #6920
17a1bd9e5 [Cheng Pan] [KYUUBI #6920][FOLLOWUP] Spark SQL engine supports Spark 4.0
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
https://github.com/delta-io/delta/releases/tag/v3.3.1
### How was this patch tested?
Pass GHA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7062 from pan3793/delta-3.3.1.
Closes #7062
0fc1df8f9 [Cheng Pan] Bump DeltaLake 3.3.1
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
To filter out server-only configs by prefixes.
Some Kyuubi configs have no corresponding defined ConfigEntry, so we cannot filter them out and have to propagate them to the engine end.
For example:
```
kyuubi.kubernetes.28.master.address=k8s://master
kyuubi.backend.server.event.kafka.broker=localhost:9092
kyuubi.metadata.store.jdbc.driver=com.mysql.cj.jdbc.Driver
kyuubi.metadata.store.jdbc.datasource.maximumPoolSize=600
kyuubi.metadata.store.jdbc.datasource.minimumIdle=100
kyuubi.metadata.store.jdbc.datasource.idleTimeout=60000
```
This PR supports excluding them by setting:
```
kyuubi.config.server.only.prefixes=kyuubi.backend.server.event.kafka.,kyuubi.metadata.store.jdbc.datasource.,kyuubi.kubernetes.28.
```
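The prefix-based exclusion can be sketched in a few lines of Python (illustrative names, not the actual Kyuubi code; the prefixes reuse the example config above):

```python
def filter_engine_configs(configs, server_only_prefixes):
    """Drop configs whose key starts with any server-only prefix."""
    return {
        key: value
        for key, value in configs.items()
        if not any(key.startswith(prefix) for prefix in server_only_prefixes)
    }


prefixes = [
    "kyuubi.backend.server.event.kafka.",
    "kyuubi.metadata.store.jdbc.datasource.",
    "kyuubi.kubernetes.28.",
]
configs = {
    "kyuubi.kubernetes.28.master.address": "k8s://master",
    "kyuubi.metadata.store.jdbc.datasource.maximumPoolSize": "600",
    "spark.executor.cores": "4",  # not server-only, kept for the engine
}
print(filter_engine_configs(configs, prefixes))
```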
### How was this patch tested?
UT
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7055 from turboFei/server_only_configs.
Closes #7055
6c804ff91 [Cheng Pan] Update kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
bd391a664 [Wang, Fei] exclude
Lead-authored-by: Fei Wang <fwang12@ebay.com>
Co-authored-by: Cheng Pan <pan3793@gmail.com>
Co-authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Even if `sslTrustStore` is not provided, the existence of the `org.apache.hadoop.conf.Configuration` class becomes a hard dependency.
This makes the JDBC client too complex to configure: extra Hadoop jars must be provided.
The `hadoopCredentialProviderAvailable` variable is useless in the previous implementation because it is either always `true` or the code is unreachable.
<img width="898" alt="Screenshot 2025-05-09 at 13 05 12" src="https://github.com/user-attachments/assets/6d202555-38c6-40d2-accb-eb78a3d4184e" />
### How was this patch tested?
Built the jar and used it to connect from DataGrip.
<img width="595" alt="Screenshot 2025-05-09 at 13 01 29" src="https://github.com/user-attachments/assets/c6e4d904-a3dd-4d3f-9bdd-8bb47ed1e834" />
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7058 from Khrol/master.
Closes #7051
b594757a0 [Igor Khrol] JDBC driver: allow usage without sslTrustStore
Authored-by: Igor Khrol <khroliz@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Reduce the Kyuubi server-end configs leaked into the engine end.
### How was this patch tested?
UT and code review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7054 from turboFei/server_only.
Closes #7054
d5855a5db [Wang, Fei] revert kubernetes
b253c336b [Wang, Fei] init
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
As clarified in https://github.com/apache/kyuubi/issues/6926, there are scenarios where users want to launch an engine on each Kyuubi server. The SERVER_LOCAL engine share level implements this by using the local host address as the subdomain, so each Kyuubi server's engine is unique.
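The subdomain trick can be sketched as follows (a hypothetical Python model of the idea; the real implementation is in Scala and the names here are made up):

```python
import socket


def effective_subdomain(share_level, configured_subdomain=None):
    """For SERVER_LOCAL, the local host address becomes the subdomain,
    making each server's engine namespace unique; other share levels
    keep whatever subdomain was configured."""
    if share_level == "SERVER_LOCAL":
        return socket.gethostname()
    return configured_subdomain


print(effective_subdomain("SERVER_LOCAL"))      # differs per server host
print(effective_subdomain("USER", "default"))   # unchanged for other levels
```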
### How was this patch tested?
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7013 from taylor12805/share_level_server_local.
Closes #6926
ba201bb72 [taylor.fan] [KYUUBI #6926] update format
42f0a4f7d [taylor.fan] [KYUUBI #6926] move host address to subdomain
e06de79ad [taylor.fan] [KYUUBI #6926] Add SERVER_LOCAL engine share level
Authored-by: taylor.fan <taylor.fan@vipshop.com>
Signed-off-by: Kent Yao <yao@apache.org>
This patch adds a try/except block to prevent a `KeyError` when mapping an unknown `type_id` during Hive schema parsing. Now, if a `type_id` is not recognized, `type_code` is set to `None` instead of raising an exception.
### Why are the changes needed?
Previously, when parsing Hive table schemas, the code attempted to map each `type_id` to a human-readable type name via `ttypes.TTypeId._VALUES_TO_NAMES[type_id]`. If Hive introduced an unknown or custom type (e.g. some might use a non-standard version, or data pumped from an entirely different data source like *Oracle* into *Hive* databases), a `KeyError` was raised, interrupting the entire SQL query process. This patch adds a `try/except` block so that unrecognized `type_id`s set `type_code` to `None` instead of raising an error, letting the downstream user decide what to do instead of just receiving an exception. This makes schema inspection more robust and compatible with evolving Hive data types.
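The shape of the fix looks like this; the mapping below is a small stand-in for `ttypes.TTypeId._VALUES_TO_NAMES` (the real Thrift-generated table), with arbitrary illustrative entries:

```python
# Stand-in for ttypes.TTypeId._VALUES_TO_NAMES; entries are illustrative.
VALUES_TO_NAMES = {0: "BOOLEAN_TYPE", 3: "INT_TYPE", 7: "STRING_TYPE"}


def type_code_for(type_id):
    """Map a Thrift type_id to a name; unknown ids yield None
    instead of raising KeyError."""
    try:
        return VALUES_TO_NAMES[type_id]
    except KeyError:
        return None


print(type_code_for(3))    # a known type maps as before
print(type_code_for(999))  # an unknown/custom type no longer raises
```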
### How was this patch tested?
The patch was tested by running schema inspection on tables containing both standard and unknown/custom Hive column types. For known types, parsing behaves as before. For unknown types, the parser sets `type_code` to `None` without raising an exception, and the rest of the process completes successfully. No unit test was added since this is an edge case dependent on unreachable or custom Hive types, but it was tested on typical use cases.
### Was this patch authored or co-authored using generative AI tooling?
No. 😂 It's a minor patch.
Closes #7048 from ZsgsDesign/patch-1.
Closes #7048
4d246d0ec [John Zhang] fix: handle KeyError when parsing Hive type_id mapping
Authored-by: John Zhang <zsgsdesign@gmail.com>
Signed-off-by: Kent Yao <yao@apache.org>
### Why are the changes needed?
1. Persist the Kubernetes application termination info into the metastore to prevent event loss.
2. If the application info cannot be found in the informer's application info store, fall back to the metastore instead of returning NOT_FOUND directly.
3. This is critical because returning a wrong application state might cause data quality issues.
### How was this patch tested?
UT and IT.
<img width="1917" alt="image" src="https://github.com/user-attachments/assets/306f417c-5037-4869-904d-dcf657ff8f60" />
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7029 from turboFei/kubernetes_state.
Closes #7028
9f2badef3 [Wang, Fei] generic dialect
186cc690d [Wang, Fei] nit
82ea62669 [Wang, Fei] Add pod name
4c59bebb5 [Wang, Fei] Refine
327a0d594 [Wang, Fei] Remove create_time from k8s engine info
12c24b1d0 [Wang, Fei] do not use MYSQL deprecated VALUES(col)
becf9d1a7 [Wang, Fei] insert or replace
d167623c1 [Wang, Fei] migration
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Add an option to allow constructing the batch info from metadata directly, instead of redirecting requests, to reduce RPC latency.
### How was this patch tested?
Minor change and Existing GA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7043 from turboFei/support_no_redirect.
Closes #7043
7f7a2fb80 [Wang, Fei] comments
bb0e324a1 [Wang, Fei] save
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Follow-up for #7034 to fix the SparkOnKubernetesTestsSuite.
Sorry, I forgot that the appInfo name and pod name were deeply coupled before: the appInfo name was used as the pod name and to delete the pod.
In this PR, we add `podName` to applicationInfo to separate the app name and pod name.
### How was this patch tested?
GA should pass.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7039 from turboFei/fix_test.
Closes #7034
0ff7018d6 [Wang, Fei] revert
18e48c079 [Wang, Fei] comments
19f34bc83 [Wang, Fei] do not get pod name from appName
c1d308437 [Wang, Fei] reduce interval for test stability
50fad6bc5 [Wang, Fei] fix ut
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Fix build issue after #7041
### How was this patch tested?
GA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7042 from turboFei/fix_build.
Closes #7041
d026bf554 [Wang, Fei] fix build
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
To fix NPE.
Previously, we used the method below to get `metadataManager`:
```
private def metadataManager = KyuubiServer.kyuubiServer.backendService
.sessionManager.asInstanceOf[KyuubiSessionManager].metadataManager
```
But before the Kyuubi server has fully restarted, `KyuubiServer.kyuubiServer` is null, which can throw an NPE during the batch recovery phase.
For example:
```
:2025-04-23 14:06:24.040 ERROR [KyuubiSessionManager-exec-pool: Thread-231] org.apache.kyuubi.engine.KubernetesApplicationOperation: Failed to get application by label: kyuubi-unique-tag=95116703-4240-4cc1-9886-ccae3a2ac879, due to Cannot invoke "org.apache.kyuubi.server.KyuubiServer.backendService()" because the return value of "org.apache.kyuubi.server.KyuubiServer$.kyuubiServer()" is null
```
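The fix pattern can be illustrated with a small Python model (hypothetical names; the real code is Scala): never assume the global server handle is initialized, and make callers handle the not-yet-started case.

```python
class KyuubiServerHolder:
    """Stand-in for the KyuubiServer companion object's global handle."""

    instance = None  # set only once the server has fully started


def current_metadata_manager():
    """Resolve metadataManager lazily, guarding against a null server
    instead of dereferencing it and throwing an NPE during recovery."""
    server = KyuubiServerHolder.instance
    if server is None:
        return None  # server not started yet; caller should handle this
    return server.metadata_manager
```

The key point is that the lookup happens per call rather than being captured once at class initialization, so it starts working as soon as the server handle is set.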
### How was this patch tested?
Existing GA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7041 from turboFei/fix_NPE.
Closes #7041
064d88707 [Wang, Fei] Fix NPE
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
It is missed in #6828: 733d4f0901/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java (L151-L159)
### How was this patch tested?
Minor change.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #7038 from turboFei/fixNPE.
Closes #6828
2785e97be [Wang, Fei] Fix NPE
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
This PR removes the page https://kyuubi.readthedocs.io/en/v1.10.1/client/python/pyspark.html and merges most of its content into https://kyuubi.readthedocs.io/en/v1.10.1/extensions/engines/spark/jdbc-dialect.html; some original content of the latter is also modified.
The current docs are misleading: I have been asked several times by users why accessing data stored in the Hive warehouse is so slow when they follow the [Kyuubi PySpark docs](https://kyuubi.readthedocs.io/en/v1.10.1/client/python/pyspark.html).
Actually, accessing HiveServer2/STS via the Spark JDBC data source is discouraged by the Spark community, see [SPARK-47482](https://github.com/apache/spark/pull/45609), even though it is technically feasible.
### How was this patch tested?
It's a docs-only change, review is required.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7036 from pan3793/jdbc-ds-docs.
Closes#7036
c00ce0706 [Cheng Pan] style
f2676bd23 [Cheng Pan] [DOCS] Improve docs for kyuubi-extension-spark-jdbc-dialect
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
After https://github.com/apache/spark/pull/34460 (since Spark 3.3.0), the `spark-app-name` pod label is available.
We should use it as the application name when it exists.
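The selection logic can be sketched as a label lookup with a fallback (a hedged sketch; the function name and fallback choice are illustrative, not the actual Kyuubi code):

```scala
// Resolve the application name from the driver pod's labels,
// preferring the spark-app-name label (available since Spark 3.3.0)
// and falling back to the pod name when the label is absent.
def resolveAppName(podLabels: Map[String, String], podName: String): String =
  podLabels.getOrElse("spark-app-name", podName)
```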
### How was this patch tested?
Minor change.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7034 from turboFei/k8s_app_name.
Closes#7034
bfa88a436 [Wang, Fei] Get pod app name
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
I found the following for a Kyuubi batch on Kubernetes:
1. The batch had reached `FINISHED`.
2. I then deleted the pod manually; checking the k8s-audit.log, the appState became `FAILED`.
```
2025-04-15 11:16:30.453 INFO [-675216314-pool-44-thread-839] org.apache.kyuubi.engine.KubernetesApplicationAuditLogger: label=61e7d8c1-e5a9-46cd-83e7-c611003f0224 context=97 namespace=dls-prod pod=kyuubi-spark-61e7d8c1-e5a9-46cd-83e7-c611003f0224-driver podState=Running containers=[microvault->ContainerState(running=ContainerStateRunning(startedAt=2025-04-15T18:13:48Z, additionalProperties={}), terminated=null, waiting=null, additionalProperties={}),spark-kubernetes-driver->ContainerState(running=null, terminated=ContainerStateTerminated(containerID=containerd://72704f8e7ccb5e877c8f6b10bf6ad810d0c019e07e0cb5975be733e79762c1ec, exitCode=0, finishedAt=2025-04-15T18:14:22Z, message=null, reason=Completed, signal=null, startedAt=2025-04-15T18:13:49Z, additionalProperties={}), waiting=null, additionalProperties={})] appId=spark-228c62e0dc37402bacac189d01b871e4 appState=FINISHED appError=''
:2025-04-15 11:16:30.854 INFO [-675216314-pool-44-thread-840] org.apache.kyuubi.engine.KubernetesApplicationAuditLogger: label=61e7d8c1-e5a9-46cd-83e7-c611003f0224 context=97 namespace=dls-prod pod=kyuubi-spark-61e7d8c1-e5a9-46cd-83e7-c611003f0224-driver podState=Failed containers=[microvault->ContainerState(running=null, terminated=ContainerStateTerminated(containerID=containerd://91654e3ee74e2c31218e14be201b50a4a604c2ad15d3afd84dc6f620e59894b7, exitCode=2, finishedAt=2025-04-15T18:16:30Z, message=null, reason=Error, signal=null, startedAt=2025-04-15T18:13:48Z, additionalProperties={}), waiting=null, additionalProperties={}),spark-kubernetes-driver->ContainerState(running=null, terminated=ContainerStateTerminated(containerID=containerd://72704f8e7ccb5e877c8f6b10bf6ad810d0c019e07e0cb5975be733e79762c1ec, exitCode=0, finishedAt=2025-04-15T18:14:22Z, message=null, reason=Completed, signal=null, startedAt=2025-04-15T18:13:49Z, additionalProperties={}), waiting=null, additionalProperties={})] appId=spark-228c62e0dc37402bacac189d01b871e4 appState=FAILED appError='{
```
This PR is a follow-up to #6690, which ignores the container state if the pod is terminated.
It is more reasonable to respect the terminated container state than the terminated pod state.
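The intended precedence can be sketched as follows (simplified stand-ins for the fabric8 pod/container model; not the actual Kyuubi operation code, and the state strings are illustrative):

```scala
// Simplified view of a terminated driver container.
case class ContainerTerminated(exitCode: Int)

// Prefer the driver container's terminated state over the pod phase:
// a sidecar failing after the driver exits with 0 should not flip
// the app to FAILED. Without a terminated driver container, fall
// back to the pod phase.
def appState(podPhase: String, driver: Option[ContainerTerminated]): String =
  driver match {
    case Some(t) if t.exitCode == 0 => "FINISHED"
    case Some(_)                    => "FAILED"
    case None => if (podPhase == "Failed") "FAILED" else "RUNNING"
  }
```

This reproduces the scenario from the audit log above: pod phase `Failed` (sidecar error, exit code 2) but driver container exited 0, so the app stays `FINISHED`.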
### How was this patch tested?
Integration testing.
```
:2025-04-15 13:53:24.551 INFO [-1077768163-pool-36-thread-3] org.apache.kyuubi.engine.KubernetesApplicationAuditLogger: eventType=DELETE label=e0eb4580-3cfa-43bf-bdcc-efeabcabc93c context=97 namespace=dls-prod pod=kyuubi-spark-e0eb4580-3cfa-43bf-bdcc-efeabcabc93c-driver podState=Failed containers=[microvault->ContainerState(running=null, terminated=ContainerStateTerminated(containerID=containerd://66c42206730950bd422774e3c1b0f426d7879731788cea609bbfe0daab24a763, exitCode=2, finishedAt=2025-04-15T20:53:22Z, message=null, reason=Error, signal=null, startedAt=2025-04-15T20:52:00Z, additionalProperties={}), waiting=null, additionalProperties={}),spark-kubernetes-driver->ContainerState(running=null, terminated=ContainerStateTerminated(containerID=containerd://9179a73d9d9e148dcd9c13ee6cc29dc3e257f95a33609065e061866bb611cb3b, exitCode=0, finishedAt=2025-04-15T20:52:28Z, message=null, reason=Completed, signal=null, startedAt=2025-04-15T20:52:01Z, additionalProperties={}), waiting=null, additionalProperties={})] appId=spark-578df0facbfd4958a07f8d1ae79107dc appState=FINISHED appError=''
```
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7025 from turboFei/container_terminated.
Closes#7025
Closes#6686
a3b2a5a56 [Wang, Fei] comments
4356d1bc9 [Wang, Fei] fix the app state logical
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
GitHub now provides subtask creation and auto-linking features, which are more advanced.
### How was this patch tested?
https://github.com/apache/kyuubi/issues/7030
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7032 from yaooqinn/sb.
Closes#7032
3fd6ccd90 [Kent Yao] remove more
cbf691033 [Kent Yao] Remove Umbrella/Subtask issue template
Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
### Why are the changes needed?
1. Audit the Kubernetes resource event type.
2. Fix the processing logic for the DELETE event.
Before this pr:
I deleted the pod manually and saw that Kyuubi reported `appState=PENDING`.
```
:2025-04-15 13:58:20.320 INFO [-1077768163-pool-36-thread-7] org.apache.kyuubi.engine.KubernetesApplicationAuditLogger: eventType=DELETE label=3c58e9fd-cf8c-4cc3-a9aa-82ae40e200d8 context=97 namespace=dls-prod pod=kyuubi-spark-3c58e9fd-cf8c-4cc3-a9aa-82ae40e200d8-driver podState=Pending containers=[] appId=spark-cd125bbd9fc84ffcae6d6b5d41d4d8ad appState=PENDING appError=''
```
It seems that the pod status in the event is a snapshot taken before the pod was deleted.
After that, we would not receive any further events for this pod, and the batch finally finished with the application state `NOT_FOUND`.
<img width="1389" alt="image" src="https://github.com/user-attachments/assets/5df03db6-0924-4a58-9538-b196fbf87f32" />
It seems we need to process the DELETE event specially:
1. Get the app state from the pod/container states.
2. If the resulting applicationState is terminated, return it directly.
3. Otherwise, the applicationState should be FAILED, as the pod has been deleted.
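The three steps above can be sketched as follows (state values as plain strings for illustration; not the actual Kyuubi `ApplicationState`):

```scala
// Terminal states, illustrative of Kyuubi's terminated app states.
val terminalStates = Set("FINISHED", "FAILED", "KILLED")

// A DELETE event carries a snapshot taken before the pod was removed:
// keep a terminal snapshot state as-is; anything else means the pod
// was deleted mid-flight and no further events will arrive, so the
// application must be considered FAILED.
def onDeleteEvent(snapshotState: String): String =
  if (terminalStates.contains(snapshotState)) snapshotState
  else "FAILED"
```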
### How was this patch tested?
<img width="1614" alt="image" src="https://github.com/user-attachments/assets/11e64c6f-ad53-4485-b8d2-a351bb23e8ca" />
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7026 from turboFei/k8s_audit.
Closes#7026
4e5695d34 [Wang, Fei] for delete
c16757218 [Wang, Fei] audit the pod event type
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
This ensures the Kyuubi server is promptly informed of any Kubernetes resource changes after startup. It is highly recommended to enable this when running multiple Kyuubi instances.
### How was this patch tested?
Existing GA and Integration testing.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7027 from turboFei/k8s_client_init.
Closes#7027
393b9960a [Wang, Fei] server only
a640278c4 [Wang, Fei] refresh
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Upgrade the Kubernetes client: https://github.com/fabric8io/kubernetes-client/releases/tag/v6.13.5
### How was this patch tested?
GA.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7023 from turboFei/k8s_client.
Closes#7023
3e3ac634f [Wang, Fei] 6.16.5
df5aa011f [Wang, Fei] upgrade
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
To satisfy ASF policy
https://lists.apache.org/thread/89jb1kp77wcv16tph8qlbf5k0fscyz9l
### How was this patch tested?
Review
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7022 from pan3793/ann-tmpl.
Closes#7022
7fa64b163 [Cheng Pan] Update announcement mail template to contain download links
Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
### Why are the changes needed?
Currently, if the session between the client and the Kyuubi server is disconnected without being closed properly, it is difficult to debug, and we have to check the Kyuubi server log.
It would be better to record this kind of information in the Kyuubi session event.
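A minimal sketch of recording the disconnect into a session event (the event shape and field names are illustrative, not Kyuubi's actual `KyuubiSessionEvent`):

```scala
// Illustrative session event carrying a disconnect timestamp;
// -1 means no abnormal disconnect has been observed.
case class SessionEvent(sessionId: String, disconnectedTime: Long = -1L)

// Stamp the time on the first observed abnormal disconnect only,
// so later cleanup does not overwrite the original evidence.
def onClientDisconnected(event: SessionEvent, now: Long): SessionEvent =
  if (event.disconnectedTime > 0) event
  else event.copy(disconnectedTime = now)
```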
### How was this patch tested?
IT.
<img width="1264" alt="image" src="https://github.com/user-attachments/assets/d2c5b6d0-6298-46ec-9b73-ce648551120c" />
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7015 from turboFei/disconnect.
Closes#7015
c95709284 [Wang, Fei] do not post
e46521410 [Wang, Fei] nit
bca7f9b7e [Wang, Fei] post
1cf6f8f49 [Wang, Fei] disconnect
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
To fix the issue where the batch Kyuubi instance port is negative.
<img width="697" alt="image" src="https://github.com/user-attachments/assets/ef992390-8d20-44b3-8640-35496caff85d" />
It happens after the Kyuubi service is stopped.
We should use a variable instead of a function for the Jetty server `serverUri`.
After the server connector is stopped, `localPort` would be `-2`.
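The val-vs-def distinction can be sketched like this (the `Connector` below is a stand-in for Jetty's `ServerConnector`, with `-2` mimicking its post-stop local port; the URI format is illustrative):

```scala
// Stand-in for Jetty's ServerConnector: getLocalPort returns a
// negative sentinel once the connector has stopped.
class Connector {
  private var stopped = false
  def stop(): Unit = stopped = true
  def getLocalPort: Int = if (stopped) -2 else 10009
}

class FrontendService(connector: Connector) {
  // Before: `def serverUri = ...` re-read the port on every call and
  // turned negative after shutdown. Capturing it in a val while the
  // connector is live keeps the advertised URI stable.
  val serverUri: String = s"host:${connector.getLocalPort}"
}
```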

### How was this patch tested?
Existing UT.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7017 from turboFei/server_port_negative.
Closes#7017
3d34c4031 [Wang, Fei] warn
e58298646 [Wang, Fei] mutable server uri
2cbaf772a [Wang, Fei] Revert "hard code the server uri"
b64d91b32 [Wang, Fei] hard code the server uri
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>
### Why are the changes needed?
Since https://github.com/apache/kyuubi/pull/3618
the Kyuubi server can retry opening the engine when it encounters certain errors.
1937dd93f9/kyuubi-server/src/main/scala/org/apache/kyuubi/session/KyuubiSessionImpl.scala (L177-L212)
During retries, `_client` might be reset and closed.
So, we should set `_client` only after the engine session is opened successfully, since the `client` method is public.
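The ordering fix can be sketched as follows (a simplified stand-in; the real `KyuubiSessionImpl` opens a Thrift client with the retry logic linked above):

```scala
// Simplified session: _client is published only after the engine
// session has opened successfully, so public readers never see a
// client that a retry is about to reset and close.
class Session {
  @volatile private var _client: String = null

  def openEngineSession(open: () => String): Unit = {
    val candidate = open() // may throw on a retryable error
    _client = candidate    // publish only on success
  }

  def client: Option[String] = Option(_client)
}
```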
### How was this patch tested?
Existing UT.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes#7011 from turboFei/client_ready.
Closes#7011
3ad57ee91 [Wang, Fei] fix npe
b956394fa [Wang, Fei] close internal engine client
523b48a4d [Wang, Fei] internal client
5baeedec1 [Wang, Fei] Revert "method"
84c808cfb [Wang, Fei] method
8efaa52f6 [Wang, Fei] check engine launched
Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Wang, Fei <fwang12@ebay.com>