zwangsheng a0fc33c6af
[KYUUBI #3869] [K8S][IT][BUG] Fix the issue that connect conf is not used in the jdbc connection string
### _Why are the changes needed?_

Fix the issue that connect conf is not used in the Kyuubi On Kubernetes IT's jdbc connection string.
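For context, a HiveServer2-compatible JDBC URL carries connect confs in a `?key=value;key=value` section after the database name; if a test suite builds the URL without appending that section, the configured confs silently never reach the server. The Java sketch below is illustrative only (class and method names are hypothetical, and it is not the actual patch); it shows the general shape of appending confs to the connection string:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical helper showing how connect confs are carried in a
// HiveServer2-compatible JDBC URL: jdbc:hive2://host:port/db?k1=v1;k2=v2
public final class JdbcUrlBuilder {

    static String buildUrl(String host, int port, String db, Map<String, String> confs) {
        String base = "jdbc:hive2://" + host + ":" + port + "/" + db;
        if (confs.isEmpty()) {
            return base;
        }
        // Join confs as "key=value" pairs separated by ';' and append after '?'.
        String confPart = confs.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining(";"));
        return base + "?" + confPart;
    }

    public static void main(String[] args) {
        Map<String, String> confs = new LinkedHashMap<>();
        confs.put("kyuubi.engine.type", "SPARK_SQL");
        System.out.println(buildUrl("localhost", 10009, "default", confs));
        // prints: jdbc:hive2://localhost:10009/default?kyuubi.engine.type=SPARK_SQL
    }
}
```

The bug class this PR addresses is exactly the case where the `confs` map is populated but the `?...` part is never appended, so the server opens the session with defaults instead.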

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request

- [x] Wait for IT CI

Closes #3869 from zwangsheng/bugfix/kyuubi_on_k8s_it_connect_conf.

Closes #3869

3cecd5f4 [zwangsheng] fix
63025a28 [zwangsheng] fix
924949f4 [zwangsheng] fix
21e93298 [zwangsheng] fix
fc5794ef [zwangsheng] fix
6dca96cd [zwangsheng] test
c2c81bb4 [zwangsheng] test
b8bb820b [zwangsheng] add unit test
dad4c739 [zwangsheng] fix bind
406f1de5 [zwangsheng] proxy for 185
c7d6ee6d [zwangsheng] merge
2290a24f [Binjie Yang] Update master.yml
9ffcb498 [Binjie Yang] Update KyuubiOnKubernetesTestsSuite.scala
714b340d [Binjie Yang] Update KyuubiOnKubernetesTestsSuite.scala
fa7fc542 [Binjie Yang] Update KyuubiOnKubernetesTestsSuite.scala
af4b9881 [zwangsheng] set 777 for /
49f705eb [zwangsheng] set 777 for test
fc66843d [zwangsheng] stop ci
e2ba0bcf [zwangsheng] add test name
7db4eab1 [zwangsheng] fast test cluster
33d490d1 [zwangsheng] add unit test
e2e12f4e [zwangsheng] fast test cluster
e8251011 [zwangsheng] test
b66468f5 [zwangsheng] test
633d99e4 [zwangsheng] change host
40ba5740 [zwangsheng] test
e393f9a5 [zwangsheng] test
532cd7df [zwangsheng] merge
4597572e [zwangsheng] test
b8fc86a1 [Binjie Yang] Update KyuubiOnKubernetesTestsSuite.scala
34be2761 [zwangsheng] TEST
a3c60e45 [zwangsheng] Changes
19e3bc22 [zwangsheng] for fast test
3ad2337f [zwangsheng] try cluster
39df2c40 [zwangsheng] try cluster
ed8f8baa [zwangsheng] fix client
7f711acb [zwangsheng] fix
b034731e [zwangsheng] fix
d646f4ac [zwangsheng] fix
2b9591c4 [zwangsheng] debug
a977d907 [zwangsheng] fix
0c3486fa [zwangsheng] debug
f0a0304b [zwangsheng] Add serviceAccount
eb3424ab [zwangsheng] fix user
cac7e69d [zwangsheng] proxy user
2886520f [zwangsheng] debug
25a677c6 [zwangsheng] debug
9f201d89 [zwangsheng] debug
e533664d [zwangsheng] fix it test
d9bf9173 [zwangsheng] fix it test

Lead-authored-by: zwangsheng <2213335496@qq.com>
Co-authored-by: Binjie Yang <52876270+zwangsheng@users.noreply.github.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2022-12-06 10:24:52 +08:00

Apache Kyuubi (Incubating)


What is Kyuubi?

Kyuubi is a distributed multi-tenant Thrift JDBC/ODBC server for large-scale data management, processing, and analytics, built on top of Apache Spark and designed to support more engines (e.g., Apache Flink). NetEase open-sourced it in 2018. We aim to make Kyuubi an "out-of-the-box" tool for data warehouses and data lakes.

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines gives administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
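Concretely, the HiveServer2-like API means any Hive JDBC-capable client can talk to Kyuubi without code changes. A minimal sketch, assuming a Kyuubi server on localhost at the default Thrift binary port 10009 and the Kyuubi Hive JDBC driver on the classpath (both are assumptions of this example, not requirements stated above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of a client using Kyuubi's HiveServer2-compatible JDBC
// interface. A running server and a JDBC driver are required for main()
// to succeed; url() itself is pure string logic.
public class KyuubiClientSketch {

    // Assemble the HiveServer2-style JDBC URL.
    static String url(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(url("localhost", 10009, "default"));
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

Because the wire protocol is HiveServer2-compatible, the same code works against HiveServer2 or the Spark Thrift Server by changing only the host and port.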

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and potentially other engines soon) and to handle big data like ordinary data. Here, anyone means that users need no Spark technical background, only a human language: SQL. Sometimes even SQL skills are unnecessary, such as when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

  • System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: Focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized Views, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.

HiveServer2 can identify and authenticate a caller; if the caller also has permissions for the YARN queue and HDFS files, the request succeeds, otherwise it fails. However, on the one hand, STS is a single Spark application: the user and queue to which STS belongs are fixed at startup, so STS can neither leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access per caller, since the whole system runs as a single user. On the other hand, the Thrift Server is coupled to the Spark driver's JVM process. This coupled architecture puts server stability at high risk, and because the server is stateful, it can neither handle high client concurrency nor apply high-availability techniques such as load balancing.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on multi-tenancy to interact with cluster managers, finally gaining the ability to share and isolate resources and secure data. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.

DataLake/LakeHouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.
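For example, pointing Kyuubi's Spark engines at a Kubernetes cluster typically amounts to a few Spark settings in kyuubi-defaults.conf; the fragment below is an illustrative sketch, and the master URL, image tag, namespace, and service account are placeholders, not a verified deployment:

```properties
# Illustrative kyuubi-defaults.conf fragment for Spark engines on Kubernetes.
# All values below are placeholders for your environment.
spark.master=k8s://https://kubernetes-api-host:6443
spark.kubernetes.container.image=apache/spark:<tag>
spark.kubernetes.namespace=default
spark.kubernetes.authenticate.driver.serviceAccountName=spark
```

On YARN, the same idea applies with spark.master=yarn and queue settings instead.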

The Kyuubi Ecosystem (present and future)

The figure below shows our vision for the Kyuubi ecosystem. Some parts have been realized, some are in development, and others would not be possible without your help.

Online Documentation

Since Kyuubi 1.3.0-incubating, the Kyuubi online documentation has been hosted at https://kyuubi.apache.org/, where you can find the latest version. For 1.2 and earlier versions, please check Read the Docs directly.

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

Contributor over time

Aside

The project took its name from a character in the popular Japanese manga Naruto. The character, Kyuubi Kitsune/Kurama, is a nine-tailed fox in mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.