Cheng Pan 9be0c65fe9
[KYUUBI #5783] Switch to kyuubi-relocated-hive-service-rpc
# 🔍 Description
## Issue References 🔗

TL;DR there are some issues with shading Thrift RPC classes during the engine packaging phase, see details in the PR description of https://github.com/apache/kyuubi-shaded/pull/20.

## Describe Your Solution 🔧

This PR aims to migrate from the vanilla `hive-service-rpc`, `libfb303`, and `libthrift` to `kyuubi-relocated-hive-service-rpc`, introduced in https://github.com/apache/kyuubi-shaded/pull/20. The detailed changes are:

- replace the imported dependencies in `pom.xml` and rename the package prefix in all modules, except for
  - `kyuubi-server`, where a few places use vanilla Thrift classes to access HMS to obtain tokens
  - `kyuubi-hive-sql-engine`, which invokes Hive methods directly
- update relocation rules in modules that create shaded jars
- introduce `HiveRpcUtils` in the `kyuubi-hive-sql-engine` module for object conversion.
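The package-prefix rename described above is mechanical: every reference to the vanilla Thrift RPC package is rewritten to the relocated one. As a rough illustration (the relocated prefix used below is an assumption following kyuubi-shaded's naming convention, not taken from this PR):

```scala
// Illustrative sketch of the package-prefix rename applied across modules.
// The relocated prefix is an assumption based on kyuubi-shaded conventions;
// check the actual artifact for the real package name.
val vanillaPrefix   = "org.apache.hive.service.rpc.thrift"
val relocatedPrefix = "org.apache.kyuubi.shaded.hive.service.rpc.thrift"

def relocate(fqcn: String): String =
  if (fqcn.startsWith(vanillaPrefix)) relocatedPrefix + fqcn.stripPrefix(vanillaPrefix)
  else fqcn // classes outside the vanilla package are left untouched

println(relocate("org.apache.hive.service.rpc.thrift.TOpenSessionReq"))
println(relocate("org.apache.hadoop.hive.metastore.api.Table")) // unchanged
```

This is also why `kyuubi-server` and `kyuubi-hive-sql-engine` are exceptions: code that must talk to vanilla HMS or Hive classes cannot have its references rewritten wholesale.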

As part of the whole change, this PR upgrades Kyuubi Shaded from 0.1.0 to 0.2.0, which changes the jar names; see https://kyuubi.apache.org/shaded-release/0.2.0.html
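In `pom.xml`, the dependency swap looks roughly like the following fragment (the `groupId` and exact coordinates are assumptions inferred from the Kyuubi Shaded release naming, not copied from this PR):

```xml
<!-- Before: vanilla Thrift RPC dependencies
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-service-rpc</artifactId>
</dependency>
-->
<!-- After: single relocated artifact (coordinates are an assumption) -->
<dependency>
  <groupId>org.apache.kyuubi</groupId>
  <artifactId>kyuubi-relocated-hive-service-rpc</artifactId>
  <version>0.2.0</version>
</dependency>
```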

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

Passes all Hive unit tests with Hive 3.1.3, and integration tests with Hive 3.1.3 and 2.3.9 (also tested with 2.1.1-cdh6.3.2).

---

# Checklists
## 📝 Author Self Checklist

- [ ] My code follows the [style guidelines](https://kyuubi.readthedocs.io/en/master/contributing/code/style.html) of this project
- [x] I have performed a self-review
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

## 📝 Committer Pre-Merge Checklist

- [x] Pull request title is okay.
- [x] No license issues.
- [x] Milestone correctly set?
- [x] Test coverage is ok
- [x] Assignees are selected.
- [x] Minimum number of approvals
- [x] No changes are requested

**Be nice. Be informative.**

Closes #5783 from pan3793/rpc-shaded.

Closes #5783

b45d4deaa [Cheng Pan] remove staging repo
890076a20 [Cheng Pan] Kyuubi Shaded 0.2.0 RC0
071945d45 [Cheng Pan] Rebase
199794ed9 [Cheng Pan] fix
fc128b170 [Cheng Pan] fix
26d313896 [Cheng Pan] fix
632984c92 [Cheng Pan] fix
428305589 [Cheng Pan] fix
6301e28fd [Cheng Pan] fix
955cdb33b [Cheng Pan] Switch to kyuubi-shaded-hive-service-rpc

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2023-12-07 19:55:10 +08:00


Apache Kyuubi

Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

What is Kyuubi?

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
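Because Kyuubi speaks the HiveServer2 Thrift protocol, any Hive-compatible JDBC client can connect; only the connection URL changes. A minimal sketch of assembling such a URL (host and database are placeholders; 10009 is Kyuubi's default frontend port):

```scala
// Build a HiveServer2-compatible JDBC URL pointing at a Kyuubi server.
// Host/database here are placeholders, not real endpoints.
def kyuubiJdbcUrl(host: String, port: Int = 10009, db: String = "default",
                  sessionConf: Map[String, String] = Map.empty): String = {
  val confPart =
    if (sessionConf.isEmpty) ""
    else sessionConf.map { case (k, v) => s"$k=$v" }.mkString(";", ";", "")
  s"jdbc:hive2://$host:$port/$db$confPart"
}

println(kyuubiJdbcUrl("kyuubi.example.com"))
println(kyuubiJdbcUrl("kyuubi.example.com", db = "sales",
  sessionConf = Map("spark.sql.shuffle.partitions" -> "64")))
```

The same URL works with beeline, the Kyuubi Hive JDBC driver, or any BI tool that bundles a Hive JDBC driver.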

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and possibly other engines soon) and to let users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, only SQL. Sometimes even SQL skills are unnecessary, such as when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there are two roles: system administrators and end-users.

  • System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: Focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.

HiveServer2 can identify and authenticate a caller, and then, if the caller also has permissions for the YARN queue and HDFS files, the request succeeds; otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access for individual callers, since the whole system runs as a single user. On the other hand, the Thrift Server is coupled to the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or apply high-availability techniques such as load balancing, as it is stateful.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.
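The multi-tenant model above can be thought of as the server maintaining at most one engine per user (or share key), rather than one engine for everyone as in STS. A deliberately simplified sketch of that lookup-or-launch behavior (the names and the user-level share policy are illustrative, not Kyuubi's actual API; real Kyuubi supports more share levels and launches engines on a cluster manager):

```scala
// Toy illustration of user-level engine sharing: each user gets a dedicated
// engine, launched on first use and reused afterwards. Launching is modeled
// as allocating an id; real engines would be Spark applications.
import scala.collection.mutable

final case class Engine(owner: String, id: Int)

class EnginePool {
  private val engines = mutable.Map.empty[String, Engine]
  private var nextId = 0

  def engineFor(user: String): Engine =
    engines.getOrElseUpdate(user, { nextId += 1; Engine(user, nextId) })
}

val pool = new EnginePool
val a1 = pool.engineFor("alice")
val a2 = pool.engineFor("alice") // reused, no new engine launched
val b  = pool.engineFor("bob")   // isolated engine for a different user
println(s"alice reuses engine: ${a1 eq a2}, bob id: ${b.id}")
```

Because engines run in processes separate from the server, the failure of one user's engine does not take down the gateway or other tenants' engines.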

DataLake/Lakehouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.

The Kyuubi Ecosystem(present and future)

The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development, and others would not be possible without your help.

Online Documentation

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

Project & Community Status

Aside

The project took its name from a character in the popular Japanese manga Naruto. The character, named Kyuubi Kitsune/Kurama, is a nine-tailed fox from mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.