
Apache Kyuubi (Incubating)

Kyuubi logo


What is Kyuubi?

Kyuubi is a distributed multi-tenant Thrift JDBC/ODBC server for large-scale data management, processing, and analytics, built on top of Apache Spark and designed to support more engines (e.g., Flink). NetEase open-sourced it in 2018. We aim to make Kyuubi an "out-of-the-box" tool for data warehouses and data lakes.

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface that lets end-users manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs of using Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines gives administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, and so on.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
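Because the interface is HiveServer2-compatible, a standard Hive JDBC client can talk to Kyuubi. The sketch below illustrates this; the host, port (10009 is Kyuubi's default frontend port, but it is configurable), user name, and database are assumptions, and the Hive JDBC driver (or the bundled kyuubi-hive-jdbc driver) must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class KyuubiJdbcExample {

    // Build a HiveServer2-compatible JDBC URL; Kyuubi's frontend port
    // defaults to 10009 but is configurable per deployment.
    static String buildJdbcUrl(String host, int port, String database) {
        return String.format("jdbc:hive2://%s:%d/%s", host, port, database);
    }

    public static void main(String[] args) {
        String url = buildJdbcUrl("localhost", 10009, "default");
        // The connection attempt is expected to fail when no Kyuubi server
        // (or JDBC driver) is available, hence the catch block.
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        } catch (SQLException e) {
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```

From the client's point of view this is indistinguishable from connecting to HiveServer2, which is what makes migration straightforward.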

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and possibly other engines soon) and to let users handle big data like ordinary data. Here, "anyone" means that users do not need a Spark technical background, only SQL. Sometimes even SQL skills are unnecessary, for example when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In a typical big data production environment with Kyuubi, there are two kinds of users: system administrators and end-users.

  • System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues; with queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by allowing different permissions to be set for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.

HiveServer2 can identify and authenticate a caller; if the caller also has permission on the YARN queue and HDFS files, the request succeeds, otherwise it fails. However, on the one hand, STS is a single Spark application: the user and queue to which STS belongs are fixed at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access per caller, since the whole system runs as a single user. On the other hand, the Thrift Server is coupled with the Spark driver's JVM process. This coupled architecture puts server stability at high risk and, because the server is stateful, makes it unable to handle high client concurrency or support high-availability techniques such as load balancing.

Kyuubi extends the use of STS in a multi-tenant model behind a unified interface, relying on the concept of multi-tenancy when interacting with cluster managers to gain resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.
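As an illustration of how an administrator might tune this multi-tenant model, Kyuubi exposes an engine share level setting in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`. The fragment below is a sketch; treat the exact key names and values as assumptions to verify against the configuration reference for your Kyuubi version:

```properties
# Share one engine per user; other levels include CONNECTION, GROUP and SERVER,
# trading isolation against resource sharing.
kyuubi.engine.share.level=USER

# Recycle an idle engine after 30 minutes to release cluster resources.
kyuubi.session.engine.idle.timeout=PT30M
```

With a per-user share level, each tenant's queries run in an engine launched as that user, so the cluster manager's existing ACLs still apply.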

DataLake/LakeHouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.
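Since `spark.*` entries in Kyuubi's configuration are passed through to the Spark engines, switching cluster managers is essentially a configuration change. The fragment below is a hypothetical sketch; the API server address and container image are placeholders:

```properties
# Run Spark engines on Kubernetes (the API server address is a placeholder).
spark.master=k8s://https://kubernetes-api-host:6443
spark.kubernetes.container.image=apache/spark:3.2.1

# Or keep engines on YARN instead:
# spark.master=yarn
```

The Kyuubi server itself stays where it is; only the engines move to the chosen cluster manager.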

The Kyuubi Ecosystem(present and future)

The figure below shows our vision for the Kyuubi ecosystem. Some pieces have been realized, some are in development, and others would not be possible without your help.

Online Documentation

Since Kyuubi 1.3.0-incubating, the Kyuubi online documentation has been hosted at https://kyuubi.apache.org/. You can find the latest Kyuubi documentation there. For 1.2 and earlier versions, please check Read the Docs directly.

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

Contributor over time

Aside

The project takes its name from a character in the popular Japanese manga Naruto. The character, named Kyuubi Kitsune/Kurama, is a nine-tailed fox from mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.