# 🔍 Description

## Issue References 🔗

This pull request fixes #6034

## Describe Your Solution 🔧

Currently, when Beeline connects to the Kyuubi server in HA mode, the only selection strategy is random, which can lead to a high load on a single machine. This PR adds support for choosing the selection strategy.

[Description]
First, note that Beeline connects to the Kyuubi server through kyuubi-hive-jdbc, which is isolated from the Kyuubi cluster, so the existing code can only pick a server host at random from the ZooKeeper node /${namespace}. Because kyuubi-hive-jdbc is a stateless module that runs once per connection, it cannot keep any state about previously selected server hosts.

[Solution]
This PR introduces an interface named ChooseServerStrategy for selecting the server host, with two built-in implementations:

1. poll: creates a ZooKeeper node named ${namespace}-counter. Each time a Beeline client connects to the Kyuubi server, the counter is incremented by one, and the server host is picked by taking the counter modulo the host list size (counter % serverHosts.size), so hosts are selected in order.
2. random: picks a server host at random from serverHosts.
3. User-defined class: implement ChooseServerStrategy and put the jar into beeline-jars; your own strategy will then be used to choose the server host.

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Tested the strategies in my test cluster.

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

[Use Case]
1. poll: `bin/beeline -u 'jdbc:hive2://xxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;zooKeeperStrategy=poll?spark.yarn.queue=root.kylin;spark.app.name=testspark;spark.shuffle.useOldFetchProtocol=true' -n mfw_hadoop --verbose=true --showNestedErrs=true`
2. random: `bin/beeline -u 'jdbc:hive2://xxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;zooKeeperStrategy=random?spark.yarn.queue=root.kylin;spark.app.name=testspark;spark.shuffle.useOldFetchProtocol=true' -n mfw_hadoop --verbose=true --showNestedErrs=true` or `bin/beeline -u 'jdbc:hive2://xxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi?spark.yarn.queue=root.kylin;spark.app.name=testspark;spark.shuffle.useOldFetchProtocol=true' -n mfw_hadoop --verbose=true --showNestedErrs=true`
3. YourStrategy: `bin/beeline -u 'jdbc:hive2://xxx:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;zooKeeperStrategy=xxx.xxx.xxx.XxxChooseServerStrategy?spark.yarn.queue=root.kylin;spark.app.name=testspark;spark.shuffle.useOldFetchProtocol=true' -n mfw_hadoop --verbose=true --showNestedErrs=true`

[Result: the cluster has two servers (221, 233)]
1. poll:
   1.1. zkNode: counterValue
   1.2. result:
2. random:
3. YourStrategy (the test case only gets the first server host):

#### Related Unit Tests

There are no unit tests.

---

# Checklist 📝

- [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6213 from davidyuan1223/ha_zk_support_more_strategy.
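To make the selection mechanism described in the [Solution] section concrete, here is a minimal, hypothetical Java sketch of the two built-in strategies. The class names `PollingChooseStrategy` and `RandomChooseStrategy` are illustrative (later commits in this PR rename the interface to `ServerSelectStrategy`), and an in-process `AtomicInteger` stands in for the shared `${namespace}-counter` ZooKeeper node that the real polling strategy uses, since the stateless JDBC driver cannot keep the counter itself:

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the strategy interface described above.
interface ChooseServerStrategy {
    String chooseServer(List<String> serverHosts);
}

// "poll": round-robin selection. In the PR the shared counter lives in a
// ZooKeeper node named ${namespace}-counter; an AtomicInteger stands in here.
class PollingChooseStrategy implements ChooseServerStrategy {
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public String chooseServer(List<String> serverHosts) {
        // counter % serverHosts.size yields an ordered, rotating pick.
        return serverHosts.get(counter.getAndIncrement() % serverHosts.size());
    }
}

// "random": the pre-existing behavior, a uniform random pick.
class RandomChooseStrategy implements ChooseServerStrategy {
    private final Random random = new Random();

    @Override
    public String chooseServer(List<String> serverHosts) {
        return serverHosts.get(random.nextInt(serverHosts.size()));
    }
}

public class ChooseServerStrategyDemo {
    public static void main(String[] args) {
        List<String> hosts = List.of("host-221:10009", "host-233:10009");
        ChooseServerStrategy polling = new PollingChooseStrategy();
        // Successive connections alternate between the two hosts.
        System.out.println(polling.chooseServer(hosts));
        System.out.println(polling.chooseServer(hosts));
    }
}
```

A user-defined strategy (option 3 above) would similarly implement this interface; the driver can then load it by the fully qualified class name given in the JDBC URL from a jar placed in beeline-jars.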
Closes #6034

961d3e989 [Bowen Liang] rename ServerStrategyFactory to ServerSelectStrategyFactory
353f94059 [Bowen Liang] repeat
8822ad471 [Bowen Liang] repeat
619339402 [Bowen Liang] nit
e94f9e909 [Bowen Liang] nit
40f427ae5 [Bowen Liang] rename StrategyFactory to StrategyFactoryServerStrategyFactory
7668f99cc [Bowen Liang] test name
e194ea62f [Bowen Liang] remove ZooKeeperHiveClientException from method signature of chooseServer
265965e5d [Bowen Liang] polling
b39c56700 [Bowen Liang] style
1ab79b494 [Bowen Liang] strategyName
8f8ca28f2 [Bowen Liang] nit
228bf1091 [Bowen Liang] rename parameter zooKeeperStrategy to serverSelectStrategy
125c82358 [Bowen Liang] rename ChooseServerStrategy to ServerSelectStrategy
b4aeb3dbd [Bowen Liang] repeat testing on pollingChooseStrategy
465548005 [davidyuan] update
09a84f1f9 [david yuan] remove the distirbuted lock
93f4a2699 [davidyuan] remove reset
7b0c1b811 [davidyuan] fix var not valid and counter getAndIncrement
c95382a23 [davidyuan] fix var not valid and counter getAndIncrement
9ed2cac85 [david yuan] remove test comment
8eddd7682 [davidyuan] Add Strategy Unit Test Case and fix the polling strategy counter begin with 0
73952f878 [davidyuan] Kyuubi Server HA&ZK get server from serverHosts support more strategy
97b959776 [davidyuan] Kyuubi Server HA&ZK get server from serverHosts support more strategy
ee5a9ad68 [davidyuan] Kyuubi Server HA&ZK get server from serverHosts support more strategy
6a0445357 [davidyuan] Kyuubi Server HA&ZK get server from serverHosts support more strategy
1892f148d [davidyuan] add common method to get session level config
7c0c6058d [yuanfuyuan] fix_4186

Lead-authored-by: Bowen Liang <liangbowen@gf.com.cn>
Co-authored-by: davidyuan <yuanfuyuan@mafengwo.com>
Co-authored-by: davidyuan <davidyuan1223@gmail.com>
Co-authored-by: david yuan <davidyuan1223@gmail.com>
Co-authored-by: yuanfuyuan <1406957364@qq.com>
Signed-off-by: Bowen Liang <liangbowen@gf.com.cn>
# Apache Kyuubi
Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.
## What is Kyuubi?
Kyuubi provides a pure SQL gateway through Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark at the client side. At the server-side, Kyuubi server and engines' multi-tenant architecture provides the administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.
- A HiveServer2-like API
- Multi-tenant Spark Support
- Running Spark in a serverless way
## Target Users
Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and perhaps other engines soon) and to help users handle big data like ordinary data. Here, "anyone" means that users do not need a Spark technical background, only a human language: SQL. Sometimes even SQL skills are unnecessary, as when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.
In typical big data production environments with Kyuubi, there should be system administrators and end-users.
- System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
- End-users: focus on their own business data, not where it is stored or how it is computed.
Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.
## Usage scenarios
### Port workloads from HiveServer2 to Spark SQL
In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues; with Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.
Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 and get even better performance.
HiveServer2 can identify and authenticate a caller, and then if the caller also has permissions for the YARN queue and HDFS files, it succeeds. Otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing or control the access for callers by the single user inside the whole system. On the other hand, the Thrift Server is coupled in the Spark driver's JVM process. This coupled architecture puts a high risk on server stability and makes it unable to handle high client concurrency or apply high availability such as load balancing as it is stateful.
Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers to finally gain the ability of resources sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engine dramatically improves the client concurrency and service stability of the service itself.
### DataLake/Lakehouse Support
The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.
- Logical View support via Kyuubi DataLake Metadata APIs
- Multiple Catalogs support
- SQL Standard Authorization support for DataLake (coming)
### Cloud Native Support
Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.
## The Kyuubi Ecosystem (present and future)
The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development, and others would not be possible without your help.
## Online Documentation
## Quick Start
Ready? Getting Started with Kyuubi.
## Contributing
## Project & Community Status
## Aside
The project took its name from a character in the popular Japanese manga Naruto.
The character is named Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology.
Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark.
Its nine tails stand for end-to-end multi-tenancy support of this project.


