fwang12 94c72734ca [KYUUBI #4767] Correct the submit time for BatchJobSubmission and check applicationInfo if submitted application
### _Why are the changes needed?_

- If the Kyuubi instance is unreachable, we should not use the batch metadata create time as the batch submit time.
  - We should always wait for the Kyuubi instance to recover.
  - A fake submit time is used here to prevent the batch from being marked as terminated when the application state is NOT_FOUND.
- Inside BatchJobSubmission, use the time of the first application-info lookup as the batch submit time.

This PR also records whether the batch operation actually submitted the batch application.

If it did submit the application but the app failed to start, the batch state needs to be marked as ERROR.
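The submit-time and final-state handling described above can be sketched as follows. This is an illustrative sketch only: the class name, method names, and the `-1` sentinel are assumptions, not Kyuubi's actual internals.

```java
// Illustrative sketch of the submit-time handling described above.
// Class/method names and the -1 sentinel are assumptions, not Kyuubi internals.
public class BatchSubmitTime {
    // Fake submit time used while the owning Kyuubi instance is unreachable,
    // so a NOT_FOUND application state does not terminate the batch early.
    public static final long FAKE_SUBMIT_TIME = -1L;

    private long submitTime = FAKE_SUBMIT_TIME;
    private boolean appSubmitted = false;

    // Record the real submit time at the first application-info lookup;
    // later lookups keep the recorded value.
    public long firstApplicationInfoTime(long now) {
        if (submitTime == FAKE_SUBMIT_TIME) {
            submitTime = now;
        }
        return submitTime;
    }

    // Remember that this operation actually submitted the batch application.
    public void markSubmitted() {
        appSubmitted = true;
    }

    // If we submitted the application but it never started, end in ERROR.
    public String finalState(boolean appStarted) {
        return (appSubmitted && !appStarted) ? "ERROR" : "FINISHED";
    }
}
```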
### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [x] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #4767 from turboFei/submit_time.

Closes #4767

9d4df0f91 [fwang12] save
3e56a39cb [fwang12] runtime exception -> kyuubi exception
5cac15ec5 [fwang12] nit
92d5000be [fwang12] nit
3678f8f2c [fwang12] wait the app to monitor
d51fb2636 [fwang12] save
708ad20ce [fwang12] refactor
98d49c64e [fwang12] wait
1adbefd59 [fwang12] revert
f3e4f2a11 [fwang12] wait app id ready before monitoring
a3bfe6f56 [fwang12] check app started
7530b5118 [fwang12] check submit app and final state
a41e81d0e [fwang12] refactor
e4217da03 [fwang12] _app start time
3d1e8f022 [fwang12] fake submit timeout
06c8f0a22 [fwang12] correct submit time

Authored-by: fwang12 <fwang12@ebay.com>
Signed-off-by: fwang12 <fwang12@ebay.com>
2023-04-27 10:25:01 +08:00


Apache Kyuubi

Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

What is Kyuubi?

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
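As a sketch of the HiveServer2-like interface, the snippet below builds a Kyuubi JDBC URL and shows how a query would be issued. The host and database here are assumptions; 10009 is Kyuubi's default Thrift binary frontend port. Actually running the query requires the Kyuubi Hive JDBC driver on the classpath and a live server.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Sketch of talking to Kyuubi over its HiveServer2-compatible JDBC endpoint.
// Host/database are assumptions; 10009 is the default Thrift binary port.
public class KyuubiJdbcSketch {
    public static String url(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    // Requires the Kyuubi Hive JDBC driver (kyuubi-hive-jdbc-shaded) on the
    // classpath and a running Kyuubi server; not invoked in this sketch.
    public static void runQuery() throws Exception {
        try (Connection conn =
                 DriverManager.getConnection(url("localhost", 10009, "default"));
             ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
```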

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and perhaps other engines soon) and to help users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, only a human language: SQL. Sometimes even SQL skills are unnecessary when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

  • System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.

HiveServer2 can identify and authenticate a caller, and then, if the caller also has permissions for the YARN queue and HDFS files, it succeeds; otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor can it control access for callers, since everything runs as the single user inside the whole system. On the other hand, the Thrift Server is coupled to the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or apply high-availability techniques such as load balancing, as it is stateful.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engine dramatically improves client concurrency and the stability of the service itself.

DataLake/Lakehouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of Cluster Managers, such as Hadoop YARN, Kubernetes, etc.
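For illustration, a minimal configuration fragment for pointing engines at a cluster manager might look like the sketch below. The property keys are Spark's own; the Kubernetes API server address and image name are placeholder assumptions.

```
# conf/kyuubi-defaults.conf -- illustrative fragment, values are assumptions
# Run engines on YARN:
spark.master=yarn
# Or on Kubernetes (placeholder API server address and image):
# spark.master=k8s://https://<k8s-apiserver>:6443
# spark.kubernetes.container.image=<your-spark-image>
```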

The Kyuubi Ecosystem (present and future)

The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some are in development, and others would not be possible without your help.

Online Documentation

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

Project & Community Status

Aside

The project took its name from a character in a popular Japanese manga, Naruto. The character is named Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology. Kyuubi spread the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.