Wang, Fei a1a08e7f93
[KYUUBI #7132] Respect kyuubi.session.engine.startup.waitCompletion for wait engine completion
### Why are the changes needed?

We should not fail the batch submission if the submit process is still alive and `kyuubi.session.engine.startup.waitCompletion` is false.

Especially for Spark on Kubernetes, the application might fail with the NOT_FOUND state if the spark-submit process runs longer than the submit timeout.

In this PR, if `kyuubi.session.engine.startup.waitCompletion` is false, the current timestamp is used as the submit time when getting the application info, preventing the application from failing with the NOT_FOUND state due to the submit timeout.
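For illustration, the behavior described above is driven by server-side configuration; a minimal `kyuubi-defaults.conf` sketch (the value here is an example, not a recommendation):

```properties
# kyuubi-defaults.conf (illustrative)
# Return from batch submission without waiting for the engine to complete;
# with this PR, the submit timeout no longer marks such apps as NOT_FOUND.
kyuubi.session.engine.startup.waitCompletion=false
```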

### How was this patch tested?

Passes existing GA checks and manual testing.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #7132 from turboFei/batch_submit.

Closes #7132

efb06db1c [Wang, Fei] refine
7e453c162 [Wang, Fei] refine
7bca1a7aa [Wang, Fei] Prevent potential timeout duration polling the application info
15529ab85 [Wang, Fei] prevent metadata manager fail
38335f2f9 [Wang, Fei] refine
9b8a9fde4 [Wang, Fei] comments
11f607daa [Wang, Fei] docs
f2f6ba148 [Wang, Fei] revert
2da0705ad [Wang, Fei] wait for if not wait complete
d84963420 [Wang, Fei] revert check in loop
b4cf50a49 [Wang, Fei] comments
8c262b7ec [Wang, Fei] refine
ecf379b86 [Wang, Fei] Revert conf change
60dc1676e [Wang, Fei] enlarge
4d0aa542a [Wang, Fei] Save
4aea96552 [Wang, Fei] refine
2ad75fcbf [Wang, Fei] nit
a71b11df6 [Wang, Fei] Do not fail batch if the process is alive

Authored-by: Wang, Fei <fwang12@ebay.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2025-07-14 01:49:06 +08:00

Kyuubi logo

Project - Documentation - Who's using

Apache Kyuubi

Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

What is Kyuubi?

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
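As a sketch of the end-user experience, any Hive-compatible JDBC client such as Beeline can connect to a running Kyuubi server; the host name below is a placeholder, and 10009 is Kyuubi's default frontend port:

```shell
# Connect to a Kyuubi server with Beeline ("kyuubi-host" is a placeholder host name).
bin/beeline -u 'jdbc:hive2://kyuubi-host:10009/' -n my_user

# Once connected, plain SQL runs on a Spark engine launched for the user, e.g.:
#   SELECT count(*) FROM my_table;
```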

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and possibly other engines soon) and to let users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, just one human language: SQL. Sometimes even SQL skills are unnecessary, such as when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

  • System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.

HiveServer2 can identify and authenticate a caller; if the caller also has permissions for the YARN queue and HDFS files, the request succeeds, otherwise it fails. However, on the one hand, STS is a single Spark application: the user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access for callers beyond the single user inside the whole system. On the other hand, the Thrift Server is coupled with the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or apply high-availability techniques such as load balancing, as it is stateful.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.

DataLake/Lakehouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN, Kubernetes, etc.
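As a hedged sketch, pointing a Spark engine at a Kubernetes cluster goes through standard Spark configuration passed along by Kyuubi; the master URL and image name below are placeholders:

```properties
# kyuubi-defaults.conf (illustrative; the URL and image are placeholders)
spark.master=k8s://https://kubernetes.example.com:6443
spark.kubernetes.container.image=my-registry/spark:latest
```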

The Kyuubi Ecosystem(present and future)

The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development, and others would not be possible without your help.

Online Documentation

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

Project & Community Status

Aside

The project took its name from a character in the popular Japanese manga Naruto. The character is named Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.