
Kyuubi


Kyuubi is a high-performance, universal JDBC and SQL execution engine built on top of Apache Spark. The goal of Kyuubi is to make it as easy for users to handle big data as ordinary data.

It provides a standardized JDBC interface for easy-to-use data access in big data scenarios. End-users can focus on developing their own business systems and mining data value without needing to be aware of the underlying big data platform (compute engines, storage services, metadata management, etc.).
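Because the server exposes a HiveServer2-compatible interface, a standard Hive JDBC client can talk to it. The sketch below is a minimal illustration, not from the Kyuubi docs: the host, credentials, and the port (10009 is assumed as the default frontend port) are placeholders, and the Hive JDBC driver must be on the classpath for the connection to work.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class KyuubiJdbcExample {

    // Builds a HiveServer2-style JDBC URL; Kyuubi speaks the same
    // Thrift protocol, so the standard Hive JDBC driver can be used.
    static String url(String host, int port, String database) {
        return String.format("jdbc:hive2://%s:%d/%s", host, port, database);
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder endpoint: adjust host/port/user for your deployment.
        String jdbcUrl = url("localhost", 10009, "default");
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

From the application's point of view this is plain JDBC; nothing Kyuubi-specific leaks into the client code, which is the point of the standardized interface.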

Kyuubi relies on Apache Spark to provide high-performance data query capabilities, so every improvement in the engine's capabilities can give Kyuubi's performance a qualitative leap. In addition, Kyuubi improves ad-hoc responsiveness through engine caching, and enhances concurrency through horizontal scaling and load balancing. It provides complete authentication and authorization services to ensure data and metadata security. It provides robust high availability and load balancing to help you meet your SLA commitments. It provides a two-level elastic resource management architecture to effectively improve resource utilization while covering the performance and response requirements of all scenarios, from interactive and batch processing to point queries and full table scans. Finally, Kyuubi embraces Spark and builds an ecosystem on top of it, which allows it to quickly expand the existing ecosystem and introduce new features, such as cloud-native support and Data Lake/Lakehouse support.

Kyuubi's vision is to build on top of Apache Spark and Data Lake technologies to unify the portal and become an ideal data lake management platform. It can support data processing e.g. ETL, and analytics e.g. BI in a pure SQL way. All workloads can be done on one platform, using one copy of data, with one SQL interface.

Online Documentation

Since Kyuubi 1.0.0, the Kyuubi online documentation has been hosted on https://readthedocs.org/, where you can find the documentation for each specific version of Kyuubi.

For 0.8 and earlier versions, please check the project docs folder directly.

Quick Start

Ready? Getting Started with Kyuubi.

Contributing

All bits of help are welcome. You can make various types of contributions to Kyuubi, including but not limited to the following:

  • Help new users in the chat channel or share your success stories with us on Gitter
  • Improve the documentation
  • Test releases
  • Improve test coverage
  • Report bugs and help developers reproduce them
  • Review changes
  • Make pull requests
  • Promote the project to others
  • Star the project if you like it

Aside

The project took its name from a character in the popular Japanese manga Naruto. The character is named Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology. Kyuubi spreads the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.