<!--
Thanks for sending a pull request!

Here are some tips for you:
  1. If this is your first time, please read our contributor guidelines: https://kyuubi.readthedocs.io/en/latest/community/contributions.html
  2. If the PR is related to an issue in https://github.com/NetEase/kyuubi/issues, add '[KYUUBI #XXXX]' in your PR title, e.g., '[KYUUBI #XXXX] Your PR title ...'.
  3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][KYUUBI #XXXX] Your PR title ...'.
-->

### _Why are the changes needed?_
<!--
Please clarify why the changes are needed. For instance,
  1. If you add a feature, you can talk about the use case of it.
  2. If you fix a bug, you can clarify why it is a bug.
-->

`OperationLog` is used to divert SQL statement-specific logs to a temporary local file on the engine side so that they can be fetched by the client side (e.g. beeline). It is an `InheritableThreadLocal` variable bound to the thread in which the `SparkSession` instance lives. Inside Spark, many threads are created via the Java `Executors`, and the `OperationLog` cannot be inherited by them, so the client side lacks information on how its jobs are running. This is not user-friendly, especially for queries that contain many jobs.

In this PR, we add a new `SparkListener` for SQL operations, which diverts the SQL statement-specific logs by `spark.jobGroup.id`.
The listener will dump Job/Stage and Task summaries to the `OperationLog`.

### _How was this patch tested?_

- [x] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [x] [Run test](https://kyuubi.readthedocs.io/en/latest/tools/testing.html#running-tests) locally before making a pull request

```log
2021-05-08 10:25:54.301 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Job 3 started with 4 stages, 1 active jobs running
2021-05-08 10:25:54.303 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Stage 6 started with 5 tasks, 1 active stages running
2021-05-08 10:25:54.347 INFO kyuubi.SQLOperationListener: Finished stage: Stage(6, 0); Name: 'collect at ExecuteStatement.scala:82'; Status: succeeded; numTasks: 5; Took: 44 msec
2021-05-08 10:25:54.347 INFO scheduler.StatsReportListener: task runtime:(count: 5, mean: 36.400000, stdev: 1.019804, max: 38.000000, min: 35.000000)
2021-05-08 10:25:54.347 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.347 INFO scheduler.StatsReportListener: 35.0 ms 35.0 ms 35.0 ms 36.0 ms 36.0 ms 37.0 ms 38.0 ms 38.0 ms 38.0 ms
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: shuffle bytes written:(count: 5, mean: 59.000000, stdev: 0.000000, max: 59.000000, min: 59.000000)
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: fetch wait time:(count: 5, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.348 INFO scheduler.StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: remote bytes read:(count: 5, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: task result size:(count: 5, mean: 1965.000000, stdev: 0.000000, max: 1965.000000, min: 1965.000000)
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 1965.0 B 1965.0 B 1965.0 B 1965.0 B 1965.0 B 1965.0 B 1965.0 B 1965.0 B 1965.0 B
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 5, mean: 82.948068, stdev: 1.278517, max: 84.210526, min: 80.555556)
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.349 INFO scheduler.StatsReportListener: 81 % 81 % 81 % 83 % 83 % 84 % 84 % 84 % 84 %
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: fetch wait time pct: (count: 5, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: other time pct: (count: 5, mean: 17.051932, stdev: 1.278517, max: 19.444444, min: 15.789474)
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.350 INFO scheduler.StatsReportListener: 16 % 16 % 16 % 16 % 17 % 17 % 19 % 19 % 19 %
2021-05-08 10:25:54.357 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Stage 7 started with 300 tasks, 1 active stages running
2021-05-08 10:25:54.644 INFO kyuubi.SQLOperationListener: Finished stage: Stage(7, 0); Name: 'collect at ExecuteStatement.scala:82'; Status: succeeded; numTasks: 300; Took: 287 msec
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: task runtime:(count: 300, mean: 15.203333, stdev: 9.396559, max: 45.000000, min: 4.000000)
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: 4.0 ms 5.0 ms 6.0 ms 8.0 ms 13.0 ms 18.0 ms 31.0 ms 38.0 ms 45.0 ms
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: shuffle bytes written:(count: 300, mean: 56.030000, stdev: 0.298496, max: 59.000000, min: 56.000000)
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.645 INFO scheduler.StatsReportListener: 56.0 B 56.0 B 56.0 B 56.0 B 56.0 B 56.0 B 56.0 B 56.0 B 59.0 B
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: fetch wait time:(count: 300, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: remote bytes read:(count: 300, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.646 INFO scheduler.StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
2021-05-08 10:25:54.647 INFO scheduler.StatsReportListener: task result size:(count: 300, mean: 2865.150000, stdev: 9.371633, max: 2906.000000, min: 2863.000000)
2021-05-08 10:25:54.647 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.647 INFO scheduler.StatsReportListener: 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB 2.8 KiB
2021-05-08 10:25:54.648 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 300, mean: 66.577940, stdev: 18.397778, max: 95.121951, min: 10.526316)
2021-05-08 10:25:54.648 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.648 INFO scheduler.StatsReportListener: 11 % 33 % 40 % 56 % 71 % 81 % 87 % 88 % 95 %
2021-05-08 10:25:54.649 INFO scheduler.StatsReportListener: fetch wait time pct: (count: 300, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.649 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.649 INFO scheduler.StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
2021-05-08 10:25:54.650 INFO scheduler.StatsReportListener: other time pct: (count: 300, mean: 33.422060, stdev: 18.397778, max: 89.473684, min: 4.878049)
2021-05-08 10:25:54.650 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.650 INFO scheduler.StatsReportListener: 5 % 12 % 13 % 19 % 29 % 44 % 60 % 67 % 89 %
2021-05-08 10:25:54.650 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Stage 8 started with 1 tasks, 1 active stages running
2021-05-08 10:25:54.733 INFO kyuubi.SQLOperationListener: Finished stage: Stage(8, 0); Name: 'collect at ExecuteStatement.scala:82'; Status: succeeded; numTasks: 1; Took: 87 msec
2021-05-08 10:25:54.733 INFO scheduler.StatsReportListener: task runtime:(count: 1, mean: 81.000000, stdev: 0.000000, max: 81.000000, min: 81.000000)
2021-05-08 10:25:54.733 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.733 INFO scheduler.StatsReportListener: 81.0 ms 81.0 ms 81.0 ms 81.0 ms 81.0 ms 81.0 ms 81.0 ms 81.0 ms 81.0 ms
2021-05-08 10:25:54.734 INFO scheduler.StatsReportListener: shuffle bytes written:(count: 1, mean: 59.000000, stdev: 0.000000, max: 59.000000, min: 59.000000)
2021-05-08 10:25:54.734 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.734 INFO scheduler.StatsReportListener: 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B 59.0 B
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: remote bytes read:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: task result size:(count: 1, mean: 3065.000000, stdev: 0.000000, max: 3065.000000, min: 3065.000000)
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB 3.0 KiB
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 95.061728, stdev: 0.000000, max: 95.061728, min: 95.061728)
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.735 INFO scheduler.StatsReportListener: 95 % 95 % 95 % 95 % 95 % 95 % 95 % 95 % 95 %
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: other time pct: (count: 1, mean: 4.938272, stdev: 0.000000, max: 4.938272, min: 4.938272)
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.736 INFO scheduler.StatsReportListener: 5 % 5 % 5 % 5 % 5 % 5 % 5 % 5 % 5 %
2021-05-08 10:25:54.739 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Stage 9 started with 200 tasks, 1 active stages running
2021-05-08 10:25:54.844 INFO kyuubi.SQLOperationListener: Finished stage: Stage(9, 0); Name: 'collect at ExecuteStatement.scala:82'; Status: succeeded; numTasks: 200; Took: 105 msec
2021-05-08 10:25:54.845 INFO scheduler.DAGScheduler: Job 3 finished: collect at ExecuteStatement.scala:82, took 0.547547 s
2021-05-08 10:25:54.845 INFO scheduler.StatsReportListener: task runtime:(count: 200, mean: 15.100000, stdev: 11.464729, max: 48.000000, min: 4.000000)
2021-05-08 10:25:54.845 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.845 INFO scheduler.StatsReportListener: 4.0 ms 6.0 ms 7.0 ms 8.0 ms 10.0 ms 16.0 ms 40.0 ms 43.0 ms 48.0 ms
2021-05-08 10:25:54.845 INFO scheduler.StatsReportListener: shuffle bytes written:(count: 200, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: fetch wait time:(count: 200, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: remote bytes read:(count: 200, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.846 INFO scheduler.StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
2021-05-08 10:25:54.847 INFO scheduler.StatsReportListener: task result size:(count: 200, mean: 2326.135000, stdev: 26.673897, max: 2414.000000, min: 2311.000000)
2021-05-08 10:25:54.847 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.847 INFO scheduler.StatsReportListener: 2.3 KiB 2.3 KiB 2.3 KiB 2.3 KiB 2.3 KiB 2.3 KiB 2.3 KiB 2.3 KiB 2.4 KiB
2021-05-08 10:25:54.847 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 200, mean: 2.971561, stdev: 8.036931, max: 47.727273, min: 0.000000)
2021-05-08 10:25:54.847 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 10 % 18 % 48 %
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: fetch wait time pct: (count: 200, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: other time pct: (count: 200, mean: 97.028439, stdev: 8.036931, max: 100.000000, min: 52.272727)
2021-05-08 10:25:54.848 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
2021-05-08 10:25:54.849 INFO scheduler.StatsReportListener: 52 % 82 % 90 % 100 % 100 % 100 % 100 % 100 % 100 %
2021-05-08 10:25:54.849 INFO kyuubi.SQLOperationListener: Query [3ac86686-f502-4f2d-9a98-d763e5b9548c]: Job 3 succeeded, 0 active jobs running
2021-05-08 10:25:54.857 INFO codegen.CodeGenerator: Code generated in 7.116325 ms
2021-05-08 10:25:54.858 INFO operation.ExecuteStatement: Processing kentyao's query[3ac86686-f502-4f2d-9a98-d763e5b9548c]: RUNNING_STATE -> FINISHED_STATE, statement: select /*+ REPARTITION(200) */ count(1) from (select /*+ REPARTITION(300, a) */ a from values(1),(1),(1),(2),(3) t(a)), time taken: 0.702 seconds
```

Closes #623 from yaooqinn/log2.

Closes #623

abeff64 [Kent Yao] Improve OperationLog to deliver detail stats of sql statement
fdf3b28 [Kent Yao] Improve OperationLog to deliver detail stats of sql statement
783a594 [Kent Yao] Improve OperationLog to deliver detail stats of sql statement

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
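To illustrate the idea behind the listener described above, per-statement log diversion keyed by a job-group id can be sketched roughly as follows. This is a hypothetical, simplified model, not Kyuubi's actual implementation; the class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: route engine-side log lines into per-statement buffers
// keyed by a job-group id. In Kyuubi, spark.jobGroup.id identifies the
// operation a Spark job belongs to, so a listener can tell which statement's
// OperationLog a Job/Stage/Task event should be written to.
class OperationLogRouter {
    private final Map<String, List<String>> logsByGroup = new HashMap<>();

    // Called when a scheduler event arrives, tagged with its job group id.
    void onEvent(String jobGroupId, String message) {
        // Only events carrying a group id are diverted into that statement's
        // buffer; everything else would stay in the regular engine log.
        logsByGroup.computeIfAbsent(jobGroupId, k -> new ArrayList<>()).add(message);
    }

    // Fetched by the client side (e.g. beeline) for one statement only.
    List<String> fetch(String jobGroupId) {
        return logsByGroup.getOrDefault(jobGroupId, List.of());
    }
}
```

The key property is isolation: a client fetching logs for one operation never sees scheduler events that belong to another session's query.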
## What is Kyuubi?
Kyuubi is a distributed multi-tenant Thrift JDBC/ODBC server for large-scale data management, processing, and analytics, built on top of Apache Spark and designed to support more engines (e.g., Flink). It has been open-sourced by NetEase since 2018. We aim to make Kyuubi an "out-of-the-box" tool for data warehouses and data lakes.
Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.
- A HiveServer2-like API
- Multi-tenant Spark Support
- Running Spark in a serverless way
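Because the API is HiveServer2-compatible, any Hive-compatible JDBC client can connect to Kyuubi. The sketch below only builds the connection URL; the host name is a placeholder, 10009 is Kyuubi's default frontend port, and the actual connection requires a Hive JDBC driver on the classpath, so it is outlined in comments only:

```java
// Hypothetical client-side sketch: a Kyuubi session is opened the same way
// as a HiveServer2 session, via a jdbc:hive2:// URL.
class KyuubiJdbcExample {
    // 10009 is Kyuubi's default frontend port; host and database are placeholders.
    static String kyuubiUrl(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        String url = kyuubiUrl("kyuubi.example.com", 10009, "default");
        System.out.println(url);
        // With org.apache.hive:hive-jdbc on the classpath, the query would
        // then run like any HiveServer2 statement (not executed here):
        // try (Connection conn = DriverManager.getConnection(url, "user", "");
        //      Statement stmt = conn.createStatement();
        //      ResultSet rs = stmt.executeQuery("SELECT 1")) { ... }
    }
}
```

The same URL works from `beeline`, which is why existing HiveServer2 clients can usually be pointed at Kyuubi without code changes.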
## Target Users
Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and perhaps other engines soon) and to let users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background; knowing SQL is enough. Sometimes even SQL skills are unnecessary, for example when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.
In a typical big data production environment with Kyuubi, there are two groups of users: system administrators and end-users.
- System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
- End-users: focused on their own business data, not on where it is stored or how it is computed.
Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.
## Usage scenarios
### Port workloads from HiveServer2 to Spark SQL
In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.
Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 and deliver even better performance.
HiveServer2 can identify and authenticate a caller, and the request succeeds if the caller also has permissions for the YARN queue and HDFS files; otherwise, it fails. However, on the one hand, STS is a single Spark application: the user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, or control access for individual callers, since the whole system runs as a single user. On the other hand, the Thrift server is coupled with the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or provide high availability such as load balancing, as it is stateful.
Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.
### DataLake/LakeHouse Support
The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.
- Logical View support via Kyuubi DataLake Metadata APIs
- Multiple Catalogs support
- SQL Standard Authorization support for DataLake (coming)
### Cloud Native Support
Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.
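For example, the cluster manager an engine runs on is selected through the Spark settings Kyuubi passes to the engines it launches. A hypothetical `kyuubi-defaults.conf` fragment might look like this (the Kubernetes API server address is a placeholder):

```properties
# Hypothetical kyuubi-defaults.conf fragment: Spark properties set here are
# passed to the engines Kyuubi launches, so the cluster manager is chosen
# with Spark's own spark.master setting.

# Run engines on Hadoop YARN ...
spark.master=yarn

# ... or on Kubernetes (placeholder API server address):
# spark.master=k8s://https://kubernetes.example.com:6443
```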
## The Kyuubi Ecosystem (present and future)
The figure below shows our vision for the Kyuubi ecosystem. Some of these components have been realized, some are in development, and others would not be possible without your help.
## Online Documentation
Since Kyuubi 1.0.0, the Kyuubi online documentation has been hosted on https://readthedocs.org/. You can find the documentation for each version of Kyuubi as listed below.
For 0.8 and earlier versions, please check the GitHub Pages directly.
## Quick Start
Ready? Getting Started with Kyuubi.
## Contributing
All bits of help are welcome. You can make various types of contributions to Kyuubi, including but not limited to the following:
- Help new users in the chat channel or share your success stories with us
- Improve Documentation
- Test releases
- Improve test coverage
- Report bugs and help developers reproduce them
- Review changes
- Make a pull request
- Promote to others
- Click the star button if you like this project
Before you start, we recommend that you check the Contribution Guidelines first.
## Aside
The project took its name from a character in a popular Japanese manga, Naruto.
The character is named Kyuubi Kitsune/Kurama, which is a nine-tailed fox in mythology.
Kyuubi spread the power and spirit of fire, which is used here to represent the powerful Apache Spark.
Its nine tails stand for end-to-end multi-tenancy support of this project.
## License
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.