

What is Kyuubi?

Kyuubi is a distributed multi-tenant Thrift JDBC/ODBC server for large-scale data management, processing, and analytics, built on top of Apache Spark and designed to support more engines (e.g., Flink). It was open-sourced by NetEase in 2018. We aim to make Kyuubi an "out-of-the-box" tool for data warehouses and data lakes.

Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines gives administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, and so on.

  • A HiveServer2-like API
  • Multi-tenant Spark Support
  • Running Spark in a serverless way
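
For illustration, end-users can reach a Kyuubi server with any HiveServer2-compatible JDBC client. The following is a minimal sketch in Java, assuming the Hive JDBC driver is on the classpath and the server listens on Kyuubi's default frontend port 10009; the host name, user, and query are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class KyuubiJdbcExample {
      public static void main(String[] args) throws Exception {
        // Placeholder host; 10009 is Kyuubi's default frontend Thrift port.
        String url = "jdbc:hive2://kyuubi-host:10009/default";
        try (Connection conn = DriverManager.getConnection(url, "bob", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
          while (rs.next()) {
            // Print the single column of each returned row.
            System.out.println(rs.getInt(1));
          }
        }
      }
    }

The same URL works from BI tools or any language with a Hive/Thrift client, so no Spark code or configuration is needed on the end-user side.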

Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and possibly other engines soon) and to help users handle big data as easily as ordinary data. Here, anyone means that users do not need a Spark technical background, only SQL. Sometimes even SQL skills are unnecessary, for example when Kyuubi is integrated with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

  • System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
  • End-users: Focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

Usage scenarios

Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services maintain access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues, and with queue ACLs it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a distributed SQL engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering better performance.

HiveServer2 can identify and authenticate a caller; if the caller also has permissions on the YARN queue and the HDFS files, the request succeeds, otherwise it fails. STS, however, is a single Spark application: the user and queue it belongs to are fixed at startup, so it cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor can it control access per caller, since the whole system runs as a single user. Moreover, the Thrift server is coupled with the Spark driver in the same JVM process. This coupled architecture puts server stability at risk and, because the server is stateful, makes it unable to handle high client concurrency or to apply high-availability techniques such as load balancing.

Kyuubi extends the use of STS in a multi-tenant model behind a unified interface, relying on multi-tenancy to interact with cluster managers and thereby gain resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves client concurrency and the stability of the service itself.

DataLake/LakeHouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

  • Logical View support via Kyuubi DataLake Metadata APIs
  • Multiple Catalogs support
  • SQL Standard Authorization support for DataLake (coming)

Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN, Kubernetes, etc.
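
As a concrete illustration, the deployment target of a Spark engine is typically selected through the standard spark.master setting, which can be placed, for example, in conf/kyuubi-defaults.conf. The snippet below is a minimal sketch under that assumption; the Kubernetes API server address and container image are placeholders.

    # Run Spark engines on Hadoop YARN
    spark.master=yarn

    # Or run them on Kubernetes instead (placeholders below)
    # spark.master=k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>
    # spark.kubernetes.container.image=<your-spark-image>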

The Kyuubi Ecosystem(present and future)

The figure below shows our vision for the Kyuubi ecosystem. Some parts have already been realized, some are in development, and others would not be possible without your help.

Online Documentation

Since Kyuubi 1.3.0-incubating, the Kyuubi online documentation has been hosted at https://kyuubi.apache.org/. You can find the documentation for each specific version of Kyuubi listed below.

For 1.2 and earlier versions, please check the GitHub Pages directly.

Quick Start

Ready? Getting Started with Kyuubi.
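
For a rough idea of the flow described there, here is a hedged sketch assuming a local binary distribution with JAVA_HOME and SPARK_HOME already configured; the Getting Started guide remains the authoritative reference, and the bundled beeline client and default port 10009 are stated as assumptions.

    # Start the Kyuubi server
    bin/kyuubi start

    # Connect with the bundled beeline (or any HiveServer2-compatible client)
    bin/beeline -u 'jdbc:hive2://localhost:10009/'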

Contributing

All bits of help are welcome. You can make various types of contributions to Kyuubi, including but not limited to the following:

  • Help new users in the chat channel (Gitter) or share your success stories with us
  • Improve the documentation
  • Test releases
  • Improve test coverage
  • Report bugs and help developers reproduce them
  • Review changes
  • Make a pull request
  • Promote the project to others
  • Click the star button if you like this project

Before you start, we recommend that you read the Contribution Guidelines.

Aside

The project took its name from a character in the popular Japanese manga Naruto: Kyuubi Kitsune/Kurama, a nine-tailed fox in mythology. Kyuubi spreads the power and spirit of fire, which here represents the powerful Apache Spark. Its nine tails stand for the end-to-end multi-tenancy support of this project.

License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.