[KYUUBI #390] Replace abbr in doc

![pan3793](https://badgen.net/badge/Hello/pan3793/green) [![Closes #390](https://badgen.net/badge/Preview/Closes%20%23390/blue)](https://github.com/yaooqinn/kyuubi/pull/390) ![20](https://badgen.net/badge/%2B/20/red) ![20](https://badgen.net/badge/-/20/green) ![1](https://badgen.net/badge/commits/1/yellow) ![Target Issue](https://badgen.net/badge/Missing/Target%20Issue/ff0000) ![Test Plan](https://badgen.net/badge/Missing/Test%20Plan/ff0000) [&#10088;?&#10089;](https://pullrequestbadge.com/?utm_medium=github&utm_source=yaooqinn&utm_campaign=badge_info)<!-- PR-BADGE: PLEASE DO NOT REMOVE THIS COMMENT -->

<!--
Thanks for sending a pull request!

Here are some tips for you:
  1. If this is your first time, please read our contributor guidelines: https://kyuubi.readthedocs.io/en/latest/community/contributions.html
  2. If the PR is related to an issue in https://github.com/yaooqinn/kyuubi/issues, add '[KYUUBI #XXXX]' in your PR title, e.g., '[KYUUBI #XXXX] Your PR title ...'.
  3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][KYUUBI #XXXX] Your PR title ...'.
-->

### _Why are the changes needed?_
<!--
Please clarify why the changes are needed. For instance,
  1. If you add a feature, you can talk about the use case of it.
  2. If you fix a bug, you can clarify why it is a bug.
-->

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [ ] [Run test](https://kyuubi.readthedocs.io/en/latest/tools/testing.html#running-tests) locally before making a pull request

Closes #390 from pan3793/doc.
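The change itself is mechanical: expand the "w/" and "w/o" abbreviations across the docs. A hedged sketch of that kind of substitution (the `sed` expressions and sample text below are illustrative only, not taken from this PR — note "w/o" must be handled before "w/" so the longer abbreviation is not half-replaced):

```shell
# Hypothetical illustration of the bulk substitution this PR applies:
# replace the longer abbreviation "w/o " first, then "w/ ".
printf 'Integration w/ Hive Metastore\nbuilt w/o tests\n' \
  | sed 's|w/o |without |g; s|w/ |with |g'
# prints:
#   Integration with Hive Metastore
#   built without tests
```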

4b7cdf2 [Cheng Pan] replace abbr in doc

Authored-by: Cheng Pan <379377944@qq.com>
Signed-off-by: Kent Yao <yao@apache.org>
Cheng Pan, 2021-03-03 22:04:32 +08:00, committed by Kent Yao
commit e49ff88305 (parent f83faa8356)
GPG Key ID: F7051850A0AF904D (no known key found for this signature in database)
10 changed files with 20 additions and 20 deletions

View File

````diff
@@ -53,7 +53,7 @@ Ready? [Getting Started](https://kyuubi.readthedocs.io/en/latest/quick_start/qui
 All bits of help are welcome. You can make various types of contributions to Kyuubi, including the following but not limited to,
-- Help new users in chat channel or share your success stories w/ us - [![Gitter](https://badges.gitter.im/kyuubi-on-spark/Lobby.svg)](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+- Help new users in chat channel or share your success stories with us - [![Gitter](https://badges.gitter.im/kyuubi-on-spark/Lobby.svg)](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 - Improve Documentation - [![Documentation Status](https://readthedocs.org/projects/kyuubi/badge/?version=latest)](https://kyuubi.readthedocs.io/en/latest/?badge=latest)
 - Test releases - [![GitHub release](https://img.shields.io/github/release/yaooqinn/kyuubi.svg)](https://github.com/yaooqinn/kyuubi/releases)
 - Improve test coverage - [![codecov](https://codecov.io/gh/yaooqinn/kyuubi/branch/master/graph/badge.svg)](https://codecov.io/gh/yaooqinn/kyuubi)
````

View File

````diff
@@ -4,7 +4,7 @@
 </div>
-# Integration w/ Hive Metastore
+# Integration with Hive Metastore
 In this section, you will learn how to configure Kyuubi to interact with Hive Metastore.
@@ -110,7 +110,7 @@ hive.metastore.uris | thrift://&lt;host&gt;:&lt;port&gt;,thrift://&lt;host1&gt;:
 ### Via kyuubi-defaults.conf
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, all _**Hive primitive configurations**_, e.g. `hive.metastore.uris`,
-and the **_Spark derivatives_**, which are prefixed w/ `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
+and the **_Spark derivatives_**, which are prefixed with `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
 will be loaded as Hive primitives by the Hive client inside the Spark application.
 Kyuubi will take these configurations as system wide defaults for all applications it launches.
````

View File

````diff
@@ -387,5 +387,5 @@ ___bob___.spark.master=spark://master:7077
 ___bob___.spark.executor.memory=8g
 ```
-In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster w/ 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
+In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
````

View File

````diff
@@ -45,7 +45,7 @@ Hive 2.3.7
 Build flags:
 ```
-To fix this problem you should export `JAVA_HOME` w/ a compatible one in `conf/kyuubi-env.sh`
+To fix this problem you should export `JAVA_HOME` with a compatible one in `conf/kyuubi-env.sh`
 ```bash
 echo "export JAVA_HOME=/path/to/jdk1.8.0_251" >> conf/kyuubi-env.sh
````

View File

````diff
@@ -26,7 +26,7 @@ have multiple reducer stages.
 ** Optimizer ** | Spark SQL Catalyst | Hive Optimizer
 ** Engine ** | up to Spark 3.x | MapReduce/[up to Spark 2.3](https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-VersionCompatibility)/Tez
 ** Performance ** | High | Low
-** Compatibility w/ Spark ** | Good | Bad(need to rebuild on a specific version)
+** Compatibility with Spark ** | Good | Bad(need to rebuild on a specific version)
 ** Data Types ** | [Spark Data Types](http://spark.apache.org/docs/latest/sql-ref-datatypes.html) | [Hive Data Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types)
````

View File

````diff
@@ -20,13 +20,13 @@ These are essential components required for Kyuubi to startup. For quick start d
 Components| Role | Optional | Version | Remarks
 --- | --- | --- | --- | ---
-Java | Java<br>Runtime<br>Environment | Required | 1.8 | Kyuubi is pre-built w/ Java 1.8
+Java | Java<br>Runtime<br>Environment | Required | 1.8 | Kyuubi is pre-built with Java 1.8
 Spark | Distribute<br>SQL<br>Engine | Optional | 3.0.x | By default Kyuubi is pre-built w/<br> a Apache Spark release inside at<br> `$KYUUBI_HOME/externals`
-HDFS | Distributed<br>File<br>System | Optional | referenced<br>by<br>Spark | Hadoop Distributed File System is a <br>part of Hadoop framework, used to<br> store and process the datasets.<br> You can interact w/ any<br> Spark-compatible versions of HDFS.
+HDFS | Distributed<br>File<br>System | Optional | referenced<br>by<br>Spark | Hadoop Distributed File System is a <br>part of Hadoop framework, used to<br> store and process the datasets.<br> You can interact with any<br> Spark-compatible versions of HDFS.
 Hive | Metastore | Optional | referenced<br>by<br>Spark | Hive Metastore for Spark SQL to connect
 Zookeeper | Service<br>Discovery | Optional | Any<br>zookeeper<br>ensemble<br>compatible<br>with<br>curator(2.7.1) | By default, Kyuubi provides a<br> embeded Zookeeper server inside for<br> non-production use.
-Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them w/ regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources w/ the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
+Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them with regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources with the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
 ## Installation
@@ -61,7 +61,7 @@ From top to bottom are:
 - LICENSE: the [APACHE LICENSE, VERSION 2.0](https://www.apache.org/licenses/LICENSE-2.0) we claim to obey.
 - RELEASE: the build information of this package
-- bin: the entry of the Kyuubi server w/ `kyuubi` as the startup script.
+- bin: the entry of the Kyuubi server with `kyuubi` as the startup script.
 - conf: all the defaults used by Kyuubi Server itself or creating session with Spark applications.
 - externals
 - engines: contains all kinds of SQL engines that we support, e.g. Apache Spark, Apache Flink(coming soon).
````

View File

````diff
@@ -12,7 +12,7 @@ As the fact that the user claims does not necessarily mean this is true.
 The authentication process of Kyuubi is used to verify the user identity that a client used to talk to the Kyuubi server.
 Once done, a trusted connection will be set up between the client and server if successful; otherwise, rejected.
-**Note** that, this authentication only authenticate whether a user can connect w/ Kyuubi server or not.
+**Note** that, this authentication only authenticate whether a user can connect with Kyuubi server or not.
 For other secured services that this user wants to interact with, he/she also needs to pass the authentication process of each service, for instance, Hive Metastore, YARN, HDFS.
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, specify `kyuubi.authentication` to one of the authentication types listing below.
@@ -33,7 +33,7 @@ kyuubi\.authentication<br>\.sasl\.qop|<div style='width: 80pt;word-wrap: break-w
 #### Using KERBEROS
-If you are deploying Kyuubi w/ a kerberized Hadoop cluster, it is strongly recommended that `kyuubi.authentication` should be set to `KERBEROS` too.
+If you are deploying Kyuubi with a kerberized Hadoop cluster, it is strongly recommended that `kyuubi.authentication` should be set to `KERBEROS` too.
 Kerberos is a network authentication protocol that provides the tools of authentication and strong cryptography over the network.
 The Kerberos protocol uses strong cryptography so that a client or a server can prove its identity to its server or client across an insecure network connection.
@@ -53,7 +53,7 @@ kyuubi\.kinit\.keytab|<div style='width: 80pt;word-wrap: break-word;white-space:
 For example,
-- Configure w/ Kyuubi service principal
+- Configure with Kyuubi service principal
 ```bash
 kyuubi.authentication=KERBEROS
 kyuubi.kinit.principal=spark/kyuubi.apache.org@KYUUBI.APACHE.ORG
@@ -65,7 +65,7 @@ kyuubi.kinit.keytab=/path/to/kyuuib.keytab
 $ ./bin/kyuubi start
 ```
-- Kinit w/ user principal and connect using beeline
+- Kinit with user principal and connect using beeline
 ```bash
 $ kinit -kt user.keytab user.principal
````

View File

````diff
@@ -7,8 +7,8 @@
 # Kinit Auxiliary Service
-In order to work w/ a kerberos-enabled cluster, Kyuubi provides this kinit auxiliary service.
-It will periodically re-kinit w/ to keep the Ticket Cache fresh.
+In order to work with a kerberos-enabled cluster, Kyuubi provides this kinit auxiliary service.
+It will periodically re-kinit with to keep the Ticket Cache fresh.
 ## Installing and Configuring the Kerberos Clients
@@ -31,7 +31,7 @@ Replace or configure `krb5.conf` to point to the KDC.
 ## Kerberos Ticket
 Kerberos client is aimed to generate a Ticket Cache file.
-Then, Kyuubi can use this Ticket Cache to authenticate w/ those kerberized services,
+Then, Kyuubi can use this Ticket Cache to authenticate with those kerberized services,
 e.g. HDFS, YARN, and Hive Metastore server, etc.
 A Kerberos ticket cache contains a service and a client principal names,
````

View File

````diff
@@ -14,7 +14,7 @@
 ./build/mvn clean package -DskipTests
 ```
-This results in the creation of all sub-modules of Kyuubi project w/o running any unit test.
+This results in the creation of all sub-modules of Kyuubi project without running any unit test.
 If you want to test it manually, you can start Kyuubi directly from the Kyuubi project root by running
@@ -32,7 +32,7 @@ build/mvn clean package -pl :kyuubi-common -DskipTests
 ## Skipping Some modules
-For instance, you can build the Kyuubi modules w/o Kyuubi Codecov and Assembly modules using:
+For instance, you can build the Kyuubi modules without Kyuubi Codecov and Assembly modules using:
 ```bash
 mvn clean install -pl '!:kyuubi-codecov,!:kyuubi-assembly' -DskipTests
````

View File

````diff
@@ -7,7 +7,7 @@
 # Debugging Kyuubi
 You can use the [Java Debug Wire Protocol](https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html#Plugin) to debug Kyuubi
-w/ your favorite IDE tool, e.g. Intellij IDEA.
+with your favorite IDE tool, e.g. Intellij IDEA.
 ## Debugging Server
````
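The last hunk touches the debugging guide, which points at the Java Debug Wire Protocol. As a hedged illustration of what "debug with your favorite IDE" means in practice (the `KYUUBI_JAVA_OPTS` variable name is an assumption for this sketch, not something stated in the PR; the `jdwp` options themselves are standard JVM agent flags):

```shell
# Hypothetical sketch: a standard JDWP agent flag for attaching an IDE
# debugger. Listens on port 5005 and does not suspend the JVM at startup.
# The env var name KYUUBI_JAVA_OPTS is illustrative only.
export KYUUBI_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
echo "$KYUUBI_JAVA_OPTS"
```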