[KYUUBI #390] Replace abbr in doc
[](https://github.com/yaooqinn/kyuubi/pull/390)

### _Why are the changes needed?_

Replace the "w/"-style abbreviations in the documentation with the full words for readability.

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.readthedocs.io/en/latest/tools/testing.html#running-tests) locally before making a pull request

Closes #390 from pan3793/doc.

4b7cdf2 [Cheng Pan] replace abbr in doc

Authored-by: Cheng Pan <379377944@qq.com>
Signed-off-by: Kent Yao <yao@apache.org>
This commit is contained in:

parent f83faa8356
commit e49ff88305
@@ -53,7 +53,7 @@ Ready? [Getting Started](https://kyuubi.readthedocs.io/en/latest/quick_start/qui
 All bits of help are welcome. You can make various types of contributions to Kyuubi, including the following but not limited to,
-- Help new users in chat channel or share your success stories w/ us - [](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+- Help new users in chat channel or share your success stories with us - [](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 - Improve Documentation - [](https://kyuubi.readthedocs.io/en/latest/?badge=latest)
 - Test releases - [](https://github.com/yaooqinn/kyuubi/releases)
 - Improve test coverage - [](https://codecov.io/gh/yaooqinn/kyuubi)
@@ -4,7 +4,7 @@
 </div>
-# Integration w/ Hive Metastore
+# Integration with Hive Metastore
 In this section, you will learn how to configure Kyuubi to interact with Hive Metastore.
@@ -110,7 +110,7 @@ hive.metastore.uris | thrift://<host>:<port>,thrift://<host1>:
 ### Via kyuubi-defaults.conf
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, all _**Hive primitive configurations**_, e.g. `hive.metastore.uris`,
-and the **_Spark derivatives_**, which are prefixed w/ `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
+and the **_Spark derivatives_**, which are prefixed with `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
 will be loaded as Hive primitives by the Hive client inside the Spark application.
 Kyuubi will take these configurations as system wide defaults for all applications it launches.
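The hunk above describes how Hive Metastore settings flow through `kyuubi-defaults.conf`. As an illustrative sketch (the thrift host and port below are placeholders, not text from this commit), each of the three spellings is loaded as the same Hive primitive:

```properties
# $KYUUBI_HOME/conf/kyuubi-defaults.conf
# Hive primitive form, passed to the Hive client as-is
hive.metastore.uris=thrift://metastore-host:9083
# Spark derivative forms, stripped of their prefix and loaded as the same primitive
spark.hive.metastore.uris=thrift://metastore-host:9083
spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083
```

In practice only one of these would be set; Kyuubi then applies it as a system-wide default for every engine application it launches.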
@@ -387,5 +387,5 @@ ___bob___.spark.master=spark://master:7077
 ___bob___.spark.executor.memory=8g
 ```
-In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster w/ 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
+In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
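For contrast with the `bob` settings shown in the hunk, the user-specific defaults for `kent` that the paragraph alludes to would look roughly like this in `kyuubi-defaults.conf` (a sketch inferred from the surrounding prose, not text from this commit):

```properties
# User-level overrides: kent runs on YARN with Spark AQE disabled
___kent___.spark.master=yarn
___kent___.spark.sql.adaptive.enabled=false
```

Keys wrapped in `___user___.` apply only to connections from that user; users without such entries fall back to the system defaults.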
@@ -45,7 +45,7 @@ Hive 2.3.7
 Build flags:
 ```
-To fix this problem you should export `JAVA_HOME` w/ a compatible one in `conf/kyuubi-env.sh`
+To fix this problem you should export `JAVA_HOME` with a compatible one in `conf/kyuubi-env.sh`
 ```bash
 echo "export JAVA_HOME=/path/to/jdk1.8.0_251" >> conf/kyuubi-env.sh
@@ -26,7 +26,7 @@ have multiple reducer stages.
 ** Optimizer ** | Spark SQL Catalyst | Hive Optimizer
 ** Engine ** | up to Spark 3.x | MapReduce/[up to Spark 2.3](https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-VersionCompatibility)/Tez
 ** Performance ** | High | Low
-** Compatibility w/ Spark ** | Good | Bad(need to rebuild on a specific version)
+** Compatibility with Spark ** | Good | Bad(need to rebuild on a specific version)
 ** Data Types ** | [Spark Data Types](http://spark.apache.org/docs/latest/sql-ref-datatypes.html) | [Hive Data Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types)
@@ -20,13 +20,13 @@ These are essential components required for Kyuubi to startup. For quick start d
 Components| Role | Optional | Version | Remarks
 --- | --- | --- | --- | ---
-Java | Java<br>Runtime<br>Environment | Required | 1.8 | Kyuubi is pre-built w/ Java 1.8
+Java | Java<br>Runtime<br>Environment | Required | 1.8 | Kyuubi is pre-built with Java 1.8
 Spark | Distribute<br>SQL<br>Engine | Optional | 3.0.x | By default Kyuubi is pre-built w/<br> a Apache Spark release inside at<br> `$KYUUBI_HOME/externals`
-HDFS | Distributed<br>File<br>System | Optional | referenced<br>by<br>Spark | Hadoop Distributed File System is a <br>part of Hadoop framework, used to<br> store and process the datasets.<br> You can interact w/ any<br> Spark-compatible versions of HDFS.
+HDFS | Distributed<br>File<br>System | Optional | referenced<br>by<br>Spark | Hadoop Distributed File System is a <br>part of Hadoop framework, used to<br> store and process the datasets.<br> You can interact with any<br> Spark-compatible versions of HDFS.
 Hive | Metastore | Optional | referenced<br>by<br>Spark | Hive Metastore for Spark SQL to connect
 Zookeeper | Service<br>Discovery | Optional | Any<br>zookeeper<br>ensemble<br>compatible<br>with<br>curator(2.7.1) | By default, Kyuubi provides a<br> embeded Zookeeper server inside for<br> non-production use.
-Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them w/ regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources w/ the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
+Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them with regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources with the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
 ## Installation
@@ -61,7 +61,7 @@ From top to bottom are:
 - LICENSE: the [APACHE LICENSE, VERSION 2.0](https://www.apache.org/licenses/LICENSE-2.0) we claim to obey.
 - RELEASE: the build information of this package
-- bin: the entry of the Kyuubi server w/ `kyuubi` as the startup script.
+- bin: the entry of the Kyuubi server with `kyuubi` as the startup script.
 - conf: all the defaults used by Kyuubi Server itself or creating session with Spark applications.
 - externals
 - engines: contains all kinds of SQL engines that we support, e.g. Apache Spark, Apache Flink(coming soon).
@@ -12,7 +12,7 @@ As the fact that the user claims does not necessarily mean this is true.
 The authentication process of Kyuubi is used to verify the user identity that a client used to talk to the Kyuubi server.
 Once done, a trusted connection will be set up between the client and server if successful; otherwise, rejected.
-**Note** that, this authentication only authenticate whether a user can connect w/ Kyuubi server or not.
+**Note** that, this authentication only authenticate whether a user can connect with Kyuubi server or not.
 For other secured services that this user wants to interact with, he/she also needs to pass the authentication process of each service, for instance, Hive Metastore, YARN, HDFS.
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, specify `kyuubi.authentication` to one of the authentication types listing below.
@@ -33,7 +33,7 @@ kyuubi\.authentication<br>\.sasl\.qop|<div style='width: 80pt;word-wrap: break-w
 #### Using KERBEROS
-If you are deploying Kyuubi w/ a kerberized Hadoop cluster, it is strongly recommended that `kyuubi.authentication` should be set to `KERBEROS` too.
+If you are deploying Kyuubi with a kerberized Hadoop cluster, it is strongly recommended that `kyuubi.authentication` should be set to `KERBEROS` too.
 Kerberos is a network authentication protocol that provides the tools of authentication and strong cryptography over the network.
 The Kerberos protocol uses strong cryptography so that a client or a server can prove its identity to its server or client across an insecure network connection.
@@ -53,7 +53,7 @@ kyuubi\.kinit\.keytab|<div style='width: 80pt;word-wrap: break-word;white-space:
 For example,
-- Configure w/ Kyuubi service principal
+- Configure with Kyuubi service principal
 ```bash
 kyuubi.authentication=KERBEROS
 kyuubi.kinit.principal=spark/kyuubi.apache.org@KYUUBI.APACHE.ORG
@@ -65,7 +65,7 @@ kyuubi.kinit.keytab=/path/to/kyuuib.keytab
 $ ./bin/kyuubi start
 ```
-- Kinit w/ user principal and connect using beeline
+- Kinit with user principal and connect using beeline
 ```bash
 $ kinit -kt user.keytab user.principal
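The hunk above stops right after the `kinit` step; the beeline connection it refers to would look something like the following (the host and port are placeholders, not text from this commit; the service principal matches the one configured earlier on this page):

```bash
# Authenticate as the end user first, then open a Kerberos-secured JDBC connection
kinit -kt user.keytab user.principal
beeline -u "jdbc:hive2://kyuubi-host:10009/;principal=spark/kyuubi.apache.org@KYUUBI.APACHE.ORG"
```

The `principal` parameter in the JDBC URL names the service principal the Kyuubi server logged in with, while the ticket cache produced by `kinit` identifies the connecting user.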
@@ -7,8 +7,8 @@
 # Kinit Auxiliary Service
-In order to work w/ a kerberos-enabled cluster, Kyuubi provides this kinit auxiliary service.
-It will periodically re-kinit w/ to keep the Ticket Cache fresh.
+In order to work with a kerberos-enabled cluster, Kyuubi provides this kinit auxiliary service.
+It will periodically re-kinit with to keep the Ticket Cache fresh.
 ## Installing and Configuring the Kerberos Clients
@@ -31,7 +31,7 @@ Replace or configure `krb5.conf` to point to the KDC.
 ## Kerberos Ticket
 Kerberos client is aimed to generate a Ticket Cache file.
-Then, Kyuubi can use this Ticket Cache to authenticate w/ those kerberized services,
+Then, Kyuubi can use this Ticket Cache to authenticate with those kerberized services,
 e.g. HDFS, YARN, and Hive Metastore server, etc.
 A Kerberos ticket cache contains a service and a client principal names,
@@ -14,7 +14,7 @@
 ./build/mvn clean package -DskipTests
 ```
-This results in the creation of all sub-modules of Kyuubi project w/o running any unit test.
+This results in the creation of all sub-modules of Kyuubi project without running any unit test.
 If you want to test it manually, you can start Kyuubi directly from the Kyuubi project root by running
@@ -32,7 +32,7 @@ build/mvn clean package -pl :kyuubi-common -DskipTests
 ## Skipping Some modules
-For instance, you can build the Kyuubi modules w/o Kyuubi Codecov and Assembly modules using:
+For instance, you can build the Kyuubi modules without Kyuubi Codecov and Assembly modules using:
 ```bash
 mvn clean install -pl '!:kyuubi-codecov,!:kyuubi-assembly' -DskipTests
@@ -7,7 +7,7 @@
 # Debugging Kyuubi
 You can use the [Java Debug Wire Protocol](https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html#Plugin) to debug Kyuubi
-w/ your favorite IDE tool, e.g. Intellij IDEA.
+with your favorite IDE tool, e.g. Intellij IDEA.
 ## Debugging Server
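The debugging page touched by the last hunk relies on the standard JDWP agent. A minimal sketch of enabling it for the server, assuming a `KYUUBI_JAVA_OPTS` environment variable in `conf/kyuubi-env.sh` and port 5005 (both illustrative, not taken from this commit):

```bash
# conf/kyuubi-env.sh -- attach a JDWP agent; suspend=y makes the JVM wait for the IDE to attach
export KYUUBI_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
```

With this in place, start the server and point the IDE's remote-debug configuration at port 5005.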