From e49ff8830567ae77d4d7c2360b244bf068eef63a Mon Sep 17 00:00:00 2001
From: Cheng Pan <379377944@qq.com>
Date: Wed, 3 Mar 2021 22:04:32 +0800
Subject: [PATCH] [KYUUBI #390] Replace abbr in doc

![pan3793](https://badgen.net/badge/Hello/pan3793/green) [![Closes #390](https://badgen.net/badge/Preview/Closes%20%23390/blue)](https://github.com/yaooqinn/kyuubi/pull/390) ![20](https://badgen.net/badge/%2B/20/red) ![20](https://badgen.net/badge/-/20/green) ![1](https://badgen.net/badge/commits/1/yellow) ![Target Issue](https://badgen.net/badge/Missing/Target%20Issue/ff0000) ![Test Plan](https://badgen.net/badge/Missing/Test%20Plan/ff0000) [❨?❩](https://pullrequestbadge.com/?utm_medium=github&utm_source=yaooqinn&utm_campaign=badge_info)

### _Why are the changes needed?_

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.readthedocs.io/en/latest/tools/testing.html#running-tests) locally before make a pull request

Closes #390 from pan3793/doc.

4b7cdf2 [Cheng Pan] replace abbr in doc

Authored-by: Cheng Pan <379377944@qq.com>
Signed-off-by: Kent Yao
---
 README.md                           | 2 +-
 docs/deployment/hive_metastore.md   | 4 ++--
 docs/deployment/settings.md         | 2 +-
 docs/deployment/trouble_shooting.md | 2 +-
 docs/overview/kyuubi_vs_hive.md     | 2 +-
 docs/quick_start/quick_start.md     | 8 ++++----
 docs/security/authentication.md     | 8 ++++----
 docs/security/kinit.md              | 6 +++---
 docs/tools/building.md              | 4 ++--
 docs/tools/debugging.md             | 2 +-
 10 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/README.md b/README.md
index 87e79e57d..7d0058a25 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@ Ready? [Getting Started](https://kyuubi.readthedocs.io/en/latest/quick_start/qui
 
 All bits of help are welcome.
 You can make various types of contributions to Kyuubi, including the following but not limited to,
 
-- Help new users in chat channel or share your success stories w/ us - [![Gitter](https://badges.gitter.im/kyuubi-on-spark/Lobby.svg)](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+- Help new users in chat channel or share your success stories with us - [![Gitter](https://badges.gitter.im/kyuubi-on-spark/Lobby.svg)](https://gitter.im/kyuubi-on-spark/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 - Improve Documentation - [![Documentation Status](https://readthedocs.org/projects/kyuubi/badge/?version=latest)](https://kyuubi.readthedocs.io/en/latest/?badge=latest)
 - Test releases - [![GitHub release](https://img.shields.io/github/release/yaooqinn/kyuubi.svg)](https://github.com/yaooqinn/kyuubi/releases)
 - Improve test coverage - [![codecov](https://codecov.io/gh/yaooqinn/kyuubi/branch/master/graph/badge.svg)](https://codecov.io/gh/yaooqinn/kyuubi)
diff --git a/docs/deployment/hive_metastore.md b/docs/deployment/hive_metastore.md
index 912914cb0..a52fb122f 100644
--- a/docs/deployment/hive_metastore.md
+++ b/docs/deployment/hive_metastore.md
@@ -4,7 +4,7 @@
 
-# Integration w/ Hive Metastore
+# Integration with Hive Metastore
 
 In this section, you will learn how to configure Kyuubi to interact with Hive Metastore.
 
@@ -110,7 +110,7 @@ hive.metastore.uris | thrift://<host>:<port>,thrift://<host1>:
 ### Via kyuubi-defaults.conf
 
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, all _**Hive primitive configurations**_, e.g.
`hive.metastore.uris`,
-and the **_Spark derivatives_**, which are prefixed w/ `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
+and the **_Spark derivatives_**, which are prefixed with `spark.hive.` or `spark.hadoop.`, e.g `spark.hive.metastore.uris` or `spark.hadoop.hive.metastore.uris`,
 will be loaded as Hive primitives by the Hive client inside the Spark application.
 
 Kyuubi will take these configurations as system wide defaults for all applications it launches.
diff --git a/docs/deployment/settings.md b/docs/deployment/settings.md
index a86b12931..f144b09b6 100644
--- a/docs/deployment/settings.md
+++ b/docs/deployment/settings.md
@@ -387,5 +387,5 @@ ___bob___.spark.master=spark://master:7077
 ___bob___.spark.executor.memory=8g
 ```
 
-In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster w/ 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
+In the above case, if there are related configurations from [JDBC Connection URL](#via-jdbc-connection-url), `kent` will run his SQL engine application on YARN and prefer the Spark AQE to be off, while `bob` will activate his SQL engine application on a Spark standalone cluster with 8g heap memory for each executor and obey the Spark AQE behavior of Kyuubi system default. On the other hand, for those users who do not have custom configurations will use system defaults.
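As an aside on the settings.md hunk above: keys wrapped in `___{username}___.` apply only to that user's engine, while bare keys stay system-wide. A minimal sketch of appending such user-scoped defaults is below; the `spark.master=yarn` system default and the directory handling are illustrative assumptions, not part of this patch.

```shell
# Append per-user overrides to kyuubi-defaults.conf. Keys wrapped in
# ___bob___. apply only to bob's engine; bare keys remain system defaults.
KYUUBI_CONF_DIR="${KYUUBI_HOME:-.}/conf"
mkdir -p "$KYUUBI_CONF_DIR"
cat >> "$KYUUBI_CONF_DIR/kyuubi-defaults.conf" <<'EOF'
spark.master=yarn
___bob___.spark.master=spark://master:7077
___bob___.spark.executor.memory=8g
EOF
# List which users carry overrides (prints each user marker once)
grep -o '^___[a-z]*___' "$KYUUBI_CONF_DIR/kyuubi-defaults.conf" | sort -u
```

With the fragment above, `bob` gets the standalone master and 8g executors while every other user falls back to `spark.master=yarn`.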
diff --git a/docs/deployment/trouble_shooting.md b/docs/deployment/trouble_shooting.md
index 7f1517b94..6916e8ea0 100644
--- a/docs/deployment/trouble_shooting.md
+++ b/docs/deployment/trouble_shooting.md
@@ -45,7 +45,7 @@ Hive 2.3.7
 Build flags:
 ```
 
-To fix this problem you should export `JAVA_HOME` w/ a compatible one in `conf/kyuubi-env.sh`
+To fix this problem you should export `JAVA_HOME` with a compatible one in `conf/kyuubi-env.sh`
 
 ```bash
 echo "export JAVA_HOME=/path/to/jdk1.8.0_251" >> conf/kyuubi-env.sh
diff --git a/docs/overview/kyuubi_vs_hive.md b/docs/overview/kyuubi_vs_hive.md
index ad7700e23..320e24767 100644
--- a/docs/overview/kyuubi_vs_hive.md
+++ b/docs/overview/kyuubi_vs_hive.md
@@ -26,7 +26,7 @@ have multiple reducer stages.
 ** Optimizer ** | Spark SQL Catalyst | Hive Optimizer
 ** Engine ** | up to Spark 3.x | MapReduce/[up to Spark 2.3](https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-VersionCompatibility)/Tez
 ** Performance ** | High | Low
-** Compatibility w/ Spark ** | Good | Bad(need to rebuild on a specific version)
+** Compatibility with Spark ** | Good | Bad(need to rebuild on a specific version)
 ** Data Types ** | [Spark Data Types](http://spark.apache.org/docs/latest/sql-ref-datatypes.html) | [Hive Data Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types)
diff --git a/docs/quick_start/quick_start.md b/docs/quick_start/quick_start.md
index ec13ed246..3d9bcd421 100644
--- a/docs/quick_start/quick_start.md
+++ b/docs/quick_start/quick_start.md
@@ -20,13 +20,13 @@ These are essential components required for Kyuubi to startup. For quick start d
 
 Components| Role | Optional | Version | Remarks
 --- | --- | --- | --- | ---
-Java | Java<br/>Runtime<br/>Environment | Required | 1.8 | Kyuubi is pre-built w/ Java 1.8
+Java | Java<br/>Runtime<br/>Environment | Required | 1.8 | Kyuubi is pre-built with Java 1.8
 Spark | Distribute<br/>SQL<br/>Engine | Optional | 3.0.x | By default Kyuubi is pre-built w/<br/>a Apache Spark release inside at<br/>`$KYUUBI_HOME/externals`
-HDFS | Distributed<br/>File<br/>System | Optional | referenced<br/>by<br/>Spark | Hadoop Distributed File System is a<br/>part of Hadoop framework, used to<br/>store and process the datasets.<br/>You can interact w/ any<br/>Spark-compatible versions of HDFS.
+HDFS | Distributed<br/>File<br/>System | Optional | referenced<br/>by<br/>Spark | Hadoop Distributed File System is a<br/>part of Hadoop framework, used to<br/>store and process the datasets.<br/>You can interact with any<br/>Spark-compatible versions of HDFS.
 Hive | Metastore | Optional | referenced<br/>by<br/>Spark | Hive Metastore for Spark SQL to connect
 Zookeeper | Service<br/>Discovery | Optional | Any<br/>zookeeper<br/>ensemble<br/>compatible<br/>with<br/>curator(2.7.1) | By default, Kyuubi provides a<br/>embeded Zookeeper server inside for<br/>non-production use.
 
-Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them w/ regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources w/ the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
+Additionally, if you want to work with other Spark compatible systems or plugins, you only need to take care of them as using them with regular Spark applications. For example, you can run Spark SQL engines created by the Kyuubi on any kind of cluster manager, including YARN, Kubernetes, Mesos, e.t.c... Or, you can manipulate data from different data sources with the Spark Datasource API, e.g. Delta Lake, Apache Hudi, Apache Iceberg, Apache Kudu and e.t.c...
 
 ## Installation
 
@@ -61,7 +61,7 @@ From top to bottom are:
 
 - LICENSE: the [APACHE LICENSE, VERSION 2.0](https://www.apache.org/licenses/LICENSE-2.0) we claim to obey.
 - RELEASE: the build information of this package
-- bin: the entry of the Kyuubi server w/ `kyuubi` as the startup script.
+- bin: the entry of the Kyuubi server with `kyuubi` as the startup script.
 - conf: all the defaults used by Kyuubi Server itself or creating session with Spark applications.
 - externals
   - engines: contains all kinds of SQL engines that we support, e.g. Apache Spark, Apache Flink(coming soon).
diff --git a/docs/security/authentication.md b/docs/security/authentication.md
index 8301b605b..af1c0f045 100644
--- a/docs/security/authentication.md
+++ b/docs/security/authentication.md
@@ -12,7 +12,7 @@ As the fact that the user claims does not necessarily mean this is true.
 The authentication process of Kyuubi is used to verify the user identity that a client used to talk to the Kyuubi server.
 Once done, a trusted connection will be set up between the client and server if successful; otherwise, rejected.
 
-**Note** that, this authentication only authenticate whether a user can connect w/ Kyuubi server or not.
+**Note** that, this authentication only authenticate whether a user can connect with Kyuubi server or not.
 For other secured services that this user wants to interact with, he/she also needs to pass the authentication process of each service, for instance, Hive Metastore, YARN, HDFS.
 
 In `$KYUUBI_HOME/conf/kyuubi-defaults.conf`, specify `kyuubi.authentication` to one of the authentication types listing below.
 
@@ -33,7 +33,7 @@ kyuubi\.authentication<br/>\.sasl\.qop|
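As a usage note on the trouble_shooting.md hunk above, the `JAVA_HOME` fix it documents can be sketched end to end; the JDK path is the placeholder used in the docs, so a real path must be substituted for an actual deployment.

```shell
# Persist a compatible JDK for Kyuubi by appending an export to
# conf/kyuubi-env.sh, which the kyuubi startup script reads on launch.
mkdir -p conf
echo 'export JAVA_HOME=/path/to/jdk1.8.0_251' >> conf/kyuubi-env.sh
# Confirm the export landed in the env file
grep '^export JAVA_HOME=' conf/kyuubi-env.sh
```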