[KYUUBI #6239] Rename beeline to kyuubi-beeline

# 🔍 Description

Discussion in mailing thread: https://lists.apache.org/thread/tnmz71o3rypy7qvs3899p3jkkq4xqb4r

I propose renaming `bin/beeline` to `bin/kyuubi-beeline`; for compatibility, we may still want to keep `bin/beeline` as an alias for a while.
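For reference, the compatibility alias boils down to a warn-and-delegate wrapper. Below is a runnable sketch using a throwaway demo directory — the `/tmp` paths and the stub `kyuubi-beeline` are hypothetical; only the warning text and the final `exec` mirror this PR:

```shell
# Set up a throwaway KYUUBI_HOME with a stub kyuubi-beeline (demo only).
KYUUBI_HOME=/tmp/kyuubi-demo
mkdir -p "$KYUUBI_HOME/bin"
printf '#!/bin/sh\necho "kyuubi-beeline invoked: $*"\n' > "$KYUUBI_HOME/bin/kyuubi-beeline"
chmod +x "$KYUUBI_HOME/bin/kyuubi-beeline"

# The deprecated alias: print a warning to stderr, then delegate all args.
cat > "$KYUUBI_HOME/bin/beeline" <<'EOF'
#!/usr/bin/env bash
echo "Warning: beeline is deprecated and will be removed in the future, please use kyuubi-beeline instead." >&2
KYUUBI_HOME="$(cd "$(dirname "$0")"/..; pwd)"
exec "${KYUUBI_HOME}/bin/kyuubi-beeline" "$@"
EOF
chmod +x "$KYUUBI_HOME/bin/beeline"

"$KYUUBI_HOME/bin/beeline" -u 'jdbc:kyuubi://0.0.0.0:10009/'
```

Because the alias `exec`s the real entry point, existing scripts keep working while the warning nudges users toward the new name.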

In a real Hadoop cluster, it’s common to add `$HIVE_HOME/bin`, `$SPARK_HOME/bin`, and `$KYUUBI_HOME/bin` to `$PATH`. Currently, when running `beeline`, which one is invoked depends on the order in which they are declared.

This does not matter for Spark’s `bin/beeline` because it is a vanilla Hive BeeLine, but Kyuubi has made improvements on top of vanilla Hive BeeLine, so its behavior is not exactly the same as Hive’s BeeLine.

A distinct name would solve this problem, and some vendors [1] who ship Kyuubi have already done the same thing.

[1] https://help.aliyun.com/zh/emr/emr-on-ecs/user-guide/connect-to-kyuubi
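To illustrate the ambiguity with stub scripts standing in for real Hive/Kyuubi installations (the `/tmp/demo` paths are hypothetical): whichever directory comes first in `$PATH` wins.

```shell
# Stand-in installations (demo paths only).
mkdir -p /tmp/demo/hive/bin /tmp/demo/kyuubi/bin
printf '#!/bin/sh\necho "Hive BeeLine"\n'   > /tmp/demo/hive/bin/beeline
printf '#!/bin/sh\necho "Kyuubi BeeLine"\n' > /tmp/demo/kyuubi/bin/beeline
chmod +x /tmp/demo/hive/bin/beeline /tmp/demo/kyuubi/bin/beeline

# Hive's bin precedes Kyuubi's, so a bare `beeline` silently runs Hive's copy.
PATH="/tmp/demo/hive/bin:/tmp/demo/kyuubi/bin:$PATH"
command -v beeline   # resolves to /tmp/demo/hive/bin/beeline
beeline              # prints "Hive BeeLine", not Kyuubi's variant
```

Renaming Kyuubi’s entry point to `kyuubi-beeline` removes the collision regardless of `$PATH` order.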

## Types of changes 🔖

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

Manual test.

```shell
$ bin/beeline -u 'jdbc:kyuubi://0.0.0.0:10009/'
Warning: beeline is deprecated and will be removed in the future, please use kyuubi-beeline instead.
Connecting to jdbc:kyuubi://0.0.0.0:10009/
Connected to: Spark SQL (version 3.4.1)
Driver: Kyuubi Project Hive JDBC Client (version 1.10.0-SNAPSHOT)
Beeline version 1.10.0-SNAPSHOT by Apache Kyuubi
0: jdbc:kyuubi://0.0.0.0:10009/>
```

```shell
$ bin/kyuubi-beeline -u 'jdbc:kyuubi://0.0.0.0:10009/'
Connecting to jdbc:kyuubi://0.0.0.0:10009/
Connected to: Spark SQL (version 3.4.1)
Driver: Kyuubi Project Hive JDBC Client (version 1.10.0-SNAPSHOT)
Beeline version 1.10.0-SNAPSHOT by Apache Kyuubi
0: jdbc:kyuubi://0.0.0.0:10009/>
```

---

# Checklist 📝

- [x] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6239 from pan3793/kyuubi-beeline.

Closes #6239

cec8f56e2 [Cheng Pan] docs
b3446baf1 [Cheng Pan] docs
46a115077 [Cheng Pan] Remove `bin/beeline` to `bin/kyuubi-beeline`

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
Cheng Pan 2024-04-03 18:35:38 +08:00
parent a937ae760e
commit 96498844d3
16 changed files with 125 additions and 97 deletions


@@ -16,44 +16,10 @@
# limitations under the License.
#
## Kyuubi BeeLine Entrance
CLASS="org.apache.hive.beeline.KyuubiBeeLine"
echo "Warning: beeline is deprecated and will be removed in the future, please use kyuubi-beeline instead."
if [ -z "${KYUUBI_HOME}" ]; then
KYUUBI_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
. "${KYUUBI_HOME}/bin/load-kyuubi-env.sh" -s
if [[ -z ${JAVA_HOME} ]]; then
echo "Error: JAVA_HOME IS NOT SET! CANNOT PROCEED."
exit 1
fi
RUNNER="${JAVA_HOME}/bin/java"
# Append jline option to enable the Beeline process to run in background.
if [[ ( ! $(ps -o stat= -p $$) =~ "+" ) && ! ( -p /dev/stdin ) ]]; then
export KYUUBI_BEELINE_OPTS="$KYUUBI_BEELINE_OPTS -Djline.terminal=jline.UnsupportedTerminal"
fi
## Find the Kyuubi beeline Jar
if [[ -z "$KYUUBI_BEELINE_JARS" ]]; then
KYUUBI_BEELINE_JARS="$KYUUBI_HOME/beeline-jars"
if [[ ! -d ${KYUUBI_BEELINE_JARS} ]]; then
echo -e "\nCandidate Kyuubi beeline jars $KYUUBI_BEELINE_JARS doesn't exist, using $KYUUBI_HOME/kyuubi-hive-beeline/target/"
KYUUBI_BEELINE_JARS="$KYUUBI_HOME/kyuubi-hive-beeline/target"
fi
fi
if [[ -z ${YARN_CONF_DIR} ]]; then
KYUUBI_BEELINE_CLASSPATH="${KYUUBI_BEELINE_JARS}/*:${HADOOP_CONF_DIR}"
else
KYUUBI_BEELINE_CLASSPATH="${KYUUBI_BEELINE_JARS}/*:${HADOOP_CONF_DIR}:${YARN_CONF_DIR}"
fi
if [[ -f ${KYUUBI_CONF_DIR}/log4j2-repl.xml ]]; then
KYUUBI_CTL_JAVA_OPTS="${KYUUBI_CTL_JAVA_OPTS} -Dlog4j2.configurationFile=log4j2-repl.xml"
fi
exec ${RUNNER} ${KYUUBI_BEELINE_OPTS} -cp ${KYUUBI_BEELINE_CLASSPATH} $CLASS "$@"
exec "${KYUUBI_HOME}/bin/kyuubi-beeline" "$@"

bin/kyuubi-beeline (new executable file, 59 lines)

@@ -0,0 +1,59 @@
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
## Kyuubi BeeLine Entrance
CLASS="org.apache.hive.beeline.KyuubiBeeLine"
if [ -z "${KYUUBI_HOME}" ]; then
KYUUBI_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
. "${KYUUBI_HOME}/bin/load-kyuubi-env.sh" -s
if [[ -z ${JAVA_HOME} ]]; then
echo "Error: JAVA_HOME IS NOT SET! CANNOT PROCEED."
exit 1
fi
RUNNER="${JAVA_HOME}/bin/java"
# Append jline option to enable the Beeline process to run in background.
if [[ ( ! $(ps -o stat= -p $$) =~ "+" ) && ! ( -p /dev/stdin ) ]]; then
export KYUUBI_BEELINE_OPTS="$KYUUBI_BEELINE_OPTS -Djline.terminal=jline.UnsupportedTerminal"
fi
## Find the Kyuubi beeline Jar
if [[ -z "$KYUUBI_BEELINE_JARS" ]]; then
KYUUBI_BEELINE_JARS="$KYUUBI_HOME/beeline-jars"
if [[ ! -d ${KYUUBI_BEELINE_JARS} ]]; then
echo -e "\nCandidate Kyuubi beeline jars $KYUUBI_BEELINE_JARS doesn't exist, using $KYUUBI_HOME/kyuubi-hive-beeline/target/"
KYUUBI_BEELINE_JARS="$KYUUBI_HOME/kyuubi-hive-beeline/target"
fi
fi
if [[ -z ${YARN_CONF_DIR} ]]; then
KYUUBI_BEELINE_CLASSPATH="${KYUUBI_BEELINE_JARS}/*:${HADOOP_CONF_DIR}"
else
KYUUBI_BEELINE_CLASSPATH="${KYUUBI_BEELINE_JARS}/*:${HADOOP_CONF_DIR}:${YARN_CONF_DIR}"
fi
if [[ -f ${KYUUBI_CONF_DIR}/log4j2-repl.xml ]]; then
KYUUBI_CTL_JAVA_OPTS="${KYUUBI_CTL_JAVA_OPTS} -Dlog4j2.configurationFile=log4j2-repl.xml"
fi
exec ${RUNNER} ${KYUUBI_BEELINE_OPTS} -cp ${KYUUBI_BEELINE_CLASSPATH} $CLASS "$@"


@@ -27,17 +27,17 @@ You can add parameters to the URL when establishing a JDBC connection, the param
JDBC URLs have the following format:
```shell
jdbc:hive2://<host>:<port>/<dbName>;<sessionVars>?kyuubi.operation.plan.only.mode=parse/analyze/optimize/optimize_with_stats/physical/execution/none;<kyuubiConfs>#<[spark|hive]Vars>
jdbc:kyuubi://<host>:<port>/<dbName>;<sessionVars>?kyuubi.operation.plan.only.mode=parse/analyze/optimize/optimize_with_stats/physical/execution/none;<kyuubiConfs>#<[spark|hive]Vars>
```
Refer to [hive_jdbc doc](../../jdbc/hive_jdbc.md) for details of others parameters
### Example:
Using beeline tool to connect to the local service, the Shell command is:
Using `kyuubi-beeline` to connect to the local service, the Shell command is:
```shell
beeline -u 'jdbc:hive2://0.0.0.0:10009/default?kyuubi.operation.plan.only.mode=parse' -n {user_name}
kyuubi-beeline -u 'jdbc:kyuubi://0.0.0.0:10009/default?kyuubi.operation.plan.only.mode=parse' -n {user_name}
```
Running the following SQL:
@@ -50,7 +50,7 @@ The results are as follows:
```shell
# SQL:
0: jdbc:hive2://0.0.0.0:10009/default> SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
0: jdbc:kyuubi://0.0.0.0:10009/default> SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
#Result:
+----------------------------------------------------+
@@ -63,7 +63,7 @@ The results are as follows:
|
+----------------------------------------------------+
1 row selected (3.008 seconds)
0: jdbc:hive2://0.0.0.0:10009/default>
0: jdbc:kyuubi://0.0.0.0:10009/default>
```
## Session-level parameter setting
@@ -71,7 +71,7 @@ The results are as follows:
You can also set the kyuubi.operation.plan.only.mode parameter by executing the set command after the connection has been established
```shell
beeline -u 'jdbc:hive2://0.0.0.0:10009/default' -n {user_name}
kyuubi-beeline -u 'jdbc:kyuubi://0.0.0.0:10009/default' -n {user_name}
```
Running the following SQL:
@@ -85,7 +85,7 @@ The results are as follows:
```shell
#set command:
0: jdbc:hive2://0.0.0.0:10009/default> set kyuubi.operation.plan.only.mode=parse;
0: jdbc:kyuubi://0.0.0.0:10009/default> set kyuubi.operation.plan.only.mode=parse;
#set command result:
+----------------------------------+--------+
@@ -96,7 +96,7 @@ The results are as follows:
1 row selected (0.568 seconds)
#execute SQL:
0: jdbc:hive2://0.0.0.0:10009/default> SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
0: jdbc:kyuubi://0.0.0.0:10009/default> SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
# SQL result:
+----------------------------------------------------+
@@ -109,6 +109,6 @@ The results are as follows:
|
+----------------------------------------------------+
1 row selected (0.404 seconds)
0: jdbc:hive2://0.0.0.0:10009/default>
0: jdbc:kyuubi://0.0.0.0:10009/default>
```


@@ -229,13 +229,13 @@ The last step is to connect to Kyuubi with the right JDBC URL.
The JDBC URL should be in format:
```
jdbc:hive2://<kyuubi_server_address>:<kyuubi_server_port>/<db>;principal=<kyuubi_server_principal>
jdbc:kyuubi://<kyuubi_server_address>:<kyuubi_server_port>/<db>;principal=<kyuubi_server_principal>
```
or
```
jdbc:hive2://<kyuubi_server_address>:<kyuubi_server_port>/<db>;kyuubiServerPrincipal=<kyuubi_server_principal>
jdbc:kyuubi://<kyuubi_server_address>:<kyuubi_server_port>/<db>;kyuubiServerPrincipal=<kyuubi_server_principal>
```
**Note**:


@@ -190,7 +190,7 @@ Check kyuubi log, in order to check kyuubi start status and find the jdbc connec
.. code-block:: log
2021-11-26 17:49:50.235 INFO service.ThriftFrontendService: Starting and exposing JDBC connection at: jdbc:hive2://HOST:10009/
2021-11-26 17:49:50.235 INFO service.ThriftFrontendService: Starting and exposing JDBC connection at: jdbc:kyuubi://HOST:10009/
2021-11-26 17:49:50.265 INFO client.ServiceDiscovery: Created a /kyuubi/serviceUri=host:10009;version=1.3.1-incubating;sequence=0000000037 on ZooKeeper for KyuubiServer uri: host:10009
2021-11-26 17:49:50.267 INFO server.KyuubiServer: Service[KyuubiServer] is started.
@@ -199,11 +199,11 @@ You can get the jdbc connection url by the log above.
Test The Connectivity Of Kyuubi And Delta Lake
**********************************************
Use ``$KYUUBI_HOME/bin/beeline`` tool,
Use ``$KYUUBI_HOME/bin/kyuubi-beeline`` tool,
.. code-block:: shell
./bin//beeline -u 'jdbc:hive2://<YOUR_HOST>:10009/'
./bin/kyuubi-beeline -u 'jdbc:kyuubi://<YOUR_HOST>:10009/'
At the same time, you can also check whether the engine is running on the spark UI:


@@ -50,7 +50,7 @@ Now, you can start Kyuubi server with this kudu embedded Spark distribution.
#### Start Beeline Or Other Client You Prefer
```shell
bin/beeline -u 'jdbc:hive2://<host>:<port>/;principal=<if kerberized>;#spark.yarn.queue=kyuubi_test'
bin/kyuubi-beeline -u 'jdbc:kyuubi://<host>:<port>/;principal=<if kerberized>;#spark.yarn.queue=kyuubi_test'
```
#### Register Kudu table as Spark Temporary view
@@ -64,7 +64,7 @@ options (
```
```sql
0: jdbc:hive2://spark5.jd.163.org:10009/> show tables;
0: jdbc:kyuubi://spark5.jd.163.org:10009/> show tables;
19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Running query 'show tables' with 1104328b-515c-4f8b-8a68-1c0b202bc9ed
19/07/09 15:28:03 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
19/07/09 15:28:03 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 1 jobs before optimization
@@ -82,7 +82,7 @@ options (
#### Query Kudu Table
```sql
0: jdbc:hive2://spark5.jd.163.org:10009/> select * from kudutest;
0: jdbc:kyuubi://spark5.jd.163.org:10009/> select * from kudutest;
19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Running query 'select * from kudutest' with ac3e8553-0d79-4c57-add1-7d3ffe34ba16
19/07/09 15:25:17 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
19/07/09 15:25:17 INFO ExecuteStatementInClientMode: Executing query in incremental mode, running 3 jobs before optimization
@@ -103,7 +103,7 @@ options (
#### Join Kudu table with Hive table
```sql
0: jdbc:hive2://spark5.jd.163.org:10009/> select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1;
0: jdbc:kyuubi://spark5.jd.163.org:10009/> select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1;
19/07/09 15:31:01 INFO ExecuteStatementInClientMode: Running query 'select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1' with 6982fa5c-29fa-49be-a5bf-54c935bbad18
19/07/09 15:31:01 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
<omitted lines.... >
@@ -123,7 +123,7 @@ options (
You should notice that only `INSERT INTO` is supported by Kudu, `OVERWRITE` data is not supported
```sql
0: jdbc:hive2://spark5.jd.163.org:10009/> insert overwrite table kudutest select * from hive_tbl;
0: jdbc:kyuubi://spark5.jd.163.org:10009/> insert overwrite table kudutest select * from hive_tbl;
19/07/09 15:35:29 INFO ExecuteStatementInClientMode: Running query 'insert overwrite table kudutest select * from hive_tbl' with 1afdb791-1aa7-4ceb-8ba8-ff53c17615d1
19/07/09 15:35:29 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
19/07/09 15:35:30 ERROR ExecuteStatementInClientMode:
@@ -163,7 +163,7 @@ java.lang.UnsupportedOperationException: overwrite is not yet supported
```
```sql
0: jdbc:hive2://spark5.jd.163.org:10009/> insert into table kudutest select * from hive_tbl;
0: jdbc:kyuubi://spark5.jd.163.org:10009/> insert into table kudutest select * from hive_tbl;
19/07/09 15:36:26 INFO ExecuteStatementInClientMode: Running query 'insert into table kudutest select * from hive_tbl' with f7460400-0564-4f98-93b6-ad76e579e7af
19/07/09 15:36:26 INFO KyuubiSparkUtil$: Application application_1560304876299_3805060 has been activated
<omitted lines ...>


@@ -61,7 +61,7 @@ These properties are defined by Spark and Kyuubi will pass them to `spark-submit`
**Note:** None of these would take effect if the application for a particular user already exists.
- Specify it in the JDBC connection URL, e.g. `jdbc:hive2://localhost:10009/;#spark.master=yarn;spark.yarn.queue=thequeue`
- Specify it in the JDBC connection URL, e.g. `jdbc:kyuubi://localhost:10009/;#spark.master=yarn;spark.yarn.queue=thequeue`
- Specify it in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`
- Specify it in `$SPARK_HOME/conf/spark-defaults.conf`


@@ -71,7 +71,7 @@ With [Kyuubi Hive JDBC Driver](https://mvnrepository.com/artifact/org.apache.kyu
For example,
```shell
bin/beeline -u 'jdbc:hive2://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kentyao
bin/kyuubi-beeline -u 'jdbc:kyuubi://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kentyao
```
## How to Hot Upgrade Kyuubi Server


@@ -41,20 +41,19 @@ By default, Kyuubi launches Spark SQL engines pointing to a dummy embedded [Apac
and this metadata can only be seen by one user at a time, e.g.
```shell script
bin/beeline -u 'jdbc:hive2://localhost:10009/' -n kentyao
Connecting to jdbc:hive2://localhost:10009/
Connected to: Spark SQL (version 1.0.0-SNAPSHOT)
Driver: Hive JDBC (version 2.3.7)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.7 by Apache Hive
0: jdbc:hive2://localhost:10009/> show databases;
bin/kyuubi-beeline -u 'jdbc:kyuubi://localhost:10009/' -n kentyao
Connecting to jdbc:kyuubi://localhost:10009/
Connected to: Spark SQL (version 3.4.2)
Driver: Kyuubi Project Hive JDBC Client (version 1.9.0)
Beeline version 1.9.0 by Apache Kyuubi
0: jdbc:kyuubi://localhost:10009/> show databases;
2020-11-16 23:50:50.388 INFO operation.ExecuteStatement:
Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
application ID: local-1605541809797
application web UI: http://192.168.1.14:60165
master: local[*]
deploy mode: client
version: 3.0.1
version: 3.4.2
Start time: 2020-11-16T15:50:09.123Z
User: kentyao
2020-11-16 23:50:50.404 INFO metastore.HiveMetaStore: 2: get_databases: *
@@ -66,14 +65,14 @@ Beeline version 2.3.7 by Apache Hive
| default |
+------------+
1 row selected (0.122 seconds)
0: jdbc:hive2://localhost:10009/> show tables;
0: jdbc:kyuubi://localhost:10009/> show tables;
2020-11-16 23:50:52.957 INFO operation.ExecuteStatement:
Spark application name: kyuubi_kentyao_spark_2020-11-16T15:50:08.968Z
application ID: local-1605541809797
application web UI: http://192.168.1.14:60165
master: local[*]
deploy mode: client
version: 3.0.1
version: 3.4.2
Start time: 2020-11-16T15:50:09.123Z
User: kentyao
2020-11-16 23:50:52.968 INFO metastore.HiveMetaStore: 2: get_database: default
@@ -139,7 +138,7 @@ This version of configuration has lower priority than those in `$KYUUBI_HOME/con
We can pass _**Hive primitives**_ or **_Spark derivatives_** directly in the JDBC connection URL, e.g.
```
jdbc:hive2://localhost:10009/;#hive.metastore.uris=thrift://localhost:9083
jdbc:kyuubi://localhost:10009/;#hive.metastore.uris=thrift://localhost:9083
```
This will override the defaults in `$SPARK_HOME/conf/hive-site.xml` and `$KYUUBI_HOME/conf/kyuubi-defaults.conf` for each _**user account**_.


@@ -100,13 +100,13 @@ You should connect like:
```shell
kubectl exec -it kyuubi-example -- /bin/bash
${KYUUBI_HOME}/bin/beeline -u 'jdbc:kyuubi://localhost:10009'
${KYUUBI_HOME}/bin/kyuubi-beeline -u 'jdbc:kyuubi://localhost:10009'
```
Or you can submit tasks directly through local beeline:
Or you can submit tasks directly through kyuubi-beeline:
```shell
${KYUUBI_HOME}/bin/beeline -u 'jdbc:kyuubi://${hostname}:${port}'
${KYUUBI_HOME}/bin/kyuubi-beeline -u 'jdbc:kyuubi://${hostname}:${port}'
```
As using service nodePort, port means nodePort and hostname means any hostname of kubernetes node.


@@ -17,6 +17,10 @@
# Kyuubi Migration Guide
## Upgrading from Kyuubi 1.9 to 1.10
* Since Kyuubi 1.10, `beeline` is deprecated and will be removed in the future, please use `kyuubi-beeline` instead.
## Upgrading from Kyuubi 1.8 to 1.9
* Since Kyuubi 1.9.0, `kyuubi.session.conf.advisor` can be set as a sequence, Kyuubi supported chaining SessionConfAdvisors.


@@ -47,12 +47,12 @@ As above explains, the incremental collection mode is not suitable for common qu
collection mode for specific queries by using
```
beeline -u 'jdbc:hive2://kyuubi:10009/?spark.driver.maxResultSize=8g;spark.driver.memory=12g#kyuubi.engine.share.level=CONNECTION;kyuubi.operation.incremental.collect=true' \
kyuubi-beeline -u 'jdbc:kyuubi://kyuubi:10009/?spark.driver.maxResultSize=8g;spark.driver.memory=12g#kyuubi.engine.share.level=CONNECTION;kyuubi.operation.incremental.collect=true' \
--incremental=true \
-f big_result_query.sql
```
`--incremental=true` is required for beeline client, otherwise, the entire result sets is fetched and buffered before
`--incremental=true` is required for the kyuubi-beeline client; otherwise, the entire result set is fetched and buffered before
being displayed, which may cause client side OOM.
## Change incremental collection mode in session
@@ -60,10 +60,10 @@ being displayed, which may cause client side OOM.
The configuration `kyuubi.operation.incremental.collect` can also be changed using `SET` in session.
```
~ beeline -u 'jdbc:hive2://localhost:10009'
Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
~ kyuubi-beeline -u 'jdbc:kyuubi://localhost:10009'
Connected to: Apache Kyuubi (version 1.9.0)
0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=true;
0: jdbc:kyuubi://localhost:10009/> set kyuubi.operation.incremental.collect=true;
+---------------------------------------+--------+
| key | value |
+---------------------------------------+--------+
@@ -71,7 +71,7 @@ Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
+---------------------------------------+--------+
1 row selected (0.039 seconds)
0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
0: jdbc:kyuubi://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
+-----+
| id |
+-----+
@@ -88,7 +88,7 @@ Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
+-----+
10 rows selected (1.929 seconds)
0: jdbc:hive2://localhost:10009/> set kyuubi.operation.incremental.collect=false;
0: jdbc:kyuubi://localhost:10009/> set kyuubi.operation.incremental.collect=false;
+---------------------------------------+--------+
| key | value |
+---------------------------------------+--------+
@@ -96,7 +96,7 @@ Connected to: Apache Kyuubi (Incubating) (version 1.5.0-SNAPSHOT)
+---------------------------------------+--------+
1 row selected (0.027 seconds)
0: jdbc:hive2://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
0: jdbc:kyuubi://localhost:10009/> select /*+ REPARTITION(5) */ * from range(1, 10);
+-----+
| id |
+-----+


@@ -68,6 +68,6 @@ If a user `uly` creates a connection with:
.. code-block:: java
jdbc:hive2://localhost:10009/;hive.server2.proxy.user=uly;#spark.driver.memory=2G
jdbc:kyuubi://localhost:10009/;hive.server2.proxy.user=uly;#spark.driver.memory=2G
The final Spark application will allocate ``1G`` rather than ``2G`` for the driver jvm.


@@ -176,16 +176,16 @@ Operation log will show how SQL queries are executed, such as query planning, ex
Operation logs can reveal directly to end-users how their queries are being executed on the server/engine-side, including some process-oriented information, and why their queries are slow or in error.
For example, when you, as an end-user, use `beeline` to connect a Kyuubi server and execute query like below.
For example, when you, as an end-user, use `kyuubi-beeline` to connect a Kyuubi server and execute query like below.
```shell
bin/beeline -u 'jdbc:hive2://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kent -e 'select * from src;'
kyuubi-beeline -u 'jdbc:kyuubi://10.242.189.214:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi' -n kent -e 'select * from src;'
```
You will both get the final results and the corresponding operation logs telling you the journey of the query.
```log
0: jdbc:hive2://10.242.189.214:2181/> select * from src;
0: jdbc:kyuubi://10.242.189.214:2181/> select * from src;
2021-10-27 17:00:19.399 INFO operation.ExecuteStatement: Processing kent's query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597]: INITIALIZED_STATE -> PENDING_STATE, statement: select * from src
2021-10-27 17:00:19.401 INFO operation.ExecuteStatement: Processing kent's query[fb5f57d2-2b50-4a46-961b-3a5c6a2d2597]: PENDING_STATE -> RUNNING_STATE, statement: select * from src
2021-10-27 17:00:19.400 INFO operation.ExecuteStatement: Processing kent's query[26e169a2-6c06-450a-b758-e577ac673d70]: INITIALIZED_STATE -> PENDING_STATE, statement: select * from src


@@ -116,7 +116,7 @@ With these features, Kyuubi provides a two-level elastic resource management arc
For example,
```shell
./beeline -u "jdbc:hive2://kyuubi.org:10009/;\
bin/kyuubi-beeline -u "jdbc:kyuubi://kyuubi.org:10009/;\
hive.server2.proxy.user=tom#\
spark.yarn.queue=thequeue;\
spark.dynamicAllocation.enabled=true;\


@@ -42,7 +42,7 @@ pre-installed and the ``JAVA_HOME`` is correctly set to each component.
**Java** JRE 8/11/17 Officially released against JDK8
**Kyuubi** Gateway \ |release| \ - Kyuubi Server
Engine lib - Kyuubi Engine
Beeline - Kyuubi Hive Beeline
Beeline - Kyuubi Beeline
**Spark** Engine 3.1 to 3.5 A Spark distribution
**Flink** Engine 1.16 to 1.19 A Flink distribution
**Trino** Engine N/A A Trino cluster allows to access via trino-client v411
@@ -187,7 +187,7 @@ And you are able to get the JDBC connection URL from the log file -
For example,
Starting and exposing JDBC connection at: jdbc:hive2://localhost:10009/
Starting and exposing JDBC connection at: jdbc:kyuubi://localhost:10009/
If something goes wrong, you shall be able to find some clues in the log file too.
@@ -206,7 +206,7 @@ If something goes wrong, you shall be able to find some clues in the log file to
Operate Clients
---------------
Kyuubi delivers a beeline client, enabling a similar experience to Apache Hive use cases.
Kyuubi delivers a kyuubi-beeline client, enabling a similar experience to Apache Hive use cases.
Open Connections
~~~~~~~~~~~~~~~~
@@ -216,21 +216,21 @@ for the following JDBC URL. The case below open a session for user named `apache
.. code-block::
$ bin/beeline -u 'jdbc:hive2://localhost:10009/' -n apache
$ bin/kyuubi-beeline -u 'jdbc:kyuubi://localhost:10009/' -n apache
.. note::
:class: toggle
Use `--help` to display the usage guide for the beeline tool.
Use `--help` to display the usage guide for the kyuubi-beeline tool.
.. code-block::
$ bin/beeline --help
$ bin/kyuubi-beeline --help
Execute Statements
~~~~~~~~~~~~~~~~~~
After successfully connected with the server, you can run sql queries in the beeline
After successfully connecting to the server, you can run SQL queries in the kyuubi-beeline
console. For instance,
.. code-block::
@@ -238,7 +238,7 @@ console. For instance,
> SHOW DATABASES;
You will see a wall of operation logs, and a result table in the beeline console.
You will see a wall of operation logs, and a result table in the kyuubi-beeline console.
.. code-block::
@@ -264,19 +264,19 @@ started.
.. code-block::
$ bin/beeline -u 'jdbc:hive2://localhost:10009/' -n kentyao
$ bin/kyuubi-beeline -u 'jdbc:kyuubi://localhost:10009/' -n kentyao
This may change depending on the `engine share level`_ you set.
Close Connections
~~~~~~~~~~~~~~~~~
Close the session between beeline and Kyuubi server by executing `!quit`, for example,
Close the session between kyuubi-beeline and Kyuubi server by executing `!quit`, for example,
.. code-block::
> !quit
Closing: 0: jdbc:hive2://localhost:10009/
Closing: 0: jdbc:kyuubi://localhost:10009/
Stop Engines
~~~~~~~~~~~~
@@ -289,7 +289,7 @@ mean terminations of engines. It depends on both the `engine share level`_ and
Stop Kyuubi
-----------
Stop Kyuubi by running the following in the `$KYUUBI_HOME` directory:
Stop Kyuubi, which runs in the background, by performing the following in the `$KYUUBI_HOME` directory:
.. code-block::