dupeng 37bf244236
[KYUUBI #6251] Improve kyuubi-beeline help message
# 🔍 Description
## Issue References 🔗

This pull request fixes #6251

## Describe Your Solution 🔧
As `kyuubi-beeline` is derived from Hive `beeline`, I have not made extensive modifications to its help message, only adjusted the formatting to improve its appearance. The specific changes are:
1. Replace `jdbc:hive2://` with `jdbc:kyuubi://`.
2. Replace `beeline` with `kyuubi-beeline`.
3. Capitalize the first letter of each description.
4. If the description of an option spans multiple lines, add an extra `\n` at the end to separate it from the following entry.
5. Add the help information for `--python-mode` just before the `--help` entry.
6. Add some examples.
7. Improve the indentation.
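
For illustration only (this is not the actual implementation), the indentation rule in change 7 amounts to padding each option name to a fixed-width column, so every description starts at the same position. A minimal sketch with `printf`, assuming the 32-character option column used in the output below:

```shell
# Illustrative sketch: left-justify the option name in a 32-character field
# (after a two-space indent) so all descriptions align, as in the help message.
printf '  %-32s%s\n' '-u <database url>' 'The JDBC URL to connect to.'
printf '  %-32s%s\n' '-n <username>' 'The username to connect as.'
```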

Here is the whole help message:
```
$ ./bin/kyuubi-beeline --help
Usage: kyuubi-beeline <options>.

Options:
  -u <database url>               The JDBC URL to connect to.
  -c <named url>                  The named JDBC URL to connect to,
                                  which should be present in beeline-site.xml
                                  as the value of beeline.kyuubi.jdbc.url.<namedUrl>.

  -r                              Reconnect to last saved connect url (in conjunction with !save).
  -n <username>                   The username to connect as.
  -p <password>                   The password to connect as.
  -d <driver class>               The driver class to use.
  -i <init file>                  Script file for initialization.
  -e <query>                      Query that should be executed.
  -f <exec file>                  Script file that should be executed.
  -w, --password-file <file>      The password file to read password from.
  --hiveconf property=value       Use value for given property.
  --hivevar name=value            Hive variable name and value.
                                  This is Hive specific settings in which variables
                                  can be set at session level and referenced in Hive
                                  commands or queries.

  --property-file=<property file> The file to read connection properties (url, driver, user, password) from.
  --color=[true|false]            Control whether color is used for display.
  --showHeader=[true|false]       Show column names in query results.
  --escapeCRLF=[true|false]       Show carriage return and line feeds in query results as escaped \r and \n.
  --headerInterval=ROWS;          The interval at which headers are displayed.
  --fastConnect=[true|false]      Skip building table/column list for tab-completion.
  --autoCommit=[true|false]       Enable/disable automatic transaction commit.
  --verbose=[true|false]          Show verbose error messages and debug info.
  --showWarnings=[true|false]     Display connection warnings.
  --showDbInPrompt=[true|false]   Display the current database name in the prompt.
  --showNestedErrs=[true|false]   Display nested errors.
  --numberFormat=[pattern]        Format numbers using DecimalFormat pattern.
  --force=[true|false]            Continue running script even after errors.
  --maxWidth=MAXWIDTH             The maximum width of the terminal.
  --maxColumnWidth=MAXCOLWIDTH    The maximum width to use when displaying columns.
  --silent=[true|false]           Be more silent.
  --autosave=[true|false]         Automatically save preferences.
  --outputformat=<format mode>    Format mode for result display.
                                  The available options are [table|vertical|csv2|tsv2|dsv|csv|tsv|json|jsonfile].
                                  Note that csv and tsv are deprecated; use csv2 and tsv2 instead.

  --incremental=[true|false]      Defaults to false. When set to false, the entire result set
                                  is fetched and buffered before being displayed, yielding optimal
                                  display column sizing. When set to true, result rows are displayed
                                  immediately as they are fetched, yielding lower latency and
                                  memory usage at the price of extra display column padding.
                                  Setting --incremental=true is recommended if you encounter an OutOfMemory
                                  on the client side (due to the fetched result set size being large).
                                  Only applicable if --outputformat=table.

  --incrementalBufferRows=NUMROWS The number of rows to buffer when printing rows on stdout,
                                  defaults to 1000; only applicable if --incremental=true
                                  and --outputformat=table.

  --truncateTable=[true|false]    Truncate table column when it exceeds length.
  --delimiterForDSV=DELIMITER     Specify the delimiter for delimiter-separated values output format (default: |).
  --isolation=LEVEL               Set the transaction isolation level.
  --nullemptystring=[true|false]  Set to true to get historic behavior of printing null as empty string.
  --maxHistoryRows=MAXHISTORYROWS The maximum number of rows to store beeline history.
  --delimiter=DELIMITER           Set the query delimiter; multi-char delimiters are allowed, but quotation
                                  marks, slashes, and -- are not allowed (default: ;).

  --convertBinaryArrayToString=[true|false]
                                  Display binary column data as string or as byte array.

  --python-mode                   Execute python code/script.
  -h, --help                      Display this message.

Examples:
  1. Connect using simple authentication to Kyuubi Server on localhost:10009.
  $ kyuubi-beeline -u jdbc:kyuubi://localhost:10009 -n username

  2. Connect using simple authentication to Kyuubi Server on kyuubi.local:10009 using -n for username and -p for password.
  $ kyuubi-beeline -n username -p password -u jdbc:kyuubi://kyuubi.local:10009

  3. Connect using Kerberos authentication with kyuubi/localhost@mydomain.com as Kyuubi Server principal (kinit is required before connecting).
  $ kyuubi-beeline -u "jdbc:kyuubi://kyuubi.local:10009/default;kyuubiServerPrincipal=kyuubi/localhost@mydomain.com"

  4. Connect using Kerberos authentication using principal and keytab directly.
  $ kyuubi-beeline -u "jdbc:kyuubi://kyuubi.local:10009/default;kyuubiClientPrincipal=user@mydomain.com;kyuubiClientKeytab=/local/path/client.keytab;kyuubiServerPrincipal=kyuubi/localhost@mydomain.com"

  5. Connect using SSL connection to Kyuubi Server on localhost:10009.
  $ kyuubi-beeline -u "jdbc:kyuubi://localhost:10009/default;ssl=true;sslTrustStore=/usr/local/truststore;trustStorePassword=mytruststorepassword"

  6. Connect using LDAP authentication.
  $ kyuubi-beeline -u jdbc:kyuubi://kyuubi.local:10009/default -n ldap-username -p ldap-password

  7. Connect using the ZooKeeper address to Kyuubi HA cluster.
  $ kyuubi-beeline -u "jdbc:kyuubi://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n username
```
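
As a usage note, the HA connection string from example 7 can be composed in a wrapper script. The quorum hosts and namespace below are placeholders taken from that example, not real endpoints:

```shell
# Compose a Kyuubi HA JDBC URL from a ZooKeeper quorum (placeholder hosts),
# following example 7 above, then pass it to kyuubi-beeline.
ZK_QUORUM="zk1:2181,zk2:2181,zk3:2181"
JDBC_URL="jdbc:kyuubi://${ZK_QUORUM}/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi"
echo "$JDBC_URL"
# kyuubi-beeline -u "$JDBC_URL" -n username
```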
If you have any suggested revisions, I will promptly respond and make the necessary adjustments.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

---

# Checklist 📝

- [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6300 from dupen01/issue-6251.

Closes #6251

69f9b3dc9 [dupeng] deleted usage() from KyuubiBeeLine.java;updated some examples
463bcfb85 [dupeng] update --delimiter description
95e13c0b6 [dupeng] update some example message
6625f99b5 [dupeng] update cmd-usage
6570ec255 [dupeng] update cmd-usage
59ea1a495 [dupeng] add kyuubi-beeline help message

Authored-by: dupeng <dunett@163.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
2024-04-12 19:21:08 +08:00


# Apache Kyuubi

Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.

## What is Kyuubi?

Kyuubi provides a pure SQL gateway through the Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.

- A HiveServer2-like API
- Multi-tenant Spark Support
- Running Spark in a serverless way

## Target Users

Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and maybe other engines soon) and to help users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, only a human language: SQL. Sometimes, even SQL skills are unnecessary when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.

In typical big data production environments with Kyuubi, there should be system administrators and end-users.

- System administrators: a small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
- End-users: focus on their own business data, not where it is stored or how it is computed.

Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.

## Usage scenarios

### Port workloads from HiveServer2 to Spark SQL

In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.

Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 with even better performance.

HiveServer2 can identify and authenticate a caller, and then, if the caller also has permissions for the YARN queue and HDFS files, the request succeeds; otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, or control access for callers beyond the single user inside the whole system. On the other hand, the Thrift Server is coupled to the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or support high availability mechanisms such as load balancing, since it is stateful.

Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers, finally gaining the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engine dramatically improves the client concurrency and stability of the service itself.

### DataLake/Lakehouse Support

The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.

- Logical View support via Kyuubi DataLake Metadata APIs
- Multiple Catalogs support
- SQL Standard Authorization support for DataLake (coming)

### Cloud Native Support

Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.

## The Kyuubi Ecosystem (present and future)

The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development, and others would not be possible without your help.

## Online Documentation

## Quick Start

Ready? Getting Started with Kyuubi.

## Contributing

## Project & Community Status

## Aside

The project took its name from a character of a popular Japanese manga - Naruto. The character is named Kyuubi Kitsune/Kurama, which is a nine-tailed fox in mythology. Kyuubi spread the power and spirit of fire, which is used here to represent the powerful Apache Spark. Its nine tails stand for end-to-end multi-tenancy support of this project.