# 🔍 Description

## Issue References 🔗

This pull request fixes #6251

## Describe Your Solution 🔧

As `kyuubi-beeline` is derived from Hive `beeline`, I haven't made extensive modifications to its help message, only adjusted some formatting to improve its appearance. Below are the specific changes:

1. Replace `jdbc:hive2://` with `jdbc:kyuubi://`.
2. Replace `beeline` with `kyuubi-beeline`.
3. Capitalize the first letter of each description.
4. If the description of a parameter spans multiple lines, add an extra `\n` at the end of the description to separate it from the following line.
5. Append the help information for `--python-mode` immediately before `--help`.
6. Add some examples.
7. Improve the indentation.

Here is the whole help message:

```
$ ./bin/kyuubi-beeline --help
Usage: kyuubi-beeline <options>

Options:
  -u <database url>                 The JDBC URL to connect to.
  -c <named url>                    The named JDBC URL to connect to, which should be present in
                                    beeline-site.xml as the value of beeline.kyuubi.jdbc.url.<namedUrl>.
  -r                                Reconnect to last saved connect url (in conjunction with !save).
  -n <username>                     The username to connect as.
  -p <password>                     The password to connect as.
  -d <driver class>                 The driver class to use.
  -i <init file>                    Script file for initialization.
  -e <query>                        Query that should be executed.
  -f <exec file>                    Script file that should be executed.
  -w, --password-file <file>        The password file to read password from.
  --hiveconf property=value         Use value for given property.
  --hivevar name=value              Hive variable name and value. This is Hive specific settings in which
                                    variables can be set at session level and referenced in Hive commands
                                    or queries.
  --property-file=<property file>   The file to read connection properties (url, driver, user, password) from.
  --color=[true|false]              Control whether color is used for display.
  --showHeader=[true|false]         Show column names in query results.
  --escapeCRLF=[true|false]         Show carriage return and line feeds in query results as escaped \r and \n.
  --headerInterval=ROWS             The interval between which headers are displayed.
  --fastConnect=[true|false]        Skip building table/column list for tab-completion.
  --autoCommit=[true|false]         Enable/disable automatic transaction commit.
  --verbose=[true|false]            Show verbose error messages and debug info.
  --showWarnings=[true|false]       Display connection warnings.
  --showDbInPrompt=[true|false]     Display the current database name in the prompt.
  --showNestedErrs=[true|false]     Display nested errors.
  --numberFormat=[pattern]          Format numbers using DecimalFormat pattern.
  --force=[true|false]              Continue running script even after errors.
  --maxWidth=MAXWIDTH               The maximum width of the terminal.
  --maxColumnWidth=MAXCOLWIDTH      The maximum width to use when displaying columns.
  --silent=[true|false]             Be more silent.
  --autosave=[true|false]           Automatically save preferences.
  --outputformat=<format mode>      Format mode for result display. The available options are
                                    [table|vertical|csv2|tsv2|dsv|csv|tsv|json|jsonfile].
                                    Note that csv and tsv are deprecated, use csv2, tsv2 instead.
  --incremental=[true|false]        Defaults to false. When set to false, the entire result set is fetched
                                    and buffered before being displayed, yielding optimal display column
                                    sizing. When set to true, result rows are displayed immediately as they
                                    are fetched, yielding lower latency and memory usage at the price of
                                    extra display column padding. Setting --incremental=true is recommended
                                    if you encounter an OutOfMemory on the client side (due to the fetched
                                    result set size being large). Only applicable if --outputformat=table.
  --incrementalBufferRows=NUMROWS   The number of rows to buffer when printing rows on stdout, defaults to
                                    1000; only applicable if --incremental=true and --outputformat=table.
  --truncateTable=[true|false]      Truncate table column when it exceeds length.
  --delimiterForDSV=DELIMITER       Specify the delimiter for delimiter-separated values output format
                                    (default: |).
  --isolation=LEVEL                 Set the transaction isolation level.
  --nullemptystring=[true|false]    Set to true to get historic behavior of printing null as empty string.
  --maxHistoryRows=MAXHISTORYROWS   The maximum number of rows to store beeline history.
  --delimiter=DELIMITER             Set the query delimiter; multi-char delimiters are allowed, but quotation
                                    marks, slashes, and -- are not allowed (default: ;).
  --convertBinaryArrayToString=[true|false]
                                    Display binary column data as string or as byte array.
  --python-mode                     Execute python code/script.
  -h, --help                        Display this message.

Examples:
  1. Connect using simple authentication to Kyuubi Server on localhost:10009.
     $ kyuubi-beeline -u jdbc:kyuubi://localhost:10009 -n username

  2. Connect using simple authentication to Kyuubi Server on kyuubi.local:10009 using -n for username and -p for password.
     $ kyuubi-beeline -n username -p password -u jdbc:kyuubi://kyuubi.local:10009

  3. Connect using Kerberos authentication with kyuubi/localhost@mydomain.com as the Kyuubi Server principal (kinit is required before connection).
     $ kyuubi-beeline -u "jdbc:kyuubi://kyuubi.local:10009/default;kyuubiServerPrincipal=kyuubi/localhost@mydomain.com"

  4. Connect using Kerberos authentication with a principal and keytab directly.
     $ kyuubi-beeline -u "jdbc:kyuubi://kyuubi.local:10009/default;kyuubiClientPrincipal=user@mydomain.com;kyuubiClientKeytab=/local/path/client.keytab;kyuubiServerPrincipal=kyuubi/localhost@mydomain.com"

  5. Connect using an SSL connection to Kyuubi Server on localhost:10009.
     $ kyuubi-beeline -u "jdbc:kyuubi://localhost:10009/default;ssl=true;sslTrustStore=/usr/local/truststore;trustStorePassword=mytruststorepassword"

  6. Connect using LDAP authentication.
     $ kyuubi-beeline -u jdbc:kyuubi://kyuubi.local:10009/default -n ldap-username -p ldap-password

  7. Connect using the ZooKeeper address to a Kyuubi HA cluster.
     $ kyuubi-beeline -u "jdbc:kyuubi://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi" -n username
```
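For context on the `-c <named url>` option above: the named URL is looked up in `beeline-site.xml` under `beeline.kyuubi.jdbc.url.<namedUrl>`. Here is a minimal sketch of such an entry, assuming the usual Hadoop-style configuration XML layout; the name `dev` and the URL value are placeholders:

```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Hypothetical named URL "dev"; kyuubi-beeline -c dev would resolve to this value. -->
  <property>
    <name>beeline.kyuubi.jdbc.url.dev</name>
    <value>jdbc:kyuubi://kyuubi.local:10009/default</value>
  </property>
</configuration>
```

With such an entry in place, `kyuubi-beeline -c dev` should behave like passing the full URL via `-u`.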
If you have any suggested revisions, I will promptly respond and make the necessary adjustments.

## Types of changes 🔖

- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Test Plan 🧪

#### Behavior Without This Pull Request ⚰️

#### Behavior With This Pull Request 🎉

#### Related Unit Tests

---

# Checklist 📝

- [ ] This patch was not authored or co-authored using [Generative Tooling](https://www.apache.org/legal/generative-tooling.html)

**Be nice. Be informative.**

Closes #6300 from dupen01/issue-6251.

Closes #6251

69f9b3dc9 [dupeng] deleted usage() from KyuubiBeeLine.java; updated some examples
463bcfb85 [dupeng] update --delimiter description
95e13c0b6 [dupeng] update some example message
6625f99b5 [dupeng] update cmd-usage
6570ec255 [dupeng] update cmd-usage
59ea1a495 [dupeng] add kyuubi-beeline help message

Authored-by: dupeng <dunett@163.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
Apache Kyuubi
Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless SQL on data warehouses and lakehouses.
What is Kyuubi?
Kyuubi provides a pure SQL gateway through a Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark on the client side. On the server side, the multi-tenant architecture of the Kyuubi server and engines provides administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.
- A HiveServer2-like API
- Multi-tenant Spark Support
- Running Spark in a serverless way
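To make the JDBC/ODBC gateway concrete, below is a minimal sketch of a plain JDBC client. It assumes the Kyuubi Hive JDBC (shaded) driver is on the classpath and that its driver class is named `org.apache.kyuubi.jdbc.KyuubiHiveDriver`; the endpoint, user, and query are placeholders.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KyuubiJdbcExample {
    public static void main(String[] args) throws Exception {
        // Assumed driver class name; only needed if the jar does not self-register the driver.
        Class.forName("org.apache.kyuubi.jdbc.KyuubiHiveDriver");
        // Placeholder endpoint: a Kyuubi server listening on the default 10009 port.
        String url = "jdbc:kyuubi://localhost:10009/default";
        try (Connection conn = DriverManager.getConnection(url, "username", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```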
Target Users
Kyuubi's goal is to make it easy and efficient for anyone to use Spark (and perhaps other engines soon) and to help users handle big data like ordinary data. Here, anyone means that users do not need a Spark technical background, only a human language: SQL. Sometimes even SQL skills are unnecessary, for example when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.
In typical big data production environments with Kyuubi, there should be system administrators and end-users.
- System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
- End-users: Focus on their own business data, not where it is stored or how it is computed.
Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.
Usage scenarios
Port workloads from HiveServer2 to Spark SQL
In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by providing a way to set different permissions for specific users/groups.
Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 and get even better performance.
HiveServer2 can identify and authenticate a caller, and then, if the caller also has permissions for the YARN queue and HDFS files, it succeeds; otherwise, it fails. However, on the one hand, STS is a single Spark application. The user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor can it control access for individual callers, since the whole system runs as a single user. On the other hand, the Thrift Server runs inside the Spark driver's JVM process. This tightly coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or apply high availability mechanisms such as load balancing, as it is stateful.
Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers to finally gain the ability of resource sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engines dramatically improves the client concurrency and the stability of the service itself.
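As a hedged illustration of what per-user isolation can look like in practice (a sketch, not the project's prescribed recipe): each user connects with their own identity and can hand engine-side Spark settings, such as the YARN queue, to their own engine. Treat the exact URL syntax for passing Spark configs (the part after `#`) and the queue name `team_a` as assumptions to verify against the configuration docs:

```
# Hypothetical example: user 'alice' connects with her own identity, and her engine
# runs on the YARN queue 'team_a' with its own resource settings.
$ kyuubi-beeline -n alice \
    -u "jdbc:kyuubi://kyuubi.local:10009/default;#spark.yarn.queue=team_a;spark.executor.memory=4g"
```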
DataLake/Lakehouse Support
The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.
- Logical View support via Kyuubi DataLake Metadata APIs
- Multiple Catalogs support
- SQL Standard Authorization support for DataLake (coming)
Cloud Native Support
Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN, Kubernetes, etc.
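As a rough sketch, which cluster manager the engines run on is largely a matter of the Spark settings handed to them, for example in `conf/kyuubi-defaults.conf`; the values below are placeholders, and the exact keys should be checked against the configuration docs:

```
# Run Spark engines on YARN
spark.master=yarn

# ...or on Kubernetes (placeholder API server address and namespace)
# spark.master=k8s://https://kubernetes.default.svc:443
# spark.kubernetes.namespace=kyuubi
```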
The Kyuubi Ecosystem(present and future)
The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development, and others would not be possible without your help.
Online Documentation
Quick Start
Ready? Getting Started with Kyuubi.
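A minimal sketch of a first run, assuming an unpacked binary release with `JAVA_HOME` set (the distribution directory name is abbreviated with a wildcard):

```
# Start the Kyuubi server with the default settings
$ cd apache-kyuubi-*-bin
$ bin/kyuubi start

# Connect with the bundled beeline; 10009 is the default frontend port
$ bin/kyuubi-beeline -u "jdbc:kyuubi://localhost:10009/" -n "$USER"
```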
Contributing
Project & Community Status
Aside
The project took its name from a character in a popular Japanese manga - Naruto.
The character is named Kyuubi Kitsune/Kurama, which is a nine-tailed fox in mythology.
Kyuubi spread the power and spirit of fire, which is used here to represent the powerful Apache Spark.
Its nine tails stand for end-to-end multi-tenancy support of this project.


