Introduction to the Kyuubi Configurations System
Kyuubi provides several ways to configure the system and corresponding engines.
Environments
You can configure environment variables in $KYUUBI_HOME/conf/kyuubi-env.sh, e.g., JAVA_HOME; this Java runtime will then be used both by the Kyuubi server instance and the applications it launches. You can also change the variables in the subprocess's env configuration file, e.g. $SPARK_HOME/conf/spark-env.sh, to use more specific ENV settings for SQL engine applications.
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# - JAVA_HOME Java runtime to use. By default use "java" from PATH.
#
#
# - KYUUBI_CONF_DIR Directory containing the Kyuubi configurations to use.
# (Default: $KYUUBI_HOME/conf)
# - KYUUBI_LOG_DIR Directory for Kyuubi server-side logs.
# (Default: $KYUUBI_HOME/logs)
# - KYUUBI_PID_DIR Directory stores the Kyuubi instance pid file.
# (Default: $KYUUBI_HOME/pid)
# - KYUUBI_MAX_LOG_FILES Maximum number of Kyuubi server logs can rotate to.
# (Default: 5)
# - KYUUBI_JAVA_OPTS JVM options for the Kyuubi server itself in the form "-Dx=y".
# (Default: none).
# - KYUUBI_CTL_JAVA_OPTS JVM options for the Kyuubi ctl itself in the form "-Dx=y".
# (Default: none).
# - KYUUBI_BEELINE_OPTS JVM options for the Kyuubi BeeLine in the form "-Dx=Y".
# (Default: none)
# - KYUUBI_NICENESS The scheduling priority for Kyuubi server.
# (Default: 0)
# - KYUUBI_WORK_DIR_ROOT Root directory for launching sql engine applications.
# (Default: $KYUUBI_HOME/work)
# - HADOOP_CONF_DIR Directory containing the Hadoop / YARN configuration to use.
#
# - SPARK_HOME Spark distribution which you would like to use in Kyuubi.
# - SPARK_CONF_DIR Optional directory where the Spark configuration lives.
# (Default: $SPARK_HOME/conf)
#
## Examples ##
# export JAVA_HOME=/usr/jdk64/jdk1.8.0_152
# export SPARK_HOME=/opt/spark
# export HADOOP_CONF_DIR=/usr/ndp/current/mapreduce_client/conf
# export KYUUBI_JAVA_OPTS="-Xmx10g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark -XX:MaxDirectMemorySize=1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./logs -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:./logs/kyuubi-server-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=5M -XX:NewRatio=3 -XX:MetaspaceSize=512m"
# export KYUUBI_BEELINE_OPTS="-Xmx2g -XX:+UnlockDiagnosticVMOptions -XX:ParGCCardsPerStrideChunk=4096 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:+UseCondCardMark"
For environment variables that only need to be transferred to the engine side, you can set them with a Kyuubi configuration item formatted as kyuubi.engineEnv.VAR_NAME. For example, with kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g, the environment variable SPARK_DRIVER_MEMORY with value 4g would be transferred to the engine side. With kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf, the value of SPARK_CONF_DIR on the engine side is set to /apache/confs/spark/conf.
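This mechanism can be expressed directly in $KYUUBI_HOME/conf/kyuubi-defaults.conf. The fragment below is a minimal sketch using the two example variables mentioned above; the values are illustrative, not recommendations:

```properties
# Transferred to the engine side as SPARK_DRIVER_MEMORY=4g
kyuubi.engineEnv.SPARK_DRIVER_MEMORY=4g
# Transferred to the engine side as SPARK_CONF_DIR=/apache/confs/spark/conf
kyuubi.engineEnv.SPARK_CONF_DIR=/apache/confs/spark/conf
```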
Kyuubi Configurations
You can configure the Kyuubi properties in $KYUUBI_HOME/conf/kyuubi-defaults.conf. For example:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
## Kyuubi Configurations
#
# kyuubi.authentication NONE
# kyuubi.frontend.bind.host localhost
# kyuubi.frontend.bind.port 10009
#
# Details in https://kyuubi.apache.org/docs/latest/deployment/settings.html
Authentication
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.authentication | NONE | A comma separated list of client authentication types. | seq | 1.0.0 |
| kyuubi.authentication.custom.class | <undefined> | User-defined authentication implementation of org.apache.kyuubi.service.authentication.PasswdAuthenticationProvider | string | 1.3.0 |
| kyuubi.authentication.ldap.base.dn | <undefined> | LDAP base DN. | string | 1.0.0 |
| kyuubi.authentication.ldap.domain | <undefined> | LDAP domain. | string | 1.0.0 |
| kyuubi.authentication.ldap.guidKey | uid | LDAP attribute name whose values are unique in this LDAP server. For example: uid or cn. | string | 1.2.0 |
| kyuubi.authentication.ldap.url | <undefined> | SPACE character separated LDAP connection URL(s). | string | 1.0.0 |
| kyuubi.authentication.sasl.qop | auth | Sasl QOP enables higher levels of protection for Kyuubi communication with clients. | string | 1.0.0 |
Backend
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.backend.engine.exec.pool.keepalive.time | PT1M | Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in SQL engine applications | duration | 1.0.0 |
| kyuubi.backend.engine.exec.pool.shutdown.timeout | PT10S | Timeout(ms) for the operation execution thread pool to terminate in SQL engine applications | duration | 1.0.0 |
| kyuubi.backend.engine.exec.pool.size | 100 | Number of threads in the operation execution thread pool of SQL engine applications | int | 1.0.0 |
| kyuubi.backend.engine.exec.pool.wait.queue.size | 100 | Size of the wait queue for the operation execution thread pool in SQL engine applications | int | 1.0.0 |
| kyuubi.backend.server.event.json.log.path | file:///tmp/kyuubi/events | The location where server events go for the builtin JSON logger | string | 1.4.0 |
| kyuubi.backend.server.event.loggers | | A comma separated list of server history loggers, where session/operation etc events go. | seq | 1.4.0 |
| kyuubi.backend.server.exec.pool.keepalive.time | PT1M | Time(ms) that an idle async thread of the operation execution thread pool will wait for a new task to arrive before terminating in Kyuubi server | duration | 1.0.0 |
| kyuubi.backend.server.exec.pool.shutdown.timeout | PT10S | Timeout(ms) for the operation execution thread pool to terminate in Kyuubi server | duration | 1.0.0 |
| kyuubi.backend.server.exec.pool.size | 100 | Number of threads in the operation execution thread pool of Kyuubi server | int | 1.0.0 |
| kyuubi.backend.server.exec.pool.wait.queue.size | 100 | Size of the wait queue for the operation execution thread pool of Kyuubi server | int | 1.0.0 |
Credentials
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.credentials.hadoopfs.enabled | true | Whether to renew Hadoop filesystem delegation tokens | boolean | 1.4.0 |
| kyuubi.credentials.hadoopfs.uris | | Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here. | seq | 1.4.0 |
| kyuubi.credentials.hive.enabled | true | Whether to renew Hive metastore delegation token | boolean | 1.4.0 |
| kyuubi.credentials.renewal.interval | PT1H | How often Kyuubi renews one user's delegation tokens | duration | 1.4.0 |
| kyuubi.credentials.renewal.retry.wait | PT1M | How long to wait before retrying to fetch new credentials after a failure. | duration | 1.4.0 |
Delegation
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.delegation.key.update.interval | PT24H | unused yet | duration | 1.0.0 |
| kyuubi.delegation.token.gc.interval | PT1H | unused yet | duration | 1.0.0 |
| kyuubi.delegation.token.max.lifetime | PT168H | unused yet | duration | 1.0.0 |
| kyuubi.delegation.token.renew.interval | PT168H | unused yet | duration | 1.0.0 |
Engine
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.engine.connection.url.use.hostname | false | (deprecated) When true, the engine registers with hostname to zookeeper. When Spark runs on K8s in cluster mode, set to false to ensure that the server can connect to the engine | boolean | 1.3.0 |
| kyuubi.engine.deregister.exception.classes | | A comma separated list of exception classes. If there is any exception thrown, whose class matches the specified classes, the engine would deregister itself. | seq | 1.2.0 |
| kyuubi.engine.deregister.exception.messages | | A comma separated list of exception messages. If there is any exception thrown, whose message or stacktrace matches the specified message list, the engine would deregister itself. | seq | 1.2.0 |
| kyuubi.engine.deregister.exception.ttl | PT30M | Time to live(TTL) for exception patterns specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait to be self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures. | duration | 1.2.0 |
| kyuubi.engine.deregister.job.max.failures | 4 | Number of failures of job before deregistering the engine. | int | 1.2.0 |
| kyuubi.engine.event.json.log.path | file:///tmp/kyuubi/events | The location where all the engine events go for the builtin JSON logger. | string | 1.3.0 |
| kyuubi.engine.event.loggers | SPARK | A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default. | seq | 1.3.0 |
| kyuubi.engine.initialize.sql | SHOW DATABASES | Semicolon-separated list of SQL statements to be initialized in the newly created engine before queries, e.g. use SHOW DATABASES to eagerly activate HiveClient. This configuration can not be used in the JDBC url due to the limitation of the Beeline/JDBC driver. | seq | 1.2.0 |
| kyuubi.engine.operation.log.dir.root | engine_operation_logs | Root directory for query operation log at engine-side. | string | 1.4.0 |
| kyuubi.engine.pool.name | engine-pool | The name of engine pool. | string | 1.5.0 |
| kyuubi.engine.pool.size | -1 | The size of engine pool. Note that, if the size is less than 1, the engine pool will not be enabled; otherwise, the size of the engine pool will be min(this, kyuubi.engine.pool.size.threshold). | int | 1.4.0 |
| kyuubi.engine.pool.size.threshold | 9 | This parameter is introduced as a server-side parameter, and controls the upper limit of the engine pool. | int | 1.4.0 |
| kyuubi.engine.session.initialize.sql | | Semicolon-separated list of SQL statements to be initialized in the newly created engine session before queries. This configuration can not be used in the JDBC url due to the limitation of the Beeline/JDBC driver. | seq | 1.3.0 |
| kyuubi.engine.share.level | USER | Engines will be shared in different levels, available configs are: | string | 1.2.0 |
| kyuubi.engine.share.level.sub.domain | <undefined> | (deprecated) - Using kyuubi.engine.share.level.subdomain instead | string | 1.2.0 |
| kyuubi.engine.share.level.subdomain | <undefined> | Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string value that must be a valid zookeeper sub path. For example, for USER share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the USER share level. When the engine pool is disabled, use 'default' if absent. | string | 1.4.0 |
| kyuubi.engine.single.spark.session | false | When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database. | boolean | 1.3.0 |
| kyuubi.engine.type | SPARK_SQL | Specify the detailed engine supported by Kyuubi. The engine type binds to the SESSION scope. This configuration is experimental. Currently, available configs are: | string | 1.4.0 |
| kyuubi.engine.ui.retainedSessions | 200 | The number of SQL client sessions kept in the Kyuubi Query Engine web UI. | int | 1.4.0 |
| kyuubi.engine.ui.retainedStatements | 200 | The number of statements kept in the Kyuubi Query Engine web UI. | int | 1.4.0 |
| kyuubi.engine.ui.stop.enabled | true | When true, allows Kyuubi engine to be killed from the Spark Web UI. | boolean | 1.3.0 |
Frontend
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.frontend.backoff.slot.length | PT0.1S | (deprecated) Time to back off during login to the thrift frontend service. | duration | 1.0.0 |
| kyuubi.frontend.bind.host | <undefined> | (deprecated) Hostname or IP of the machine on which to run the thrift frontend service via binary protocol. | string | 1.0.0 |
| kyuubi.frontend.bind.port | 10009 | (deprecated) Port of the machine on which to run the thrift frontend service via binary protocol. | int | 1.0.0 |
| kyuubi.frontend.connection.url.use.hostname | false | When true, frontend services prefer hostname, otherwise, ip address | boolean | 1.5.0 |
| kyuubi.frontend.login.timeout | PT20S | (deprecated) Timeout for Thrift clients during login to the thrift frontend service. | duration | 1.0.0 |
| kyuubi.frontend.max.message.size | 104857600 | (deprecated) Maximum message size in bytes a Kyuubi server will accept. | int | 1.0.0 |
| kyuubi.frontend.max.worker.threads | 999 | (deprecated) Maximum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.0.0 |
| kyuubi.frontend.min.worker.threads | 9 | (deprecated) Minimum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.0.0 |
| kyuubi.frontend.mysql.bind.host | <undefined> | Hostname or IP of the machine on which to run the MySQL frontend service. | string | 1.4.0 |
| kyuubi.frontend.mysql.bind.port | 3309 | Port of the machine on which to run the MySQL frontend service. | int | 1.4.0 |
| kyuubi.frontend.mysql.max.worker.threads | 999 | Maximum number of threads in the command execution thread pool for the MySQL frontend service | int | 1.4.0 |
| kyuubi.frontend.mysql.min.worker.threads | 9 | Minimum number of threads in the command execution thread pool for the MySQL frontend service | int | 1.4.0 |
| kyuubi.frontend.mysql.netty.worker.threads | <undefined> | Number of threads in the netty worker event loop of the MySQL frontend service. Uses min(cpu_cores, 8) by default. | int | 1.4.0 |
| kyuubi.frontend.mysql.worker.keepalive.time | PT1M | Time(ms) that an idle async thread of the command execution thread pool will wait for a new task to arrive before terminating in MySQL frontend service | duration | 1.4.0 |
| kyuubi.frontend.protocols | THRIFT_BINARY | A comma separated list for all frontend protocols | seq | 1.4.0 |
| kyuubi.frontend.rest.bind.host | <undefined> | Hostname or IP of the machine on which to run the REST frontend service. | string | 1.4.0 |
| kyuubi.frontend.rest.bind.port | 10099 | Port of the machine on which to run the REST frontend service. | int | 1.4.0 |
| kyuubi.frontend.thrift.backoff.slot.length | PT0.1S | Time to back off during login to the thrift frontend service. | duration | 1.4.0 |
| kyuubi.frontend.thrift.binary.bind.host | <undefined> | Hostname or IP of the machine on which to run the thrift frontend service via binary protocol. | string | 1.4.0 |
| kyuubi.frontend.thrift.binary.bind.port | 10009 | Port of the machine on which to run the thrift frontend service via binary protocol. | int | 1.4.0 |
| kyuubi.frontend.thrift.login.timeout | PT20S | Timeout for Thrift clients during login to the thrift frontend service. | duration | 1.4.0 |
| kyuubi.frontend.thrift.max.message.size | 104857600 | Maximum message size in bytes a Kyuubi server will accept. | int | 1.4.0 |
| kyuubi.frontend.thrift.max.worker.threads | 999 | Maximum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.4.0 |
| kyuubi.frontend.thrift.min.worker.threads | 9 | Minimum number of threads in the frontend worker thread pool for the thrift frontend service | int | 1.4.0 |
| kyuubi.frontend.thrift.worker.keepalive.time | PT1M | Keep-alive time (in milliseconds) for an idle worker thread | duration | 1.4.0 |
| kyuubi.frontend.worker.keepalive.time | PT1M | (deprecated) Keep-alive time (in milliseconds) for an idle worker thread | duration | 1.0.0 |
HA
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.ha.zookeeper.acl.enabled | false | Set to true if the zookeeper ensemble is kerberized | boolean | 1.0.0 |
| kyuubi.ha.zookeeper.auth.digest | <undefined> | The digest auth string used for zookeeper authentication, like: username:password. | string | 1.3.2 |
| kyuubi.ha.zookeeper.auth.keytab | <undefined> | Location of the Kyuubi server's keytab used for zookeeper authentication. | string | 1.3.2 |
| kyuubi.ha.zookeeper.auth.principal | <undefined> | Name of the Kerberos principal used for zookeeper authentication. | string | 1.3.2 |
| kyuubi.ha.zookeeper.auth.type | NONE | The type of zookeeper authentication, all candidates are | string | 1.3.2 |
| kyuubi.ha.zookeeper.connection.base.retry.wait | 1000 | Initial amount of time to wait between retries to the zookeeper ensemble | int | 1.0.0 |
| kyuubi.ha.zookeeper.connection.max.retries | 3 | Max retry times for connecting to the zookeeper ensemble | int | 1.0.0 |
| kyuubi.ha.zookeeper.connection.max.retry.wait | 30000 | Max amount of time to wait between retries for BOUNDED_EXPONENTIAL_BACKOFF policy can reach, or max time until elapsed for UNTIL_ELAPSED policy to connect the zookeeper ensemble | int | 1.0.0 |
| kyuubi.ha.zookeeper.connection.retry.policy | EXPONENTIAL_BACKOFF | The retry policy for connecting to the zookeeper ensemble, all candidates are: | string | 1.0.0 |
| kyuubi.ha.zookeeper.connection.timeout | 15000 | The timeout(ms) of creating the connection to the zookeeper ensemble | int | 1.0.0 |
| kyuubi.ha.zookeeper.engine.auth.type | NONE | The type of zookeeper authentication for engine, all candidates are | string | 1.3.2 |
| kyuubi.ha.zookeeper.namespace | kyuubi | The root directory for the service to deploy its instance uri | string | 1.0.0 |
| kyuubi.ha.zookeeper.node.creation.timeout | PT2M | Timeout for creating zookeeper node | duration | 1.2.0 |
| kyuubi.ha.zookeeper.publish.configs | false | When set to true, publish Kerberos configs to Zookeeper. Note that the Hive driver needs to be greater than 1.3 or 2.0 or apply HIVE-11581 patch. | boolean | 1.4.0 |
| kyuubi.ha.zookeeper.quorum | | The connection string for the zookeeper ensemble | string | 1.0.0 |
| kyuubi.ha.zookeeper.session.timeout | 60000 | The timeout(ms) of a connected session to be idled | int | 1.0.0 |
Kinit
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.kinit.interval | PT1H | How often will Kyuubi server run kinit -kt [keytab] [principal] to renew the local Kerberos credentials cache | duration | 1.0.0 |
| kyuubi.kinit.keytab | <undefined> | Location of Kyuubi server's keytab. | string | 1.0.0 |
| kyuubi.kinit.max.attempts | 10 | How many times will kinit process retry | int | 1.0.0 |
| kyuubi.kinit.principal | <undefined> | Name of the Kerberos principal. | string | 1.0.0 |
Metrics
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.metrics.console.interval | PT5S | How often to report metrics to the console | duration | 1.2.0 |
| kyuubi.metrics.enabled | true | Set to true to enable kyuubi metrics system | boolean | 1.2.0 |
| kyuubi.metrics.json.interval | PT5S | How often to report metrics to the json file | duration | 1.2.0 |
| kyuubi.metrics.json.location | metrics | Where the json metrics file is located | string | 1.2.0 |
| kyuubi.metrics.prometheus.path | /metrics | URI context path of prometheus metrics HTTP server | string | 1.2.0 |
| kyuubi.metrics.prometheus.port | 10019 | Prometheus metrics HTTP server port | int | 1.2.0 |
| kyuubi.metrics.reporters | JSON | A comma separated list for all metrics reporters | seq | 1.2.0 |
| kyuubi.metrics.slf4j.interval | PT5S | How often to report metrics to the SLF4J logger | duration | 1.2.0 |
Operation
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.operation.idle.timeout | PT3H | Operation will be closed when it's not accessed for this duration of time | duration | 1.0.0 |
| kyuubi.operation.interrupt.on.cancel | true | When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished. | boolean | 1.2.0 |
| kyuubi.operation.language | SQL | Choose a programming language for the following inputs | string | 1.5.0 |
| kyuubi.operation.log.dir.root | server_operation_logs | Root directory for query operation log at server-side. | string | 1.4.0 |
| kyuubi.operation.plan.only.mode | NONE | Whether to perform the statement in a PARSE, ANALYZE, OPTIMIZE only way without executing the query. When it is NONE, the statement will be fully executed | string | 1.4.0 |
| kyuubi.operation.query.timeout | <undefined> | Timeout for query executions at server-side; takes effect together with the client-side timeout (java.sql.Statement.setQueryTimeout): a running query will be cancelled automatically on timeout. It's off by default, which means only the client side takes full control of whether the query should time out or not. If set, the client-side timeout is capped at this point. To cancel queries right away without waiting for tasks to finish, consider enabling kyuubi.operation.interrupt.on.cancel together. | duration | 1.2.0 |
| kyuubi.operation.scheduler.pool | <undefined> | The scheduler pool of job. Note that, this config should be used after changing the Spark config spark.scheduler.mode=FAIR. | string | 1.1.1 |
| kyuubi.operation.status.polling.max.attempts | 5 | Max attempts for long polling an asynchronously running sql query's status on raw transport failures, e.g. TTransportException | int | 1.4.0 |
| kyuubi.operation.status.polling.timeout | PT5S | Timeout(ms) for long polling an asynchronously running sql query's status | duration | 1.0.0 |
Server
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.server.name | <undefined> | The name of Kyuubi Server. | string | 1.5.0 |
Session
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.session.check.interval | PT5M | The check interval for session timeout. | duration | 1.0.0 |
| kyuubi.session.conf.ignore.list | | A comma separated list of ignored keys. If the client connection contains any of them, the key and the corresponding value will be removed silently during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax. | seq | 1.2.0 |
| kyuubi.session.conf.restrict.list | | A comma separated list of restricted keys. If the client connection contains any of them, the connection will be rejected explicitly during engine bootstrap and connection setup. Note that this rule is for server-side protection defined via administrators to prevent some essential configs from tampering but will not forbid users to set dynamic configurations via SET syntax. | seq | 1.2.0 |
| kyuubi.session.engine.check.interval | PT1M | The check interval for engine timeout | duration | 1.0.0 |
| kyuubi.session.engine.flink.main.resource | <undefined> | The package used to create Flink SQL engine remote job. If it is undefined, Kyuubi will use the default | string | 1.4.0 |
| kyuubi.session.engine.idle.timeout | PT30M | Engine timeout; the engine will self-terminate when it's not accessed for this duration. 0 or negative means not to self-terminate. | duration | 1.0.0 |
| kyuubi.session.engine.initialize.timeout | PT3M | Timeout for starting the background engine, e.g. SparkSQLEngine. | duration | 1.0.0 |
| kyuubi.session.engine.launch.async | true | When opening a kyuubi session, whether to launch the backend engine asynchronously. When true, the Kyuubi server will set up the connection with the client without delay as the backend engine will be created asynchronously. | boolean | 1.4.0 |
| kyuubi.session.engine.log.timeout | PT24H | If we use Spark as the engine then the session submit log is the console output of spark-submit. We will retain the session submit log until over the config value. | duration | 1.1.0 |
| kyuubi.session.engine.login.timeout | PT15S | The timeout of creating the connection to remote sql query engine | duration | 1.0.0 |
| kyuubi.session.engine.request.timeout | PT1M | The timeout of awaiting response after sending request to remote sql query engine | duration | 1.4.0 |
| kyuubi.session.engine.share.level | USER | (deprecated) - Using kyuubi.engine.share.level instead | string | 1.0.0 |
| kyuubi.session.engine.spark.main.resource | <undefined> | The package used to create Spark SQL engine remote application. If it is undefined, Kyuubi will use the default | string | 1.0.0 |
| kyuubi.session.engine.startup.error.max.size | 8192 | During engine bootstrapping, if an error occurs, use this config to limit the length of the error message (characters). | int | 1.1.0 |
| kyuubi.session.engine.startup.maxLogLines | 10 | The maximum number of engine log lines when errors occur during the engine startup phase. Note that this max lines is for client-side to help track engine startup issues. | int | 1.4.0 |
| kyuubi.session.engine.trino.connection.catalog | <undefined> | The default catalog that trino engine will connect to | string | 1.5.0 |
| kyuubi.session.engine.trino.connection.url | <undefined> | The server url that trino engine will connect to | string | 1.5.0 |
| kyuubi.session.engine.trino.main.resource | <undefined> | The package used to create Trino engine remote job. If it is undefined, Kyuubi will use the default | string | 1.5.0 |
| kyuubi.session.idle.timeout | PT6H | Session idle timeout; the session will be closed when it's not accessed for this duration | duration | 1.2.0 |
| kyuubi.session.name | <undefined> | A human readable name of the session, empty string by default. This name will be recorded in the event. Note that, we only apply this value from session conf. | string | 1.4.0 |
| kyuubi.session.timeout | PT6H | (deprecated) Session timeout; the session will be closed when it's not accessed for this duration | duration | 1.0.0 |
Zookeeper
| Key | Default | Meaning | Type | Since |
|---|---|---|---|---|
| kyuubi.zookeeper.embedded.client.port | 2181 | clientPort for the embedded zookeeper server to listen for client connections; a client here could be the Kyuubi server, an engine, or a JDBC client | int | 1.2.0 |
| kyuubi.zookeeper.embedded.client.port.address | <undefined> | clientPortAddress for the embedded zookeeper server to | string | 1.2.0 |
| kyuubi.zookeeper.embedded.data.dir | embedded_zookeeper | dataDir for the embedded zookeeper server where it stores the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database. | string | 1.2.0 |
| kyuubi.zookeeper.embedded.data.log.dir | embedded_zookeeper | dataLogDir for the embedded zookeeper server where it writes the transaction log. | string | 1.2.0 |
| kyuubi.zookeeper.embedded.directory | embedded_zookeeper | The temporary directory for the embedded zookeeper server | string | 1.0.0 |
| kyuubi.zookeeper.embedded.max.client.connections | 120 | maxClientCnxns for the embedded zookeeper server; limits the number of concurrent connections of a single client identified by IP address | int | 1.2.0 |
| kyuubi.zookeeper.embedded.max.session.timeout | 60000 | maxSessionTimeout in milliseconds that the embedded zookeeper server will allow the client to negotiate. Defaults to 20 times the tickTime | int | 1.2.0 |
| kyuubi.zookeeper.embedded.min.session.timeout | 6000 | minSessionTimeout in milliseconds that the embedded zookeeper server will allow the client to negotiate. Defaults to 2 times the tickTime | int | 1.2.0 |
| kyuubi.zookeeper.embedded.port | 2181 | The port of the embedded zookeeper server | int | 1.0.0 |
| kyuubi.zookeeper.embedded.tick.time | 3000 | tickTime in milliseconds for the embedded zookeeper server | int | 1.2.0 |
Spark Configurations
Via spark-defaults.conf
Setting them in $SPARK_HOME/conf/spark-defaults.conf supplies default values for the SQL engine application. Available properties can be found in the Spark official online documentation for Spark Configurations
Via kyuubi-defaults.conf
Setting them in $KYUUBI_HOME/conf/kyuubi-defaults.conf also supplies default values for the SQL engine application. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf
Via JDBC Connection URL
Setting them in the JDBC Connection URL supplies session-specific settings for each SQL engine. For example: jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g
- Runtime SQL Configuration
  - For Runtime SQL Configurations, they will take effect every time
- Static SQL and Spark Core Configuration
  - For Static SQL Configurations and other Spark core configs, e.g. spark.executor.memory, they will take effect if there is no existing SQL engine application. Otherwise, they will just be ignored
Via SET Syntax
Please refer to the Spark official online documentation for SET Command
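For example, a runtime SQL configuration can be adjusted inside an open session with a SET statement (the value below is illustrative):

```sql
SET spark.sql.shuffle.partitions=2;
```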
Logging
Kyuubi uses Apache Log4j2 for logging. You can configure it using $KYUUBI_HOME/conf/log4j2.properties.
```properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Set everything to be logged to the console
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = STDOUT

# Console Appender
appender.console.type = Console
appender.console.name = STDOUT
appender.console.target = SYSTEM_OUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss.SSS} %p %c: %m%n

appender.console.filter.1.type = Filters

appender.console.filter.1.a.type = ThresholdFilter
appender.console.filter.1.a.level = info

# SPARK-34128: Suppress undesirable TTransportException warnings, due to THRIFT-4805
appender.console.filter.1.b.type = RegexFilter
appender.console.filter.1.b.regex = .*Thrift error occurred during processing of message.*
appender.console.filter.1.b.onMatch = deny
appender.console.filter.1.b.onMismatch = neutral

# Set the default kyuubi-ctl log level to ERROR. When running the kyuubi-ctl, the
# log level for this class is used to overwrite the root logger's log level.
logger.ctl.name = org.apache.kyuubi.ctl.ServiceControlCli
logger.ctl.level = error

# Analysis MySQLFrontend protocol traffic
# logger.mysql.name = org.apache.kyuubi.server.mysql.codec
# logger.mysql.level = trace

# Kyuubi BeeLine
logger.beeline.name = org.apache.hive.beeline.KyuubiBeeLine
logger.beeline.level = error
```
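Log4j2's properties syntax also makes it easy to write logs to a file. For example, a rolling file appender can be configured alongside the console appender — the appender name `rolling` and the file paths below are illustrative, not part of the shipped template:

```properties
# Hypothetical rolling file appender; names and paths are examples
appender.rolling.type = RollingFile
appender.rolling.name = ROLLING
appender.rolling.fileName = logs/kyuubi.log
appender.rolling.filePattern = logs/kyuubi-%d{yyyy-MM-dd}.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{HH:mm:ss.SSS} %p %c: %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy

# Attach it to the root logger in addition to STDOUT
rootLogger.appenderRef.rolling.ref = ROLLING
```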
Other Configurations
Hadoop Configurations
Specify HADOOP_CONF_DIR pointing to the directory that contains the Hadoop configuration files, or treat them as Spark properties with a spark.hadoop. prefix. Please refer to the Spark official online documentation for Inheriting Hadoop Cluster Configuration. Also, please refer to the Apache Hadoop online documentation for an overview of how to configure Hadoop.
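For instance, with the spark.hadoop. prefix, Hadoop settings can be placed directly in kyuubi-defaults.conf — the host and values below are placeholders for your cluster:

```properties
# Placeholder values; adjust for your environment
spark.hadoop.fs.defaultFS=hdfs://namenode:8020
spark.hadoop.dfs.replication=2
```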
Hive Configurations
These configurations are used by the SQL engine application to talk to the Hive MetaStore and can be configured in a hive-site.xml. Place it in the $SPARK_HOME/conf directory, or treat them as Spark properties with a spark.hadoop. prefix.
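A minimal hive-site.xml pointing the engine at a remote Hive MetaStore might look like the sketch below; the thrift URI is a placeholder:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Placeholder address; replace with your metastore host and port -->
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```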
User Defaults
In Kyuubi, we can configure user default settings to meet separate needs. These user defaults override system defaults, but will be overridden by those from the JDBC Connection URL or SET command if present. They take effect when creating the SQL engine application ONLY.
User default settings are in the form of ___{username}___.{config key}. There are three consecutive underscores (_) on each side of the username, and a dot (.) separates the config key from the prefix. For example:
```properties
# For system defaults
spark.master=local
spark.sql.adaptive.enabled=true
# For a user named kent
___kent___.spark.master=yarn
___kent___.spark.sql.adaptive.enabled=false
# For a user named bob
___bob___.spark.master=spark://master:7077
___bob___.spark.executor.memory=8g
```
In the above case, if there are no related configurations from the JDBC Connection URL, kent will run his SQL engine application on YARN with the Spark AQE off, while bob will run his SQL engine application on a Spark standalone cluster with 8g of heap memory for each executor and follow the Kyuubi system default for the Spark AQE. Users who have no custom configurations will simply use the system defaults.
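The precedence rule above can be sketched in a few lines of Python. This is an illustration of the resolution logic, not Kyuubi's actual implementation: plain keys act as system defaults, and keys carrying a matching ___{username}___. prefix overlay them for that user.

```python
# Sketch of user-default resolution: ___{user}___.{key} overrides {key}
# for that user. Not Kyuubi's real code; the function name is ours.
USER_PREFIX_MARK = "___"

def resolve_for_user(conf: dict, user: str) -> dict:
    marker = f"___{user}___."
    resolved = {}
    # 1. Start from system defaults (keys without any ___user___. prefix).
    for key, value in conf.items():
        if not key.startswith(USER_PREFIX_MARK):
            resolved[key] = value
    # 2. Overlay this user's defaults, stripping the prefix.
    for key, value in conf.items():
        if key.startswith(marker):
            resolved[key[len(marker):]] = value
    return resolved

conf = {
    "spark.master": "local",
    "spark.sql.adaptive.enabled": "true",
    "___kent___.spark.master": "yarn",
    "___kent___.spark.sql.adaptive.enabled": "false",
    "___bob___.spark.master": "spark://master:7077",
    "___bob___.spark.executor.memory": "8g",
}

print(resolve_for_user(conf, "kent"))  # kent's effective settings
print(resolve_for_user(conf, "bob"))   # bob's effective settings
```

Running it for kent yields spark.master=yarn with AQE disabled, for bob a standalone master with 8g executors, and any other user falls back to the plain system defaults.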
