kyuubi/docs/security/hadoop_credentials_manager.md
[KYUUBI #1090] Add deployment document about Hadoop Credentials Manager

Authored-by: zhouyifan279 <zhouyifan279@gmail.com>
Signed-off-by: Kent Yao <yao@apache.org>
2021-09-15 10:02:09 +08:00

# Hadoop Credentials Manager

To pass the authentication of a Kerberos-secured Hadoop cluster, Kyuubi currently submits engines in two ways:

  1. with the current Kerberos user and the extra `spark-submit` argument `--proxy-user`;
  2. with `spark.kerberos.principal` and `spark.kerberos.keytab` specified.
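The two submission modes can be sketched as `spark-submit` invocations (the arguments shown are illustrative fragments, not complete command lines):

```shell
# Mode 1: submit as the server's Kerberos user, impersonating the session user
spark-submit --proxy-user <session-user> ...

# Mode 2: submit with the engine's own principal and keytab
spark-submit \
  --conf spark.kerberos.principal=<principal> \
  --conf spark.kerberos.keytab=</path/to/keytab> \
  ...
```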

If an engine is submitted with `--proxy-user` specified, its delegation tokens for Hadoop cluster services are obtained by the current Kerberos user and cannot be renewed by the engine itself.
Thus, the engine's lifetime is limited by the lifetime of those delegation tokens.
To remove this limitation, Kyuubi renews delegation tokens on the server side in the Hadoop Credentials Manager.

An engine submitted with a principal and keytab can renew delegation tokens by itself. But for implementation simplicity, the Kyuubi server also renews delegation tokens for it.

## Configurations

### Cluster Services

Kyuubi currently supports renewing delegation tokens of Hadoop filesystems and Hive metastore servers.

### Hadoop client configurations

Set `HADOOP_CONF_DIR` in `$KYUUBI_HOME/conf/kyuubi-env.sh` if it hasn't been set yet, e.g.

```shell
$ echo "export HADOOP_CONF_DIR=/path/to/hadoop/conf" >> $KYUUBI_HOME/conf/kyuubi-env.sh
```

Extra Hadoop filesystems can be specified in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` via `kyuubi.credentials.hadoopfs.uris` as a comma-separated list.
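For example (the cluster URIs below are hypothetical placeholders):

```properties
# kyuubi-defaults.conf — request delegation tokens for extra HDFS clusters
kyuubi.credentials.hadoopfs.uris=hdfs://cluster2:8020,hdfs://cluster3:8020
```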

### Hive metastore configurations

#### Via kyuubi-defaults.conf

Specify Hive metastore configurations in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`. The Hadoop Credentials Manager loads these configurations when it is initialized.
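A sketch of what this might look like for a typical Kerberos-secured metastore (the host name and principal are placeholders):

```properties
# kyuubi-defaults.conf — Hive metastore connection and Kerberos settings
hive.metastore.uris=thrift://metastore-host:9083
hive.metastore.sasl.enabled=true
hive.metastore.kerberos.principal=hive/_HOST@EXAMPLE.COM
```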

#### Via hive-site.xml

Place your copy of `hive-site.xml` into `$KYUUBI_HOME/conf`, and Kyuubi will load this config file onto its classpath.
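For illustration, a minimal `hive-site.xml` might look like the following (the metastore URI is a placeholder):

```xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```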

Configurations provided this way have lower priority than those in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`.

#### Via JDBC Connection URL

Hive configurations specified in the JDBC connection URL are ignored by the Hadoop Credentials Manager, as it is initialized when the Kyuubi server starts.

## Credentials Renewal

| Key | Default | Meaning | Type | Since |
| --- | --- | --- | --- | --- |
| kyuubi.credentials.hadoopfs.enabled | true | Whether to renew Hadoop filesystem delegation tokens | boolean | 1.4.0 |
| kyuubi.credentials.hadoopfs.uris | | Extra Hadoop filesystem URIs for which to request delegation tokens. The filesystem that hosts fs.defaultFS does not need to be listed here. | seq | 1.4.0 |
| kyuubi.credentials.hive.enabled | true | Whether to renew Hive metastore delegation token | boolean | 1.4.0 |
| kyuubi.credentials.renewal.interval | PT1H | How often Kyuubi renews one user's delegation tokens | duration | 1.4.0 |
| kyuubi.credentials.renewal.retry.wait | PT1M | How long to wait before retrying to fetch new credentials after a failure | duration | 1.4.0 |
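As an illustration, the renewal behavior can be tuned in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`. The duration values use the ISO-8601 duration format (e.g. `PT30M` for 30 minutes); the values below are examples, not recommendations:

```properties
# Renew each user's delegation tokens every 30 minutes,
# and wait 2 minutes before retrying after a failed fetch
kyuubi.credentials.renewal.interval=PT30M
kyuubi.credentials.renewal.retry.wait=PT2M
```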