[KYUUBI #3772] Switch default catalog to tpcds in playground

### _Why are the changes needed?_

The Kyuubi JDBC driver has supported setting the catalog in the JDBC URL since #3516, but that change is not released yet. To unblock third-party apps integrating with Kyuubi, I propose changing the default catalog of the playground to tpcds, so that clients can see the tables immediately after starting the playground.
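To illustrate (a hedged sketch: `localhost:10009` is Kyuubi's default frontend port, and `tiny` is the TPC-DS schema referenced by the load scripts below; both are assumptions about the playground setup), a client connecting with e.g. `beeline -u 'jdbc:hive2://localhost:10009'` can browse the TPC-DS tables with no extra setup:

```sql
-- With spark.sql.defaultCatalog=tpcds, unqualified identifiers resolve
-- against the TPC-DS catalog immediately after connecting:
SHOW TABLES IN tiny;                    -- catalog_sales, customer, ...
SELECT COUNT(*) FROM tiny.call_center;  -- no USE CATALOG step required
```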

### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible

- [ ] Add screenshots for manual tests if appropriate

- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #3772 from pan3793/catalog-tpcds.


99670366 [Cheng Pan] load
c590c636 [Cheng Pan] Switch default catalog to tpcds in playground

Authored-by: Cheng Pan <chengpan@apache.org>
Signed-off-by: Cheng Pan <chengpan@apache.org>
Cheng Pan 2022-11-07 18:33:47 +08:00
parent 20fca4cfa4
commit a9c2547903
3 changed files with 5 additions and 5 deletions


```diff
@@ -16,9 +16,9 @@
 SET tiny_schema=tpcds.tiny;
-CREATE DATABASE IF NOT EXISTS tpcds_tiny;
+CREATE DATABASE IF NOT EXISTS spark_catalog.tpcds_tiny;
-USE tpcds_tiny;
+USE spark_catalog.tpcds_tiny;
 --
 -- Name: catalog_sales; Type: TABLE; Tablespace:
```
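Why both load scripts gain the `spark_catalog.` prefix: with `tpcds` as the default catalog, the unqualified `CREATE DATABASE`/`USE` above would resolve against the read-only TPC-DS connector catalog, so the writable Hive-backed session catalog now has to be named explicitly. A minimal sketch of the pattern (whether the playground materializes tables via CTAS like this is my assumption; the qualified DDL is the point):

```sql
-- Unqualified DDL would target the read-only tpcds catalog and fail;
-- qualify the writable session catalog instead:
CREATE DATABASE IF NOT EXISTS spark_catalog.tpcds_tiny;
USE spark_catalog.tpcds_tiny;
-- Data can still be copied out of the connector catalog, e.g.:
CREATE TABLE IF NOT EXISTS catalog_sales
  USING parquet AS SELECT * FROM tpcds.tiny.catalog_sales;
```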


```diff
@@ -16,9 +16,9 @@
 SET tiny_schema=tpch.tiny;
-CREATE DATABASE IF NOT EXISTS tpch_tiny;
+CREATE DATABASE IF NOT EXISTS spark_catalog.tpch_tiny;
-USE tpch_tiny;
+USE spark_catalog.tpch_tiny;
 --
 -- Name: customer; Type: TABLE; Tablespace:
```


```diff
@@ -22,7 +22,7 @@ spark.driver.host=0.0.0.0
 spark.ui.port=4040
 spark.sql.shuffle.partitions=16
 spark.sql.warehouse.dir=s3a://spark-bucket/warehouse
-spark.sql.defaultCatalog=spark_catalog
+spark.sql.defaultCatalog=tpcds
 spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
 spark.hadoop.fs.s3a.bucket.spark-bucket.committer.magic.enabled=true
 spark.hadoop.fs.s3a.bucket.iceberg-bucket.committer.magic.enabled=true
```
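For context, `spark.sql.defaultCatalog` only controls which catalog unqualified identifiers resolve against; the `tpcds` catalog itself is presumably registered elsewhere in this file via a `spark.sql.catalog.tpcds=...` entry (the kyuubi-spark-connector-tpcds catalog, not shown in this hunk). The resulting resolution, sketched:

```sql
SELECT COUNT(*) FROM tiny.call_center;                  -- tpcds.tiny.call_center
SELECT COUNT(*) FROM spark_catalog.tpcds_tiny.customer; -- Hive-backed session catalog
USE spark_catalog.default;                              -- switch a session back explicitly
```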