# Z-Ordering Support
To improve query speed, Kyuubi supports Z-Ordering to optimize the layout of data stored in all kinds of storage with various data formats.
Please check our benchmark report here.
## Introduction
The following picture shows the workflow of z-order. It involves three parties:

- Upstream

  Due to the extra sort, the upstream job will run a little slower than before.

- Table

  Z-order gives good data clustering, so the compression ratio can be improved.

- Downstream

  Downstream read performance improves thanks to data skipping: Parquet and ORC files automatically collect column statistics (e.g. minimum and maximum values) when data is written, and good data clustering makes pushed-down filters more effective.
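The clustering property comes from bit interleaving. Here is a minimal Python sketch of the idea (a toy Morton-code illustration, not Kyuubi's actual implementation): interleaving the bits of two column values gives rows that are close in *both* columns nearby z-values, so they end up stored together.

```python
# Toy sketch of the bit-interleaving idea behind z-order (Morton code).
# Not Kyuubi's implementation; for illustration only.

def z_value(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y into a single z-value."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # bit i of x -> even position
        z |= ((y >> i) & 1) << (2 * i + 1)  # bit i of y -> odd position
    return z

# Sorting rows by z-value clusters points that are close in (c1, c2):
# they land in the same file / row group, so min/max statistics stay
# tight and filter pushdown can skip more data.
rows = [(x, y) for x in range(4) for y in range(4)]
rows.sort(key=lambda r: z_value(*r))
print(rows[:4])  # -> [(0, 0), (1, 0), (0, 1), (1, 1)], the low corner
```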
## Supported table format
| Table Format | Supported |
|---|---|
| parquet | Y |
| orc | Y |
| json | N |
| csv | N |
| text | N |
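Parquet and ORC are supported because they keep per-column statistics that enable data skipping. The effect can be sketched as follows (a toy model of min/max-based skipping, not the real Parquet/ORC reader): each file records a min and max per column, and a pushed-down range filter skips any file whose range cannot match.

```python
# Toy model of min/max based data skipping, as enabled by parquet/orc
# row-group statistics. Not a real file-format reader.
files = [
    {"min": 0,  "max": 9,  "rows": list(range(0, 10))},
    {"min": 10, "max": 19, "rows": list(range(10, 20))},
    {"min": 20, "max": 29, "rows": list(range(20, 30))},
]

def scan(files, lo, hi):
    """Read only files whose [min, max] range overlaps the filter [lo, hi]."""
    read, skipped = [], 0
    for f in files:
        if f["max"] < lo or f["min"] > hi:
            skipped += 1  # whole file skipped using its statistics
            continue
        read += [r for r in f["rows"] if lo <= r <= hi]
    return read, skipped

rows, skipped = scan(files, 12, 15)
print(rows, skipped)  # -> [12, 13, 14, 15] 2
```

The tighter the min/max ranges (which is exactly what good z-order clustering produces), the more files a filter can skip.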
## Supported column data type
| Column Data Type | Supported |
|---|---|
| byte | Y |
| short | Y |
| int | Y |
| long | Y |
| float | Y |
| double | Y |
| boolean | Y |
| string | Y |
| decimal | Y |
| date | Y |
| timestamp | Y |
| array | N |
| map | N |
| struct | N |
| udt | N |
## How to use
This feature is part of the Kyuubi extension, so you should apply the extension to Spark with the following steps.
- add the extension jar:

  ```shell
  cp $KYUUBI_HOME/extension/kyuubi-extension-spark-3-5* $SPARK_HOME/jars/
  ```

- add the config into `spark-defaults.conf`:

  ```
  spark.sql.extensions=org.apache.kyuubi.sql.KyuubiSparkSQLExtension
  ```
## Optimize history data
If you want to optimize the history data of a table, the `OPTIMIZE ...` syntax is the way to go. Because Spark SQL doesn't support reading and overwriting the same datasource table, this syntax only supports optimizing Hive tables.
### Syntax

```sql
OPTIMIZE table_name [WHERE predicate] ZORDER BY col_name1 [, ...]
```
Note that the predicate only supports the partition spec.
### Examples

```sql
OPTIMIZE t1 ZORDER BY c3;
OPTIMIZE t1 ZORDER BY c1, c2;
OPTIMIZE t1 WHERE day = '2021-12-01' ZORDER BY c1, c2;
```
## Optimize incremental data
Kyuubi supports optimizing a table automatically for incremental data, e.g. a time-partitioned table. The only thing you need to do is add the Kyuubi properties to the target table properties:

```sql
ALTER TABLE t1 SET TBLPROPERTIES('kyuubi.zorder.enabled'='true','kyuubi.zorder.cols'='c1,c2');
```
- the key `kyuubi.zorder.enabled` decides whether the table allows Kyuubi to optimize it by z-order.
- the key `kyuubi.zorder.cols` decides which columns are used to optimize by z-order.
Kyuubi will detect these properties and optimize the SQL using z-order during SQL compilation, so you can enjoy z-order with all table-writing commands like:

```sql
INSERT INTO TABLE t1 PARTITION() ...;
INSERT OVERWRITE TABLE t1 PARTITION() ...;
CREATE TABLE t1 AS SELECT ...;
```
