<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-->
<div align=center>
![](../imgs/kyuubi_logo.png)
</div>
# Z-order Benchmark
Z-order is a technique that allows you to map multidimensional data to a single dimension, so that rows which are close in the multidimensional space stay close in the linear order. We ran a performance test based on the Aliyun Databricks Delta test case:
https://help.aliyun.com/document_detail/168137.html?spm=a2c4g.11186623.6.563.10d758ccclYtVb
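
To illustrate the technique itself, here is a minimal sketch of a z-value computed by bit interleaving (an illustration only, not Kyuubi's actual implementation):

```scala
// Interleave the bits of two non-negative ints into a single Long "z-value".
// Sorting rows by this value keeps points that are close in (x, y) close in
// the linear order, which is what makes min/max-based file skipping effective.
def zValue(x: Int, y: Int): Long = {
  var z = 0L
  for (i <- 0 until 32) {
    z |= ((x >> i) & 1L) << (2 * i)     // even bit positions come from x
    z |= ((y >> i) & 1L) << (2 * i + 1) // odd bit positions come from y
  }
  z
}
```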
We prepared data for three scenarios:
1. 10 billion rows in 200 parquet files: big files (~1 GB each)
2. 10 billion rows in 1,000 parquet files: medium files (~200 MB each)
3. 1 billion rows in 10,000 parquet files: small files (~200 KB each)
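
For scale: in the first scenario, 10,000,000,000 rows / 200 files = 50 million rows per file, which matches the ~1.2 GB average file size measured in the results below.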
Test environment:
- spark-3.1.2
- hadoop-2.7.2
- kyuubi-1.4.0

Test steps:

Step 1: Create Hive tables
```scala
spark.sql(s"drop database if exists $dbName cascade")
spark.sql(s"create database if not exists $dbName")
spark.sql(s"use $dbName")
spark.sql(s"create table $connRandomParquet (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connOrderbyOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connOrderby (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorderOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorder (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"show tables").show(false)
```
Step 2: Prepare data for the parquet tables in the three scenarios, using the following record generator:
```scala
import scala.util.Random

// Record schema for the generated connection records
case class ConnRecord(src_ip: String, src_port: Int, dst_ip: String, dst_port: Int)

def randomIPv4(r: Random) = Seq.fill(4)(r.nextInt(256)).mkString(".")
def randomPort(r: Random) = r.nextInt(65536)
def randomConnRecord(r: Random) = ConnRecord(
  src_ip = randomIPv4(r), src_port = randomPort(r),
  dst_ip = randomIPv4(r), dst_port = randomPort(r))
```
Step 3: Optimize with z-order on the IP columns only, and separately with ORDER BY on the same sort columns (src_ip, dst_ip), setting the shuffle partition count to the target file number (see the sketch after the statements).
```sql
INSERT overwrite table conn_order_only_ip select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, dst_ip;
OPTIMIZE conn_zorder_only_ip ZORDER BY src_ip, dst_ip;
```
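
A plain ORDER BY insert writes one file per shuffle partition, so the shuffle partition count is set to the target file number before running the statements; a minimal sketch, assuming the 1,000-file scenario:

```sql
-- Match the shuffle partition count to the desired number of output files
SET spark.sql.shuffle.partitions=1000;
```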
Step 4: Optimize with z-order on all four columns, and separately with ORDER BY on the sort columns src_ip, src_port, dst_ip, dst_port, again with the shuffle partition count equal to the target file number.
```sql
INSERT overwrite table conn_order select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, src_port, dst_ip, dst_port;
OPTIMIZE conn_zorder ZORDER BY src_ip, src_port, dst_ip, dst_port;
```
The complete code is as follows
```shell
./spark-shell
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
case class ConnRecord(src_ip: String, src_port: Int, dst_ip: String, dst_port: Int)
val conf = new SparkConf().setAppName("zorder_test")
val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
import spark.implicits._
val sc = spark.sparkContext
sc.setLogLevel("WARN")
//ten billion rows and two hundred files
val numRecords = 10*1000*1000*1000L
val numFiles = 200
val dbName = s"zorder_test_$numFiles"
val baseLocation = s"hdfs://localhost:9000/zorder_test/$dbName/"
val connRandomParquet = "conn_random_parquet"
val connZorderOnlyIp = "conn_zorder_only_ip"
val connZorder = "conn_zorder"
// Table names for the ORDER BY comparison (used by the create statements below)
val connOrderbyOnlyIp = "conn_order_only_ip"
val connOrderby = "conn_order"
spark.conf.set("spark.sql.shuffle.partitions", numFiles)
spark.conf.get("spark.sql.shuffle.partitions")
spark.conf.set("spark.sql.hive.convertMetastoreParquet",false)
spark.sql(s"drop database if exists $dbName cascade")
spark.sql(s"create database if not exists $dbName")
spark.sql(s"use $dbName")
spark.sql(s"create table $connRandomParquet (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connOrderbyOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connOrderby (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorderOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorder (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"show tables").show(false)
import scala.util.Random
// Function for preparing Zorder_Test data
def randomIPv4(r: Random) = Seq.fill(4)(r.nextInt(256)).mkString(".")
def randomPort(r: Random) = r.nextInt(65536)
def randomConnRecord(r: Random) = ConnRecord(
  src_ip = randomIPv4(r), src_port = randomPort(r),
  dst_ip = randomIPv4(r), dst_port = randomPort(r))
val df = spark.range(0, numFiles, 1, numFiles).mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write
  .mode("overwrite")
  .format("parquet")
  .insertInto(connRandomParquet)
spark.read.table(connRandomParquet)
  .write
  .mode("overwrite")
  .format("parquet")
  .insertInto(connZorderOnlyIp)
spark.read.table(connRandomParquet)
  .write
  .mode("overwrite")
  .format("parquet")
  .insertInto(connZorder)
spark.stop()
```
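
An optional sanity check (run before the final `spark.stop()`) confirms the random table is fully loaded; `numRecords` and `connRandomParquet` are the values defined in the script above:

```scala
// Optional: the count should equal numRecords
spark.sql(s"select count(*) from $connRandomParquet").show(false)
```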
Z-order Optimize statement:
```sql
set spark.sql.hive.convertMetastoreParquet=false;
OPTIMIZE conn_zorder_only_ip ZORDER BY src_ip, dst_ip;
OPTIMIZE zorder_test.conn_zorder ZORDER BY src_ip, src_port, dst_ip, dst_port;
```
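
Note that `OPTIMIZE ... ZORDER BY` is not native Spark SQL; it is provided by the Kyuubi Spark SQL extension, which must be on the classpath and registered when the session starts (class name as documented for Kyuubi 1.4; verify it against your version):

```shell
# spark.sql.extensions is a static conf, so set it at launch time
./spark-sql --conf spark.sql.extensions=org.apache.kyuubi.sql.KyuubiSparkSQLExtension
```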
ORDER BY statement:
```sql
INSERT overwrite table conn_order_only_ip select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, dst_ip;
INSERT overwrite table conn_order select src_ip, src_port, dst_ip, dst_port from conn_random_parquet order by src_ip, src_port, dst_ip, dst_port;
```
Query statement:
```sql
set spark.sql.hive.convertMetastoreParquet=true;
select count(*) from conn_random_parquet where src_ip like '157%' and dst_ip like '216.%';
select count(*) from conn_zorder_only_ip where src_ip like '157%' and dst_ip like '216.%';
select count(*) from conn_zorder where src_ip like '157%' and dst_ip like '216.%';
```
## Benchmark result
We ran two performance tests: one compares the efficiency of z-order optimize with an ORDER BY sort, and the other compares queries over z-order optimized data and over random data.

### Efficiency of Z-order Optimize and ORDER BY Sort

**10 billion rows, 1,000 files; resources: 200 cores, 600 GB memory**

Z-order vs. ORDER BY on the IP columns only:

| Table               | Row count      | Optimize time      |
| ------------------- | -------------- | ------------------ |
| conn_order_only_ip | 10,000,000,000 | 1591.99 s |
| conn_zorder_only_ip | 10,000,000,000 | 8371.405 s |

Z-order vs. ORDER BY on all four columns:

| Table               | Row count      | Optimize time      |
| ------------------- | -------------- | ------------------ |
| conn_order | 10,000,000,000 | 1515.298 s |
| conn_zorder | 10,000,000,000 | 11057.194 s |
### Z-order query benchmark result

By querying the tables before and after optimization, we find the following.

**10 billion rows, 200 files; query resources: 200 cores, 600 GB memory**

| Table               | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 1.2 G | 10,000,000,000 | 27.554 s | 0.0% |
| conn_zorder_only_ip | 890 M | 43,170,600 | 2.459 s | 99.568% |
| conn_zorder | 890 M | 54,841,302 | 3.185 s | 99.451% |
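
The skipping ratio is 1 - (scan row count / total row count); for conn_zorder_only_ip above, 1 - 43,170,600 / 10,000,000,000 ≈ 99.568%.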

**10 billion rows, 1,000 files; query resources: 200 cores, 600 GB memory**

| Table               | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 234.8 M | 10,000,000,000 | 27.031 s | 0.0% |
| conn_zorder_only_ip | 173.9 M | 53,499,068 | 3.120 s | 99.465% |
| conn_zorder | 174.0 M | 35,910,500 | 3.103 s | 99.640% |

**1 billion rows, 10,000 files; query resources: 10 cores, 40 GB memory**

| Table               | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 2.7 M | 1,000,000,000 | 76.772 s | 0.0% |
| conn_zorder_only_ip | 2.1 M | 406,572 | 3.963 s | 99.959% |
| conn_zorder         | 2.2 M             | 387,942        | 3.621 s            | 99.961%                  |