Commit Graph

98 Commits

Author SHA1 Message Date
Bogdan Raducanu
4e7a2363b9 Support for TPC-H benchmark
Refactored TPC-DS code to be able to reuse it for TPC-H.
Added TPC-H query texts adapted for Spark.
2017-08-09 12:26:32 +02:00
Juliusz Sompolski
6488d74d23 tpcds_2_4: Add alias names to subqueries in FROM clause.
## What changes were proposed in this pull request?

Since SPARK-20690 and SPARK-20916, Spark requires all subqueries in the FROM clause to have an alias name.

## How was this patch tested?

Tested on SF1.
2017-06-29 16:04:08 +02:00
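The alias fix above can be illustrated with a small sketch. This is not the repository's code (the actual change edits the query files by hand, in Scala): a hypothetical Python helper that appends a generated alias to FROM-clause subqueries that lack one.

```python
import re

def alias_from_subqueries(sql: str) -> str:
    """Rough sketch: append a generated alias (sq1, sq2, ...) to subqueries
    that are directly followed by a clause keyword, i.e. have no alias.
    Real SQL needs a proper parser; this ignores nesting, string literals,
    and subqueries at the very end of a statement."""
    n = 0
    def add_alias(m):
        nonlocal n
        n += 1
        return f") sq{n} {m.group(1)}"
    return re.sub(r"\)\s+(WHERE|GROUP BY|ORDER BY|LIMIT)", add_alias, sql)
```

An already-aliased subquery (`... ) x WHERE ...`) is left untouched, since the keyword no longer directly follows the closing parenthesis.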
Juliusz Sompolski
bff6b34f62 Tweaks and improvements (#106)
Data generation:
* Add an option to change Dates to Strings, specified in the Tables object creator.
* Add partition discovery to createExternalTables.
* Add an analyzeTables function that gathers statistics.

Benchmark execution:
* Perform collect() on the DataFrame so that the query is recorded in the SQL tab of the Spark UI.
2017-06-13 11:42:14 +02:00
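The dates-to-strings option above can be sketched as follows. This is an illustration only, not the library's API: when the (hypothetical) flag is enabled, DATE values are emitted as ISO-8601 strings so the table can be created with a string-typed schema.

```python
from datetime import date

def generate_row(row, use_string_for_date=False):
    """Sketch of the dates-to-strings option: when enabled, DATE values
    are emitted as ISO-8601 strings instead of date objects. Field and
    option names here are illustrative, not the real generator's API."""
    if not use_string_for_date:
        return row
    return [v.isoformat() if isinstance(v, date) else v for v in row]
```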
Juliusz Sompolski
2ddd521ab5 ok, make it long only where really needed. 2017-05-26 10:36:40 +02:00
Juliusz Sompolski
1bca964a3d Correct types of keys 2017-05-25 17:12:47 +02:00
vlyubin
c0bd21c2ec Add ss_max 2017-05-16 10:29:00 +02:00
vlyubin
e5dc6f338f Updated queries 23 2017-05-15 17:30:20 +02:00
vlyubin
e8f85b0b0e Moved queries into a separate folder 2017-05-15 14:22:37 +02:00
vlyubin
96bf10bffc Add tpcds 2.4 queries 2017-05-12 11:54:32 +02:00
Eric Liang
64728c7cff Add option to avoid cleaning after each run, to enable parallel runs 2017-03-14 19:45:27 -07:00
Timothy Hunter
53091a1935 Removes labels from tree data generation (#82)
* changes

* removes labels

* reset scala version

* adding metadata

* bumping spark release
2016-12-13 16:47:31 -08:00
srinathshankar
0eaa4b1d57 [SC-4409] Correct query 41 in TPCDS kit (#90) 2016-09-30 18:02:39 -07:00
Timothy Hunter
948c8369e7 Fixes issues with scala 2.11
Fixes the usual scala-logging issues to make the source code cross-compile between Scala 2.10 and Scala 2.11.

Tests:
A Scala 2.11 build of the code has been run against the official Spark 2.0.0 RC4 binary release (Scala 2.11).
A Scala 2.10 build has been run against the official Spark 1.6.2 release.

Author: Timothy Hunter <timhunter@databricks.com>

Closes #81 from thunterdb/1607-scala211.
2016-07-19 11:19:52 -07:00
Joseph K. Bradley
51469a34d6 Fixed tree, forest, GBT tests by adding metadata to DataFrames 2016-07-11 10:33:19 -07:00
Timothy Hunter
c7d42d3626 adding parameters 2016-07-06 11:23:07 -07:00
Timothy Hunter
2672bcd5b7 ALS algorithm for spark-sql-perf
This has been tested locally with a small amount of data.

I have not bothered to reimplement a more robust version of the ALS synthetic data generation, so it will still require some manual parameter tweaking as before.

Author: Timothy Hunter <timhunter@databricks.com>

Closes #76 from thunterdb/1607-als.
2016-07-05 15:54:08 -07:00
Timothy Hunter
40e97ca3c0 comment 2016-07-05 15:01:50 -07:00
Timothy Hunter
ce7e20ae6d set the solver 2016-07-05 13:46:19 -07:00
Timothy Hunter
def20479a1 linear regression 2016-07-05 13:42:56 -07:00
Joseph K. Bradley
9d11a601c3 added kmeans test 2016-07-01 18:00:49 -07:00
Joseph K. Bradley
495e2716c4 updated per code review. works in local tests 2016-07-01 17:39:28 -07:00
Timothy Hunter
813bd8ad59 adding more experiments 2016-07-01 10:34:42 -07:00
Joseph K. Bradley
c15d083fe7 cleanups 2016-06-30 10:45:15 -07:00
Joseph K. Bradley
ecf2eedbb8 Added decision tree, forest, GBT tests 2016-06-30 10:38:24 -07:00
Joseph K. Bradley
33a1e55366 partly done adding decision tree tests 2016-06-29 17:06:27 -07:00
Timothy Hunter
353dc0c873 comment 2016-06-28 12:00:04 -07:00
Timothy Hunter
5c1990e4ff no normalization 2016-06-27 13:32:38 -07:00
Timothy Hunter
87dc42a466 work on glm, and some notebooks 2016-06-23 12:13:11 -07:00
Timothy Hunter
1388722b81 Initial commit for adding MLlib reporting in spark-sql-perf
This PR adds basic MLlib infrastructure to run some benchmarks against ML pipelines.

There are two ways to describe and run ML pipelines:
 - programmatically, in Scala (see MLBenchmarks.scala)
 - using a simple YAML file (see mllib-small.yaml for an example)
The YAML approach is preferred because it programmatically generates the Cartesian product of all the experiments to run and validates the types of the objects in the YAML file.

In both cases, all the ML experiments are standard benchmarks.

This PR also moves some code in `Benchmark.scala`: the current code generates path-dependent structural signatures and confuses IntelliJ.

It does not include tests, but some small benchmarks can be run locally against a Spark 2 installation:

```
$SPARK_HOME/bin/spark-shell --jars $PWD/target/scala-2.10/spark-sql-perf-assembly-0.4.9-SNAPSHOT.jar
```
and then:

```scala
com.databricks.spark.sql.perf.mllib.MLLib.run(yamlFile="src/main/scala/configs/mllib-small.yaml")
```

Author: Timothy Hunter <timhunter@databricks.com>

Closes #69 from thunterdb/1605-mllib2.
2016-06-22 16:59:49 -07:00
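The Cartesian-product expansion the PR describes can be sketched in a few lines. This is an illustration, not the repository's Scala code: parameter names are made up, and the real implementation also validates the type of every value read from the YAML file.

```python
from itertools import product

def expand_experiments(params):
    """Sketch of expanding a parsed YAML config into the Cartesian product
    of all experiment settings. Keys are sorted only to make the output
    order stable; parameter names below are illustrative."""
    keys = sorted(params)
    return [dict(zip(keys, combo))
            for combo in product(*(params[k] for k in keys))]

# e.g. two feature sizes x two regularization values -> 4 experiments
runs = expand_experiments({"numFeatures": [100, 1000], "regParam": [0.01, 0.1]})
```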
Davies Liu
ea342c6165 fix checking results and bump to 0.4.9 2016-06-17 12:53:12 -07:00
Eric Liang
0d1e9043f1 [SC-3547] Fix various typos in queries and bump version to 0.48 2016-06-14 12:27:24 -07:00
Davies Liu
c087b68a5c make number of partitions configurable 2016-05-24 10:40:51 -07:00
Sameer Agarwal
1840fd9f21 Fix/rewrite some TPC-DS 1.4 queries
This patch ports upstream query modifications from apache/spark#13188
2016-05-23 14:02:47 -07:00
Sameer Agarwal
0355fc4ee7 Fix build and switch to jdk8
* Fix Build

* more memory

* switch to jdk8

* old memory settings
2016-05-23 12:54:07 -07:00
Sameer Agarwal
10b90c0d2b Fix q8 in ImpalaKit 2016-04-29 14:07:31 -07:00
Davies Liu
656f1bdb17 fix writing results 2016-03-30 11:56:55 -07:00
Michael Armbrust
5912673b0d Fix JoinPerformance compilation
Author: Michael Armbrust <michael@databricks.com>

Closes #55 from marmbrus/fixJoinPerf.
2016-03-25 11:46:36 -07:00
Josh Rosen
42a415e8d4 Extract Query class from Benchmark into its own top-level class and make SparkContext field transient
This patch extracts `Query` into its own top-level class and makes its `sparkContext` field transient in order to fix `NotSerializableException`s.

Author: Josh Rosen <rosenville@gmail.com>

Closes #53 from JoshRosen/make-query-into-top-level-class.
2016-02-22 18:23:06 -08:00
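The transient-field fix above has a close Python analogy, shown here as a sketch: the unserializable context is dropped from the pickled state, much like marking the Scala `sparkContext` field `@transient`. Class and field names are illustrative, not the repository's API.

```python
import pickle

class Query:
    """Python analogy for a @transient field: __getstate__ excludes the
    context from serialization, so pickling succeeds even though the
    context itself (here, a lambda) is not serializable."""
    def __init__(self, name, context):
        self.name = name
        self.context = context  # e.g. a live connection; not serializable

    def __getstate__(self):
        state = self.__dict__.copy()
        state["context"] = None  # transient: excluded from serialization
        return state

# A lambda cannot be pickled, yet the round trip succeeds because
# __getstate__ drops it before serialization.
restored = pickle.loads(pickle.dumps(Query("q1", context=lambda: None)))
```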
Josh Rosen
7e38b77c50 Update to compile against Spark 2.0.0-SNAPSHOT and bump version to 0.4.0-SNAPSHOT
Author: Josh Rosen <rosenville@gmail.com>

Closes #51 from JoshRosen/spark-2.0.0.
2016-02-19 13:02:29 -08:00
Josh Rosen
685ed9e488 Add TPCDS(sqlContext) constructor for backwards-compatibility
This patch adds additional constructors to `TPCDS` to maintain backwards-compatibility with code which calls `new TPCDS(anExistingSqlContext)`. This constructor was removed in #47.

The motivation for backwards-compatibility here is to simplify the gradual roll-out of an updated spark-sql-perf library to some existing jobs which share the same notebook.

Author: Josh Rosen <rosenville@gmail.com>

Closes #52 from JoshRosen/backwards-compatible-tpcds-constructor.
2016-02-19 13:01:23 -08:00
Michael Armbrust
9d3347e949 Improvements to running the benchmark
- Scripts for running the benchmark either while working on spark-sql-perf (bin/run) or while working on Spark (bin/spark-perf). The latter uses Spark's sbt build to compile Spark and downloads the most recent published version of spark-sql-perf.
 - Adds a `--compare` flag that can be used to compare the results with a baseline run

Author: Michael Armbrust <michael@databricks.com>

Closes #49 from marmbrus/runner.
2016-01-24 20:24:54 -08:00
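A baseline comparison of the kind `--compare` enables might look like the following sketch. This is illustrative only (the real flag reads a recorded baseline run; the structure and names here are assumptions): per query, it reports the ratio of current to baseline execution time.

```python
def compare_runs(current, baseline):
    """Sketch of a --compare style report: per-query ratio of current to
    baseline execution time (times in seconds). A ratio below 1.0 means
    the current run was faster; None means no baseline entry exists."""
    return {q: (t / baseline[q] if q in baseline else None)
            for q, t in current.items()}
```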
Michael Armbrust
663ca7560e Main Class for running Benchmarks from the command line
This PR adds the ability to run performance tests locally as a standalone program that reports the results to the console:

```
$ bin/run --help
spark-sql-perf 0.2.0
Usage: spark-sql-perf [options]

  -b <value> | --benchmark <value>
        the name of the benchmark to run
  -f <value> | --filter <value>
        a filter on the name of the queries to run
  -i <value> | --iterations <value>
        the number of iterations to run
  --help
        prints this usage text

$ bin/run --benchmark DatasetPerformance
```

Author: Michael Armbrust <michael@databricks.com>

Closes #47 from marmbrus/MainClass.
2016-01-19 12:37:51 -08:00
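The usage text above, transcribed into Python's argparse purely for illustration (the actual runner is Scala and parses these options itself):

```python
import argparse

def build_parser():
    """Illustrative transcription of the bin/run options shown above."""
    p = argparse.ArgumentParser(prog="spark-sql-perf")
    p.add_argument("-b", "--benchmark",
                   help="the name of the benchmark to run")
    p.add_argument("-f", "--filter",
                   help="a filter on the name of the queries to run")
    p.add_argument("-i", "--iterations", type=int,
                   help="the number of iterations to run")
    return p

args = build_parser().parse_args(["--benchmark", "DatasetPerformance", "-i", "3"])
```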
Davies Liu
cec648ac0f try to run all TPCDS queries in the benchmark (even those that can't be parsed) 2016-01-08 15:03:44 -08:00
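The behaviour that commit describes can be sketched as follows. This is an illustration, not the repository's code: attempt every query and record failures instead of aborting, so queries that cannot even be parsed still show up in the results. All names here are hypothetical.

```python
def run_all(queries, run_one):
    """Run every query via the caller-supplied run_one function, recording
    failures instead of aborting the whole benchmark."""
    results = {}
    for name, sql in queries.items():
        try:
            results[name] = ("ok", run_one(sql))
        except Exception as exc:
            results[name] = ("failed", str(exc))
    return results

def fake_runner(sql):
    # stand-in for submitting the SQL to Spark
    if "bad" in sql:
        raise ValueError("cannot parse")
    return 42

results = run_all({"q1": "select 1", "q2": "bad syntax"}, fake_runner)
```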
Davies Liu
3105219fb0 Merge commit '11d1f9dd7237ea2a09ecfa61f09d7623ad52fd47' 2016-01-08 11:29:07 -08:00
Davies Liu
11d1f9dd72 update some queries:
* " -> ` (replace double quotes with backquotes)
* fill in some values
2016-01-08 11:27:50 -08:00
Michael Armbrust
9269f8f594 Capture BuildInfo when available
Author: Michael Armbrust <michael@databricks.com>

Closes #45 from marmbrus/buildInfo.
2015-12-23 11:03:06 -08:00
Michael Armbrust
f8aa93d968 Initial set of tests for Datasets
Author: Michael Armbrust <michael@databricks.com>

Closes #42 from marmbrus/dataset-tests.
2015-12-08 16:04:42 -08:00
Michael Armbrust
0aa2569a18 Write only one file per run
Author: Michael Armbrust <michael@databricks.com>

Closes #35 from marmbrus/oneResultFile.
2015-12-08 15:46:20 -08:00
Yin Huai
3af656defa Make ExecutionMode.HashResults handle null value
In Spark 1.6, if a value is null, `getLong` throws an exception; before 1.6 it returned 0. With this PR, we check whether the result is null and, if so, return null instead of 0.

Author: Yin Huai <yhuai@databricks.com>

Closes #41 from yhuai/fixSumHash.
2015-12-08 15:28:48 -08:00
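The null-handling idea in the fix above can be sketched like this. It is an illustration only, not the `ExecutionMode.HashResults` implementation: a null is folded into the hash as a distinct marker rather than coerced to 0, so `[None]` and `[0]` hash differently.

```python
import hashlib

def hash_results(rows):
    """Sketch of null-tolerant result hashing: None is hashed as a
    distinct marker instead of being coerced to 0."""
    digest = hashlib.md5()
    for row in rows:
        for value in row:
            digest.update(b"<null>" if value is None else str(value).encode())
            digest.update(b"|")  # field separator
    return digest.hexdigest()
```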
Nong Li
43c2f23bb9 Fixes for Q34 and Q73 to return results deterministically.
Author: Nong Li <nong@databricks.com>

Closes #38 from nongli/tpcds.
2015-11-25 15:03:33 -08:00
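The determinism problem the fix addresses can be sketched as follows. This is illustrative only (the actual fix adjusts the SQL of Q34 and Q73): when a query lacks a total ordering, tied rows can come back in any order between runs, and sorting on every column makes run-to-run comparison deterministic.

```python
def canonicalize(rows):
    """Sort result rows on all columns (stringified for a uniform key)
    so two runs of the same query can be compared deterministically."""
    return sorted(rows, key=lambda row: tuple(str(v) for v in row))
```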