[KYUUBI #4749] Fix flaky test issues in SchedulerPoolSuite
### _Why are the changes needed?_

To fix issue https://github.com/apache/kyuubi/issues/4713, PR https://github.com/apache/kyuubi/pull/4714 was submitted, but it suffered from flaky-test issues: out of 50 local runs, it succeeded 38 times and failed 12 times. This PR addresses the flakiness.

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [x] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #4749 from huangzhir/fixtest-schedulerpool.

Closes #4749

2d2e14069 [huangzhir] call KyuubiSparkContextHelper.waitListenerBus() to make sure there are no more events in the Spark event queue
52a34d287 [fwang12] [KYUUBI #4746] Do not recreate async request executor if it has been shut down
d4558ea82 [huangzhir] Merge branch 'master' into fixtest-schedulerpool
44c4cefff [huangzhir] make sure the SparkListener has received the finished events for job1 and job2
8a753e924 [huangzhir] make sure job1 started before job2
e66ede214 [huangzhir] fix bug: SchedulerPoolSuite could report a false positive result

Lead-authored-by: huangzhir <306824224@qq.com>
Co-authored-by: fwang12 <fwang12@ebay.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
parent: 8d424ef435
commit: 2c55a1fdaf
```diff
@@ -21,6 +21,7 @@ import java.util.concurrent.Executors
+import scala.concurrent.duration.SECONDS
 import org.apache.spark.KyuubiSparkContextHelper
 import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}
 import org.scalatest.concurrent.PatienceConfiguration.Timeout
 import org.scalatest.time.SpanSugar.convertIntToGrainOfTime
@@ -101,6 +102,8 @@ class SchedulerPoolSuite extends WithSparkSQLEngine with HiveJDBCTestHelper {
       })
       threads.shutdown()
       threads.awaitTermination(20, SECONDS)
+      // make sure the SparkListener has received the finished events for job1 and job2.
+      KyuubiSparkContextHelper.waitListenerBus(spark)
       // job1 should be started before job2
       assert(job1StartTime < job2StartTime)
       // job2 minShare is 2(total resource) so that job1 should be allocated tasks after
```
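The reason the wait matters: Spark delivers `SparkListener` events on an asynchronous bus, so a test that asserts on listener-recorded state (here, the job start times) can run its assertions before the events have been delivered, failing intermittently. `KyuubiSparkContextHelper.waitListenerBus(spark)` drains that queue first. The following self-contained sketch is hypothetical (`ToyListenerBus` and `FlakyRaceDemo` are illustration names, not Kyuubi or Spark code) but it models the same race and the same drain-then-assert fix:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicLong

// Hypothetical miniature of Spark's asynchronous listener bus: events are
// handled on a background thread, so state observed by a test can lag the
// jobs that produced the events.
final class ToyListenerBus {
  private val delivery = Executors.newSingleThreadExecutor()

  // Queue an event handler for asynchronous delivery, like posting a
  // SparkListenerJobStart to the live listener bus.
  def post(handler: () => Unit): Unit = {
    delivery.submit(new Runnable {
      def run(): Unit = handler()
    })
    ()
  }

  // Counterpart of KyuubiSparkContextHelper.waitListenerBus: block until every
  // queued event has been handled before the test asserts on listener state.
  def waitUntilEmpty(): Unit = {
    delivery.shutdown()
    delivery.awaitTermination(20, TimeUnit.SECONDS)
    ()
  }
}

object FlakyRaceDemo {
  def main(args: Array[String]): Unit = {
    val bus = new ToyListenerBus
    val job1StartTime = new AtomicLong(0L)
    val job2StartTime = new AtomicLong(0L)

    // The listener records start times asynchronously, as a SparkListener does.
    bus.post(() => job1StartTime.set(1L))
    bus.post(() => job2StartTime.set(2L))

    // Without waitUntilEmpty() the assertion below would race the delivery
    // thread and could observe 0 for either timestamp -- the flaky failure mode.
    bus.waitUntilEmpty()
    assert(job1StartTime.get() < job2StartTime.get())
  }
}
```

The real helper lives in the `org.apache.spark` package so it can reach Spark's package-private listener bus; the toy above only reproduces the ordering guarantee that makes the assertion deterministic.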