### _Why are the changes needed?_

- to consolidate styles in markdown files, whether manually written or auto-generated
- apply markdown formatting rules with flexmark from [spotless-maven-plugin](https://github.com/diffplug/spotless/tree/main/plugin-maven#markdown) to *.md files in `/docs`
- use `flexmark` to format the markdown generated in `TestUtils` of the common module, used by `AllKyuubiConfiguration` and `KyuubiDefinedFunctionSuite`, in the same way as `FlexmarkFormatterFunc` of `spotless-maven-plugin`, with `COMMONMARK` as the `FORMATTER_EMULATION_PROFILE` (https://github.com/diffplug/spotless/blob/maven/2.30.0/lib/src/flexmark/java/com/diffplug/spotless/glue/markdown/FlexmarkFormatterFunc.java)
- use `flexmark` 0.62.2, the last version requiring only Java 8+ (checked from the pom file and the bytecode version)

```xml
<markdown>
  <includes>
    <include>docs/**/*.md</include>
  </includes>
  <flexmark></flexmark>
</markdown>
```

- changes applied to the markdown doc files:
  - no style changes or breakages in the docs built by `make html`
  - removed the leading blank line in licenses and comments to conform to the markdown style rules
  - tables regenerated by flexmark following [GitHub Flavored Markdown](https://help.github.com/articles/organizing-information-with-tables/) (https://github.com/vsch/flexmark-java/wiki/Extensions#tables)

### _How was this patch tested?_

- [x] regenerated the docs with `make html` successfully and checked that all markdown pages are available
- [x] regenerated `settings.md` and `functions.md` via `AllKyuubiConfiguration` and `KyuubiDefinedFunctionSuite`, passing both their own checks and the spotless check via `dev/reformat`
- [x] [ran tests](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making the pull request

Closes #4200 from bowenliang123/markdown-formatting.
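For context, the `<markdown>` step above sits inside the spotless-maven-plugin configuration in `pom.xml`. A minimal sketch following the plugin's README, using the 2.30.0 version referenced above (the exact placement in the project's pom is an assumption):

```xml
<plugin>
  <groupId>com.diffplug.spotless</groupId>
  <artifactId>spotless-maven-plugin</artifactId>
  <version>2.30.0</version>
  <configuration>
    <markdown>
      <includes>
        <include>docs/**/*.md</include>
      </includes>
      <!-- flexmark formatter; spotless runs it with COMMONMARK emulation -->
      <flexmark/>
    </markdown>
  </configuration>
</plugin>
```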
Closes #4200

1eeafce4 [liangbowen] revert minor changes in AllKyuubiConfiguration
4f892857 [liangbowen] use flexmark in markdown doc generation
8c978abd [liangbowen] changes on markdown files
a9190556 [liangbowen] apply markdown formatting rules with `spotless-maven-plugin` to markdown files in `/docs`

Authored-by: liangbowen <liangbowen@gf.com.cn>
Signed-off-by: liangbowen <liangbowen@gf.com.cn>
# The TTL Of Kyuubi Engines
For a multi-tenant cluster, overall resource utilization is a KPI that measures how effectively its resources are used against their availability or capacity. To improve the overall resource utilization of the cluster,

- at the cluster layer, we leverage capabilities, such as the Capacity Scheduler, of resource scheduling and management services, such as YARN and K8s;
- at the application layer, we should acquire and release resources according to the real workloads.
## The Big Contributors Of Resource Waste
- The time spent waiting for resources to be allocated, such as scheduling delay and start/stop cost.
  - A longer time-to-live (TTL) for allocated resources can significantly reduce such time costs within an application.
- The time allocated resources stay idle.
  - A shorter TTL for allocated resources lets all resources turn around rapidly across applications.
## TTL Types In Kyuubi Engines
- Engine TTL
  - The TTL of an engine describes how long the engine will be cached after all its sessions are disconnected.
- Executor TTL
  - The TTL of an executor describes how long the executor will be cached when no tasks come.
## Configurations

### Engine TTL
| Key | Default | Meaning | Type | Since |
|-----|---------|---------|------|-------|
| kyuubi.session.engine.check.interval | PT5M | The check interval for engine timeout | duration | 1.0.0 |
| kyuubi.session.engine.idle.timeout | PT30M | Engine timeout; the engine will self-terminate when it is not accessed for this duration. 0 or a negative value means not to self-terminate. | duration | 1.0.0 |
The above two configurations can be used together to set the TTL of engines. They are user-facing and can be set in JDBC URLs. Note that engines with the connection share level are terminated immediately when their connection is disconnected, so these configurations do not necessarily take effect in that case.
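As a quick illustration of the `duration` type, the values in the table are ISO-8601 duration strings, which `java.time.Duration` parses directly. The sketch below also shows appending the two settings to a connection string; `withEngineTtl` is a hypothetical helper and the URL syntax is only illustrative, since the real conf-list grammar is defined by the Hive JDBC driver:

```java
import java.time.Duration;

public class EngineTtlExample {

    // Hypothetical helper: appends the two Kyuubi session confs to a JDBC URL.
    // The separator characters are an assumption for illustration only.
    static String withEngineTtl(String baseUrl, Duration checkInterval, Duration idleTimeout) {
        return baseUrl
            + "?kyuubi.session.engine.check.interval=" + checkInterval
            + ";kyuubi.session.engine.idle.timeout=" + idleTimeout;
    }

    public static void main(String[] args) {
        // The table's defaults, parsed as ISO-8601 durations
        Duration check = Duration.parse("PT5M");   // 5-minute check interval
        Duration idle  = Duration.parse("PT30M");  // 30-minute idle timeout

        System.out.println(check.toMinutes());  // 5
        System.out.println(idle.toMinutes());   // 30
        System.out.println(withEngineTtl("jdbc:hive2://kyuubi-host:10009/default", check, idle));
    }
}
```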
### Executor TTL
Executor TTL is part of the functionality of Apache Spark's Dynamic Resource Allocation.
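Concretely, executor idle timeouts are governed by Spark's own dynamic allocation settings rather than Kyuubi-specific ones. A minimal `spark-defaults.conf` sketch; the values shown are illustrative, not recommendations:

```properties
# Enable Spark Dynamic Resource Allocation
spark.dynamicAllocation.enabled=true
# Release an executor after it has been idle for this long (the executor TTL)
spark.dynamicAllocation.executorIdleTimeout=60s
# Bounds on the executor pool
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.maxExecutors=20
# On Spark 3.0+, shuffle tracking allows DRA without an external shuffle service
spark.dynamicAllocation.shuffleTracking.enabled=true
```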