Compare commits

...

34 Commits

Author SHA1 Message Date
Kubernetes Prow Robot
bc4fd671bf
Merge pull request #2348 from yliaog/automated-release-of-32.0.1-upstream-release-32.0-1739569262
Some checks failed
Kubernetes Python Client - Validation / build (3.10) (push) Has been cancelled
Kubernetes Python Client - Validation / build (3.11) (push) Has been cancelled
Kubernetes Python Client - Validation / build (3.12) (push) Has been cancelled
Kubernetes Python Client - Validation / build (3.8) (push) Has been cancelled
Kubernetes Python Client - Validation / build (3.9, coverage) (push) Has been cancelled
Automated release of 32.0.1 upstream release 32.0 1739569262
2025-02-18 12:10:26 -08:00
yliao
79690bcfa4 Update the compatibility matrix and maintenance status 2025-02-14 21:43:52 +00:00
yliao
893d70aa5f generated client change 2025-02-14 21:42:23 +00:00
yliao
641f59a7e4 generated API change 2025-02-14 21:42:23 +00:00
yliao
7ea787c17b generated client change for custom_objects 2025-02-14 21:42:22 +00:00
yliao
49df237083 update version constants for 32.0.1 release 2025-02-14 21:42:22 +00:00
yliao
3c848c277b update changelog with release notes from master branch 2025-02-14 21:42:22 +00:00
Tomas Aschan
dcc27f964e Address review feedback 2025-02-14 21:42:22 +00:00
Tomas Aschan
c665cab8e4 fix: Extract value from ConfigNode before storing it 2025-02-14 21:42:22 +00:00
Tomas Aschan
4c7757b1a7 Tweak test to fail like the production code does 2025-02-14 21:42:22 +00:00
h-ema-r
ca32d9a66c Add introduction to Kubernetes patch types 2025-02-14 21:42:22 +00:00
Akhil Lawrence
a0d4580529 mark shell=False in ExecProvider for linux/darwin platforms 2025-02-14 21:42:22 +00:00
Rene Kschamer
e06dc4158e add test_parse_quantity 2025-02-14 21:42:22 +00:00
Rene Kschamer
0ea3c06a7a adding format_quantity tests 2025-02-14 21:42:22 +00:00
Rene Kschamer
ff49ce9a32 Adding utils.format_quantity 2025-02-14 21:42:22 +00:00
Kubernetes Prow Robot
8980f12ff5
Merge pull request #2331 from yliaog/automated-release-of-32.0.0-upstream-release-32.0-1737659562
Automated release of 32.0.0 upstream release 32.0 1737659562
2025-01-23 11:31:21 -08:00
yliao
8db4744ce4 updated compatibility matrix and maintenance status 2025-01-23 19:21:27 +00:00
yliao
b9091e3f84 generated client change 2025-01-23 19:14:15 +00:00
yliao
dbedb4d920 update version constants for 32.0.0 release 2025-01-23 19:14:15 +00:00
yliao
0d9f4f87fc update changelog with release notes from master branch 2025-01-23 19:14:15 +00:00
Pete
2af3cf360d Close the Python sockets when the Websocket closes
This allows the client to detect when the connection has been interrupted
2025-01-23 19:14:15 +00:00
Kubernetes Prow Robot
96f3758c18
Merge pull request #2323 from yliaog/automated-release-of-32.0.0b1-upstream-release-32.0-1737065239
Automated release of 32.0.0b1 upstream release 32.0 1737065239
2025-01-17 11:28:34 -08:00
yliao
fc8b2b6502 updated compatibility matrix and maintenance status 2025-01-16 22:12:07 +00:00
yliao
b42b60c688 generated client change 2025-01-16 22:11:03 +00:00
yliao
43b6175463 update changelog 2025-01-16 22:11:03 +00:00
yliao
ccb2f88b88 update version constants for 32.0.0b1 release 2025-01-16 22:11:03 +00:00
dependabot[bot]
1103634eaa Bump helm/kind-action from 1.11.0 to 1.12.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.11.0...v1.12.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-16 22:11:03 +00:00
Kubernetes Prow Robot
ebdd99c859
Merge pull request #2318 from yliaog/automated-release-of-32.0.0a1-upstream-release-32.0-1736800223
Automated release of 32.0.0a1 upstream release 32.0 1736800223
2025-01-14 09:14:34 -08:00
yliao
ac25d7ccda replaced python 3.7 version, it is no longer supported by ubuntu 24.04.
The version '3.7' with architecture 'x64' was not found for Ubuntu 24.04. The list of all available versions can be found here: https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json

fix
2025-01-14 01:35:24 +00:00
yliao
8b6125c4ee updated compatibility matrix and maintenance status. 2025-01-13 20:34:07 +00:00
yliao
d615a834cd generated client change 2025-01-13 20:30:58 +00:00
yliao
7cedb076c0 generated API change 2025-01-13 20:30:57 +00:00
yliao
412300b2e0 update changelog 2025-01-13 20:30:25 +00:00
yliao
e7d74047c1 update version constants for 32.0.0a1 release 2025-01-13 20:30:24 +00:00
32 changed files with 926 additions and 66 deletions

View File

@ -13,13 +13,13 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
python-version: ["3.8", "3.9", "3.10"]
steps:
- uses: actions/checkout@v4
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-master-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -19,7 +19,7 @@ jobs:
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-release-11.0-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -19,7 +19,7 @@ jobs:
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-release-12.0-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -19,7 +19,7 @@ jobs:
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-release-17.0-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -19,7 +19,7 @@ jobs:
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-release-18.0-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -19,7 +19,7 @@ jobs:
with:
submodules: true
- name: Create Kind Cluster
uses: helm/kind-action@v1.11.0
uses: helm/kind-action@v1.12.0
with:
cluster_name: kubernetes-python-e2e-release-26.0-${{ matrix.python-version }}
# The kind version to be used to spin the cluster up

View File

@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.10", "3.11"]
python-version: ["3.8", "3.10", "3.11", "3.12"]
include:
- python-version: "3.9"
use_coverage: 'coverage'

View File

@ -1,4 +1,38 @@
# v32.0.0+snapshot
# v32.0.1
Kubernetes API Version: v1.32.2
### Uncategorized
- Adds support for providing cluster information to the exec credential provider if requested. (#2303, @brendandburns)
- Remove py from test dependencies (#2288, @jelly)
### Bug or Regression
- Fix dynamic client watch of named resource (#2076, @bobh66)
- Fixed PortForward proxy to close local Python sockets when the WebSocket closes. (#2316, @anvilpete)
- Fixes bug that would fail authentication when using the exec-provider with a specific cluster selected (#2340, @tomasaschan)
### Feature
- Add utility functions kubernetes.utils.duration.parse_duration and kubernetes.utils.duration.format_duration to manage Gateway API Duration strings as specified by GEP-2257. (#2261, @kflynn)
- Added the ability to use the optional `apply` parameter for functions within the `utils.create_from_yaml` submodule. This allows these functions to optionally use the `DynamicClient.server_side_apply` function to apply yaml manifests. (#2252, @dcmcand)
- Adding `utils.format_quantity` to convert decimal numbers into a canonical Kubernetes quantity. (#2216, @rkschamer)
# v32.0.0
Kubernetes API Version: v1.32.1
### Bug or Regression
- Fixed PortForward proxy to close local Python sockets when the WebSocket closes. (#2316, @anvilpete)
# v32.0.0b1
Kubernetes API Version: v1.32.1
### API Change
- DRA API: the maximum number of pods which can use the same ResourceClaim is now 256 instead of 32. Beware that downgrading a cluster where this relaxed limit is in use to Kubernetes 1.32.0 is not supported because 1.32.0 would refuse to update ResourceClaims with more than 32 entries in the status.reservedFor field. ([kubernetes/kubernetes#129544](https://github.com/kubernetes/kubernetes/pull/129544), [@pohly](https://github.com/pohly)) [SIG API Machinery, Node and Testing]
- NONE ([kubernetes/kubernetes#129598](https://github.com/kubernetes/kubernetes/pull/129598), [@aravindhp](https://github.com/aravindhp)) [SIG API Machinery and Node]
# v32.0.0a1
Kubernetes API Version: v1.32.0

View File

@ -101,6 +101,7 @@ supported versions of Kubernetes clusters.
- [client 29.y.z](https://pypi.org/project/kubernetes/29.0.0/): Kubernetes 1.28 or below (+-), Kubernetes 1.29 (✓), Kubernetes 1.30 or above (+-)
- [client 30.y.z](https://pypi.org/project/kubernetes/30.1.0/): Kubernetes 1.29 or below (+-), Kubernetes 1.30 (✓), Kubernetes 1.31 or above (+-)
- [client 31.y.z](https://pypi.org/project/kubernetes/31.0.0/): Kubernetes 1.30 or below (+-), Kubernetes 1.31 (✓), Kubernetes 1.32 or above (+-)
- [client 32.y.z](https://pypi.org/project/kubernetes/32.0.1/): Kubernetes 1.31 or below (+-), Kubernetes 1.32 (✓), Kubernetes 1.33 or above (+-)
> See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.
@ -163,11 +164,13 @@ between client-python versions.
| 28.0 Alpha/Beta | Kubernetes main repo, 1.28 branch | ✗ |
| 28.0 | Kubernetes main repo, 1.28 branch | ✗ |
| 29.0 Alpha/Beta | Kubernetes main repo, 1.29 branch | ✗ |
| 29.0 | Kubernetes main repo, 1.29 branch | ✓ |
| 30.0 Alpha/Beta | Kubernetes main repo, 1.30 branch | ✗ |
| 30.0 | Kubernetes main repo, 1.30 branch | ✓ |
| 31.0 Alpha/Beta | Kubernetes main repo, 1.31 branch | ✗ |
| 31.0 | Kubernetes main repo, 1.31 branch | ✓ |
| 32.0 Alpha/Beta | Kubernetes main repo, 1.32 branch | ✗ |
| 32.0 | Kubernetes main repo, 1.32 branch | ✓ |
> See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.

315
devel/patch_types.md Normal file
View File

@ -0,0 +1,315 @@
# Introduction to Kubernetes Patch Types
In Kubernetes, patches are a way to make updates or changes to resources (like Pods, ConfigMaps, Deployments, etc.) without having to replace the entire resource. Patches allow you to modify specific parts of a resource while leaving the rest unchanged.
## Types of Kubernetes Patches
There are several types of patches that Kubernetes supports:
1. JSON Merge Patch (RFC 7386)
2. Strategic Merge Patch
3. JSON Patch (RFC 6902)
4. Apply Patch (Server-Side Apply)
## 1. JSON Merge Patch
- JSON Merge Patch is based on the concept of merging JSON objects. When you apply a patch, you only need to specify the changes you want to make. Kubernetes will take your partial update and merge it with the existing resource.
- This patch type is simple and works well when you need to update fields, such as changing a value or adding a new key.
### Example Scenario:
Imagine you have a Kubernetes Pod resource that looks like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx:1.14
  - name: redis
    image: redis:5
```
Now, you want to change the image of the nginx container from nginx:1.14 to nginx:1.16. Instead of sending the entire resource, you send only the fields you want to change. One caveat: JSON Merge Patch (RFC 7386) merges objects but replaces lists wholesale, so the patch must include every container you want to keep:
```json
{
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:1.16"
      },
      {
        "name": "redis",
        "image": "redis:5"
      }
    ]
  }
}
```
When you send this patch to Kubernetes:
- The `spec.containers` list is replaced with the list from the patch, so nginx gets the new image (nginx:1.16) and redis keeps its current one.
- Fields not mentioned in the patch (for example, `metadata`) are left unchanged.
### Example Code (Python):
```python
from kubernetes import client, config


def main():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    namespace = "default"
    name = "mypod"
    patch = {
        "spec": {
            "containers": [
                {"name": "nginx", "image": "nginx:1.16"},
                {"name": "redis", "image": "redis:5"},
            ]
        }
    }
    v1.patch_namespaced_pod(
        name=name,
        namespace=namespace,
        body=patch,
        content_type="application/merge-patch+json",
    )


if __name__ == "__main__":
    main()
```
### After the JSON Merge Patch
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx:1.16 # Updated image version
  - name: redis
    image: redis:5    # Kept because it was included in the patch
```
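The merge semantics themselves are defined by RFC 7386 and can be sketched in a few lines of pure Python (a toy illustration for intuition, not the client's actual code path): objects merge recursively, a `null` value deletes a key, and any other value, including a list, replaces the existing one.

```python
import copy


def json_merge_patch(target, patch):
    """Toy RFC 7386 merge: dicts merge recursively, None deletes, others replace."""
    if not isinstance(patch, dict):
        return copy.deepcopy(patch)
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in a merge patch means "delete this key"
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result


doc = {"metadata": {"name": "mypod"},
       "spec": {"containers": [{"name": "nginx", "image": "nginx:1.14"}]}}
patch = {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.16"}]}}
merged = json_merge_patch(doc, patch)
# Lists are replaced wholesale, so spec.containers now holds only the patched entries.
```

Because lists are replaced rather than merged, JSON Merge Patch cannot update one element of a list in isolation; that is exactly the gap Strategic Merge Patch (next section) fills.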
## 2. Strategic Merge Patch
Strategic Merge Patch is a Kubernetes-specific patch format that is aware of the structure and semantics of the resource being modified. It is "strategic" because it uses metadata from the API type definitions, rather than treating the object as plain JSON, to decide how each field should be merged.
- The patch itself is a JSON or YAML object containing the fields to be updated.
- **Adds new fields:** You can add new fields or modify existing ones without affecting the rest of the object.
- **Handles lists intelligently:** For lists that declare a merge key (such as `spec.containers`, keyed on `name`), entries are merged element-by-element instead of being replaced wholesale.
### Example of Strategic Merge Patch:
Using the same Pod as the target resource:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx:1.14
  - name: redis
    image: redis:5
```
Strategic Merge Patch:
```yaml
spec:
  containers:
  - name: nginx
    image: nginx:1.16 # Only the container being changed
```
Result after Strategic Merge Patch:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx:1.16 # Merged by the `name` key
  - name: redis
    image: redis:5    # Unchanged, even though it was not in the patch
```
Because the Pod API declares `name` as the merge key for `containers`, the patch entry is merged into the matching list element instead of replacing the whole list. This is the key difference from JSON Merge Patch.
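For lists of objects whose API type declares a merge key (for example, a Pod's `spec.containers` is keyed on `name`), the list handling can be sketched as a keyed merge. This is a toy illustration; the real implementation in Kubernetes' apimachinery also honors directives such as `$patch: delete` and reads the merge key from the type definitions.

```python
def merge_list_by_key(current, patch, key="name"):
    """Merge two lists of dicts element-by-element, matching entries on `key`."""
    merged = {item[key]: dict(item) for item in current}
    for item in patch:
        merged.setdefault(item[key], {}).update(item)  # update the match, or append a new entry
    return list(merged.values())


current = [{"name": "nginx", "image": "nginx:1.14"},
           {"name": "redis", "image": "redis:5"}]
patch = [{"name": "nginx", "image": "nginx:1.16"}]
print(merge_list_by_key(current, patch))
# [{'name': 'nginx', 'image': 'nginx:1.16'}, {'name': 'redis', 'image': 'redis:5'}]
```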
## 3. JSON Patch
- JSON Patch (RFC 6902) is a standard format for applying updates to a JSON document. Instead of sending a new or merged version of the object, a JSON Patch describes how to modify the object step by step.
- **Operation-based:** A JSON Patch is an array of operations that describe modifications to a target JSON object.
- Ideal when you need to perform multiple, specific operations on resource fields (e.g., replacing a value, adding new fields, or deleting specific values).
### Patch Structure:
A JSON Patch is an array of operations. Each operation is an object with:
- `op`: The operation type (`add`, `remove`, `replace`, `move`, `copy`, or `test`).
- `path`: A JSON Pointer string (defined in RFC 6901) that specifies the location in the document to apply the operation.
- `value`: (Optional) The new value to apply (used with `add`, `replace`, and `test`).
- `from`: (Optional) The source path for `move` and `copy`.
### Supported Operations for JSON Patch
#### 1. **add**
- Adds a value at the specified path.
- If the path points to an array index, the value is inserted at that position; if it points to an existing object member, the member's value is replaced.
Example:
```json
{ "op": "add", "path": "/a/b/c", "value": "foo" }
```
#### 2. **remove**
- Removes the value at the specified path.
Example:
```json
{ "op": "remove", "path": "/a/b/c" }
```
#### 3. **replace**
- Replaces the value at the specified path.
- Functionally similar to remove followed by add.
Example:
```json
{ "op": "replace", "path": "/a/b/c", "value": "bar" }
```
#### 4. **move**
- Moves a value from one path to another.
Example:
```json
{ "op": "move", "from": "/a/b/c", "path": "/x/y/z" }
```
#### 5. **copy**
- Copies a value from one path to another.
Example:
```json
{ "op": "copy", "from": "/a/b/c", "path": "/x/y/z" }
```
#### 6. **test**
- Tests whether a value at a specified path matches a given value.
- Because a JSON Patch is applied atomically, a failing `test` aborts the entire patch, which makes it useful for validation in transactional updates.
Example:
```json
{ "op": "test", "path": "/a/b/c", "value": "foo" }
```
---
#### Example: Applying a JSON Patch
##### Target JSON Document:
```json
{
"a": {
"b": {
"c": "foo"
}
},
"x": {
"y": "bar"
}
}
```
##### JSON Patch:
```json
[
{ "op": "replace", "path": "/a/b/c", "value": "baz" },
{ "op": "add", "path": "/a/d", "value": ["new", "value"] },
{ "op": "remove", "path": "/x/y" }
]
```
##### Result:
```json
{
"a": {
"b": {
"c": "baz"
},
"d": ["new", "value"]
},
"x": {}
}
```
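The result above can be reproduced with a small pure-Python applier for the three operations used here (a toy sketch; against a real cluster you would send the operations list with `content_type="application/json-patch+json"`, or use a full implementation such as the `jsonpatch` library):

```python
import copy


def resolve(doc, pointer):
    """Walk a JSON Pointer (RFC 6901), returning (parent container, last token)."""
    tokens = [t.replace("~1", "/").replace("~0", "~") for t in pointer.lstrip("/").split("/")]
    parent = doc
    for token in tokens[:-1]:
        parent = parent[int(token)] if isinstance(parent, list) else parent[token]
    return parent, tokens[-1]


def apply_json_patch(doc, operations):
    """Apply add/remove/replace operations to a nested structure (toy sketch)."""
    doc = copy.deepcopy(doc)  # JSON Patch is atomic; work on a copy
    for op in operations:
        parent, key = resolve(doc, op["path"])
        if op["op"] == "remove":
            del parent[key]
        elif op["op"] in ("add", "replace"):
            if isinstance(parent, list):
                idx = len(parent) if key == "-" else int(key)
                if op["op"] == "add":
                    parent.insert(idx, op["value"])
                else:
                    parent[idx] = op["value"]
            else:
                parent[key] = op["value"]
        else:
            raise NotImplementedError(op["op"])
    return doc


target = {"a": {"b": {"c": "foo"}}, "x": {"y": "bar"}}
patch = [
    {"op": "replace", "path": "/a/b/c", "value": "baz"},
    {"op": "add", "path": "/a/d", "value": ["new", "value"]},
    {"op": "remove", "path": "/x/y"},
]
result = apply_json_patch(target, patch)
# result == {"a": {"b": {"c": "baz"}, "d": ["new", "value"]}, "x": {}}
```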
## 4. Apply Patch (Server-Side Apply)
Server-Side Apply is a feature in Kubernetes that allows you to declaratively update resources by specifying their desired state. It provides an intuitive and robust way to manage resources without having to manually modify every field. It tracks which fields belong to which manager, which helps prevent conflicts when multiple clients (such as different controllers or users) update the same resource.
Key Features:
- **Declarative Management:** You provide the desired final state, and Kubernetes ensures the actual state matches it.
- **Conflict Detection:** Changes from different clients don't silently overwrite each other.
- **Field Ownership:** Kubernetes tracks which client or manager owns which fields of a resource (recorded in `metadata.managedFields`).
##### Example Scenario:
You have a ConfigMap and want to update certain keys but leave others unchanged.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: default
data:
  key1: value1
  key2: value2
```
**Goal:**
- Update `key2` to `new_value2`.
- Add a new key `key3` with value `value3`.
- Leave `key1` unchanged.
##### Apply Patch YAML (Desired State):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: default
data:
  key2: new_value2 # Update existing key
  key3: value3     # Add new key
```
##### Resulting ConfigMap (after apply):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: default
data:
  key1: value1     # Remains unchanged
  key2: new_value2 # Updated value
  key3: value3     # New key added
```
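Conflict detection and field ownership can be illustrated with a toy model over flat ConfigMap-style data (the real mechanism is implemented server-side and records structured field paths in `metadata.managedFields`; the manager names here are hypothetical):

```python
def server_side_apply(live, owners, patch, manager, force=False):
    """Toy SSA over a flat dict: apply `patch` and track which manager owns each field."""
    for field, value in patch.items():
        owner = owners.get(field)
        # A conflict arises when another manager owns the field and the applied value differs.
        if owner not in (None, manager) and live.get(field) != value and not force:
            raise RuntimeError(f"conflict: field {field!r} is owned by {owner!r}")
    return {**live, **patch}, {**owners, **{f: manager for f in patch}}


data = {"key1": "value1", "key2": "value2"}
owners = {"key1": "bootstrap", "key2": "bootstrap"}

try:  # "app-config" does not own key2 yet, so this apply is rejected
    server_side_apply(data, owners, {"key2": "new_value2"}, manager="app-config")
except RuntimeError as err:
    print(err)  # conflict: field 'key2' is owned by 'bootstrap'

# force=True takes ownership, mirroring `kubectl apply --server-side --force-conflicts`
data, owners = server_side_apply(data, owners,
                                 {"key2": "new_value2", "key3": "value3"},
                                 manager="app-config", force=True)
# data now matches the desired state above; key1 is still owned by "bootstrap",
# while key2 and key3 are owned by "app-config".
```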

315
doc/patch_types.md Normal file
View File

@ -0,0 +1,315 @@
(identical content to devel/patch_types.md above)

View File

@ -1 +1 @@
5f773c685cb5e7b97c1b3be4a7cff387a8077a4789c738dac715ba91b1c50eda
b8cfb7a44bc989e127fbe1964c22b7da75c915c0e43925c6ac5c3592254696c5

View File

@ -4,7 +4,7 @@ No description provided (generated by Openapi Generator https://github.com/opena
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: release-1.32
- Package version: 32.0.0+snapshot
- Package version: 32.0.1
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
## Requirements.
@ -560,7 +560,7 @@ Class | Method | HTTP request | Description
*CustomObjectsApi* | [**get_namespaced_custom_object_scale**](docs/CustomObjectsApi.md#get_namespaced_custom_object_scale) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}/scale |
*CustomObjectsApi* | [**get_namespaced_custom_object_status**](docs/CustomObjectsApi.md#get_namespaced_custom_object_status) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}/status |
*CustomObjectsApi* | [**list_cluster_custom_object**](docs/CustomObjectsApi.md#list_cluster_custom_object) | **GET** /apis/{group}/{version}/{plural} |
*CustomObjectsApi* | [**list_custom_object_for_all_namespaces**](docs/CustomObjectsApi.md#list_custom_object_for_all_namespaces) | **GET** /apis/{group}/{version}/{plural}# |
*CustomObjectsApi* | [**list_custom_object_for_all_namespaces**](docs/CustomObjectsApi.md#list_custom_object_for_all_namespaces) | **GET** /apis/{group}/{version}/{resource_plural} |
*CustomObjectsApi* | [**list_namespaced_custom_object**](docs/CustomObjectsApi.md#list_namespaced_custom_object) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural} |
*CustomObjectsApi* | [**patch_cluster_custom_object**](docs/CustomObjectsApi.md#patch_cluster_custom_object) | **PATCH** /apis/{group}/{version}/{plural}/{name} |
*CustomObjectsApi* | [**patch_cluster_custom_object_scale**](docs/CustomObjectsApi.md#patch_cluster_custom_object_scale) | **PATCH** /apis/{group}/{version}/{plural}/{name}/scale |

View File

@ -14,7 +14,7 @@
__project__ = 'kubernetes'
# The version is auto-updated. Please do not edit.
__version__ = "32.0.0+snapshot"
__version__ = "32.0.1"
from . import client
from . import config

View File

@ -58,6 +58,15 @@ class ExecProvider(object):
else:
self.cluster = None
self.cwd = cwd or None
@property
def shell(self):
# for windows systems `shell` should be `True`
# for other systems like linux or darwin `shell` should be `False`
# references:
# https://github.com/kubernetes-client/python/pull/2289
# https://docs.python.org/3/library/sys.html#sys.platform
return sys.platform in ("win32", "cygwin")
def run(self, previous_response=None):
is_interactive = hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()
@ -71,7 +80,7 @@ class ExecProvider(object):
if previous_response:
kubernetes_exec_info['spec']['response'] = previous_response
if self.cluster:
kubernetes_exec_info['spec']['cluster'] = self.cluster
kubernetes_exec_info['spec']['cluster'] = self.cluster.value
self.env['KUBERNETES_EXEC_INFO'] = json.dumps(kubernetes_exec_info)
process = subprocess.Popen(
@ -82,7 +91,7 @@ class ExecProvider(object):
cwd=self.cwd,
env=self.env,
universal_newlines=True,
shell=True)
shell=self.shell)
(stdout, stderr) = process.communicate()
exit_code = process.wait()
if exit_code != 0:

View File

@ -175,7 +175,7 @@ class ExecProviderTest(unittest.TestCase):
instance = mock.return_value
instance.wait.return_value = 0
instance.communicate.return_value = (self.output_ok, '')
ep = ExecProvider(self.input_with_cluster, None, {'server': 'name.company.com'})
ep = ExecProvider(self.input_with_cluster, None, ConfigNode("cluster", {'server': 'name.company.com'}))
result = ep.run()
self.assertTrue(isinstance(result, dict))
self.assertTrue('token' in result)

View File

@ -30,7 +30,7 @@ import yaml
from six.moves.urllib.parse import urlencode, urlparse, urlunparse
from six import StringIO, BytesIO
from websocket import WebSocket, ABNF, enableTrace
from websocket import WebSocket, ABNF, enableTrace, WebSocketConnectionClosedException
from base64 import urlsafe_b64decode
from requests.utils import should_bypass_proxies
@ -379,7 +379,12 @@ class PortForward:
if sock == self.websocket:
pending = True
while pending:
opcode, frame = self.websocket.recv_data_frame(True)
try:
opcode, frame = self.websocket.recv_data_frame(True)
except WebSocketConnectionClosedException:
for port in self.local_ports.values():
port.python.close()
return
if opcode == ABNF.OPCODE_BINARY:
if not frame.data:
raise RuntimeError("Unexpected frame data size")

View File

@ -14,7 +14,7 @@
from __future__ import absolute_import
__version__ = "32.0.0+snapshot"
__version__ = "32.0.1"
# import apis into sdk package
from kubernetes.client.api.well_known_api import WellKnownApi

View File

@ -2234,19 +2234,19 @@ class CustomObjectsApi(object):
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def list_custom_object_for_all_namespaces(self, group, version, plural, **kwargs): # noqa: E501
def list_custom_object_for_all_namespaces(self, group, version, resource_plural, **kwargs): # noqa: E501
"""list_custom_object_for_all_namespaces # noqa: E501
list or watch namespace scoped custom objects # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_custom_object_for_all_namespaces(group, version, plural, async_req=True)
>>> thread = api.list_custom_object_for_all_namespaces(group, version, resource_plural, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group: The custom resource's group name (required)
:param str version: The custom resource's version (required)
:param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
:param str resource_plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
:param str pretty: If 'true', then the output is pretty printed.
:param bool allow_watch_bookmarks: allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
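The `_continue` semantics described in the docstring above boil down to a simple client loop: pass the token from each response back to the server until no token is returned. A minimal sketch (hypothetical helper, not part of this client; `list_fn` stands in for any list method returning a dict-shaped response):

```python
# Sketch of continue-token pagination: keep requesting pages, feeding the
# server's continue token back in, until the server omits the token.
def list_all(list_fn, **kwargs):
    token = None
    items = []
    while True:
        resp = list_fn(limit=500, _continue=token, **kwargs)
        items.extend(resp["items"])
        # the server returns a continue token while more results remain
        token = resp["metadata"].get("continue")
        if not token:
            return items
```

Per the docstring, a 410 ResourceExpired on a stale token means the client must either restart the list or resume from the token carried in the 410 error; this sketch omits that retry path for brevity.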
@@ -2269,21 +2269,21 @@ class CustomObjectsApi(object):
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.list_custom_object_for_all_namespaces_with_http_info(group, version, plural, **kwargs) # noqa: E501
return self.list_custom_object_for_all_namespaces_with_http_info(group, version, resource_plural, **kwargs) # noqa: E501
def list_custom_object_for_all_namespaces_with_http_info(self, group, version, plural, **kwargs): # noqa: E501
def list_custom_object_for_all_namespaces_with_http_info(self, group, version, resource_plural, **kwargs): # noqa: E501
"""list_custom_object_for_all_namespaces # noqa: E501
list or watch namespace scoped custom objects # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_custom_object_for_all_namespaces_with_http_info(group, version, plural, async_req=True)
>>> thread = api.list_custom_object_for_all_namespaces_with_http_info(group, version, resource_plural, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str group: The custom resource's group name (required)
:param str version: The custom resource's version (required)
:param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
:param str resource_plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
:param str pretty: If 'true', then the output is pretty printed.
:param bool allow_watch_bookmarks: allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored.
:param str _continue: The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
@@ -2313,7 +2313,7 @@ class CustomObjectsApi(object):
all_params = [
'group',
'version',
'plural',
'resource_plural',
'pretty',
'allow_watch_bookmarks',
'_continue',
@@ -2350,10 +2350,10 @@ class CustomObjectsApi(object):
if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501
local_var_params['version'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `version` when calling `list_custom_object_for_all_namespaces`") # noqa: E501
# verify the required parameter 'plural' is set
if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501
local_var_params['plural'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `plural` when calling `list_custom_object_for_all_namespaces`") # noqa: E501
# verify the required parameter 'resource_plural' is set
if self.api_client.client_side_validation and ('resource_plural' not in local_var_params or # noqa: E501
local_var_params['resource_plural'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `resource_plural` when calling `list_custom_object_for_all_namespaces`") # noqa: E501
collection_formats = {}
@@ -2362,8 +2362,8 @@ class CustomObjectsApi(object):
path_params['group'] = local_var_params['group'] # noqa: E501
if 'version' in local_var_params:
path_params['version'] = local_var_params['version'] # noqa: E501
if 'plural' in local_var_params:
path_params['plural'] = local_var_params['plural'] # noqa: E501
if 'resource_plural' in local_var_params:
path_params['resource_plural'] = local_var_params['resource_plural'] # noqa: E501
query_params = []
if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501
@@ -2401,7 +2401,7 @@ class CustomObjectsApi(object):
auth_settings = ['BearerToken'] # noqa: E501
return self.api_client.call_api(
'/apis/{group}/{version}/{plural}#', 'GET',
'/apis/{group}/{version}/{resource_plural}', 'GET',
path_params,
query_params,
header_params,

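The caller-visible effect of this rename is the third parameter's name. A stand-in sketch with the new signature (the group/version/plural values below are made-up examples; no cluster is contacted and the real method performs the GET itself):

```python
# Stand-in mirroring the renamed signature; the real method issues a GET
# against /apis/{group}/{version}/{resource_plural}. Example values are
# hypothetical, not taken from this diff.
def list_custom_object_for_all_namespaces(group, version, resource_plural, **kwargs):
    return "/apis/{}/{}/{}".format(group, version, resource_plural)

# Positional callers are unaffected by the rename...
path = list_custom_object_for_all_namespaces("stable.example.com", "v1", "crontabs")

# ...but keyword callers must switch from plural= to resource_plural=.
path = list_custom_object_for_all_namespaces(
    group="stable.example.com", version="v1", resource_plural="crontabs")
```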

@@ -78,7 +78,7 @@ class ApiClient(object):
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/32.0.0+snapshot/python'
self.user_agent = 'OpenAPI-Generator/32.0.1/python'
self.client_side_validation = configuration.client_side_validation
def __enter__(self):


@@ -354,7 +354,7 @@ class Configuration(object):
"OS: {env}\n"\
"Python Version: {pyversion}\n"\
"Version of the API: release-1.32\n"\
"SDK Package Version: 32.0.0+snapshot".\
"SDK Package Version: 32.0.1".\
format(env=sys.platform, pyversion=sys.version)
def get_host_settings(self):


@@ -110,7 +110,7 @@ class V1alpha3ResourceClaimStatus(object):
def reserved_for(self):
"""Gets the reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501
:return: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
:rtype: list[V1alpha3ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ class V1alpha3ResourceClaimStatus(object):
def reserved_for(self, reserved_for):
"""Sets the reserved_for of this V1alpha3ResourceClaimStatus.
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501
:param reserved_for: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
:type: list[V1alpha3ResourceClaimConsumerReference]

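The "only the update that reaches the API server first gets stored" behavior described in the reservedFor docstring is ordinary optimistic concurrency on resourceVersion. A toy sketch (an in-memory stand-in, not client or apiserver code) of two schedulers racing to reserve the same claim:

```python
# Toy model of first-writer-wins: a write only succeeds if it was made
# against the latest resourceVersion; the loser re-reads and retries.
class Conflict(Exception):
    pass

class FakeClaimStore:
    def __init__(self):
        self.claim = {"resourceVersion": 1, "reservedFor": []}

    def read(self):
        return dict(self.claim, reservedFor=list(self.claim["reservedFor"]))

    def write(self, updated):
        if updated["resourceVersion"] != self.claim["resourceVersion"]:
            raise Conflict()  # another writer got there first
        updated["resourceVersion"] += 1
        self.claim = updated

store = FakeClaimStore()
a = store.read()
b = store.read()                      # two schedulers read concurrently
a["reservedFor"].append("pod-a")
store.write(a)                        # first update is stored
b["reservedFor"].append("pod-b")
try:
    store.write(b)                    # second update conflicts (like a 409)
except Conflict:
    retry = store.read()              # loser re-reads the latest claim
    retry["reservedFor"].append("pod-b")
    store.write(retry)                # and retries against the new version
```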

@@ -110,7 +110,7 @@ class V1beta1ResourceClaimStatus(object):
def reserved_for(self):
"""Gets the reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501
:return: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
:rtype: list[V1beta1ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ class V1beta1ResourceClaimStatus(object):
def reserved_for(self, reserved_for):
"""Sets the reserved_for of this V1beta1ResourceClaimStatus.
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501
:param reserved_for: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
:type: list[V1beta1ResourceClaimConsumerReference]


@@ -18,7 +18,7 @@ Method | HTTP request | Description
[**get_namespaced_custom_object_scale**](CustomObjectsApi.md#get_namespaced_custom_object_scale) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}/scale |
[**get_namespaced_custom_object_status**](CustomObjectsApi.md#get_namespaced_custom_object_status) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}/status |
[**list_cluster_custom_object**](CustomObjectsApi.md#list_cluster_custom_object) | **GET** /apis/{group}/{version}/{plural} |
[**list_custom_object_for_all_namespaces**](CustomObjectsApi.md#list_custom_object_for_all_namespaces) | **GET** /apis/{group}/{version}/{plural}# |
[**list_custom_object_for_all_namespaces**](CustomObjectsApi.md#list_custom_object_for_all_namespaces) | **GET** /apis/{group}/{version}/{resource_plural} |
[**list_namespaced_custom_object**](CustomObjectsApi.md#list_namespaced_custom_object) | **GET** /apis/{group}/{version}/namespaces/{namespace}/{plural} |
[**patch_cluster_custom_object**](CustomObjectsApi.md#patch_cluster_custom_object) | **PATCH** /apis/{group}/{version}/{plural}/{name} |
[**patch_cluster_custom_object_scale**](CustomObjectsApi.md#patch_cluster_custom_object_scale) | **PATCH** /apis/{group}/{version}/{plural}/{name}/scale |
@@ -1117,7 +1117,7 @@ Name | Type | Description | Notes
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **list_custom_object_for_all_namespaces**
> object list_custom_object_for_all_namespaces(group, version, plural, pretty=pretty, allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, resource_version=resource_version, resource_version_match=resource_version_match, timeout_seconds=timeout_seconds, watch=watch)
> object list_custom_object_for_all_namespaces(group, version, resource_plural, pretty=pretty, allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, resource_version=resource_version, resource_version_match=resource_version_match, timeout_seconds=timeout_seconds, watch=watch)
@@ -1147,7 +1147,7 @@ with kubernetes.client.ApiClient(configuration) as api_client:
api_instance = kubernetes.client.CustomObjectsApi(api_client)
group = 'group_example' # str | The custom resource's group name
version = 'version_example' # str | The custom resource's version
plural = 'plural_example' # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
resource_plural = 'resource_plural_example' # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
allow_watch_bookmarks = True # bool | allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored. (optional)
_continue = '_continue_example' # str | The continue option should be set when retrieving more results from the server. Since this value is server defined, kubernetes.clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the kubernetes.client needs a consistent list, it must restart their list without the continue field. Otherwise, the kubernetes.client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. (optional)
@@ -1160,7 +1160,7 @@ timeout_seconds = 56 # int | Timeout for the list/watch call. This limits the du
watch = True # bool | Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. (optional)
try:
api_response = api_instance.list_custom_object_for_all_namespaces(group, version, plural, pretty=pretty, allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, resource_version=resource_version, resource_version_match=resource_version_match, timeout_seconds=timeout_seconds, watch=watch)
api_response = api_instance.list_custom_object_for_all_namespaces(group, version, resource_plural, pretty=pretty, allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, resource_version=resource_version, resource_version_match=resource_version_match, timeout_seconds=timeout_seconds, watch=watch)
pprint(api_response)
except ApiException as e:
print("Exception when calling CustomObjectsApi->list_custom_object_for_all_namespaces: %s\n" % e)
@@ -1172,7 +1172,7 @@ Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**group** | **str**| The custom resource&#39;s group name |
**version** | **str**| The custom resource&#39;s version |
**plural** | **str**| The custom resource&#39;s plural name. For TPRs this would be lowercase plural kind. |
**resource_plural** | **str**| The custom resource&#39;s plural name. For TPRs this would be lowercase plural kind. |
**pretty** | **str**| If &#39;true&#39;, then the output is pretty printed. | [optional]
**allow_watch_bookmarks** | **bool**| allowWatchBookmarks requests watch events with type \&quot;BOOKMARK\&quot;. Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server&#39;s discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored. | [optional]
**_continue** | **str**| The continue option should be set when retrieving more results from the server. Since this value is server defined, kubernetes.clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the kubernetes.client needs a consistent list, it must restart their list without the continue field. Otherwise, the kubernetes.client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \&quot;next key\&quot;. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. | [optional]


@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**allocation** | [**V1alpha3AllocationResult**](V1alpha3AllocationResult.md) | | [optional]
**devices** | [**list[V1alpha3AllocatedDeviceStatus]**](V1alpha3AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**allocation** | [**V1beta1AllocationResult**](V1beta1AllocationResult.md) | | [optional]
**devices** | [**list[V1beta1AllocatedDeviceStatus]**](V1beta1AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -13,13 +13,15 @@
# under the License.
import unittest
from decimal import Decimal
from os import path
import yaml
from kubernetes import utils, client
from kubernetes import client, utils
from kubernetes.client.rest import ApiException
from kubernetes.e2e_test import base
from kubernetes.utils import quantity
class TestUtils(unittest.TestCase):
@@ -605,3 +607,113 @@ class TestUtils(unittest.TestCase):
name="mock-pod-1", namespace=self.test_namespace, body={})
app_api.delete_namespaced_deployment(
name="mock", namespace=self.test_namespace, body={})
class TestUtilsUnitTests(unittest.TestCase):
def test_parse_quantity(self):
# == trivial returns ==
self.assertEqual(quantity.parse_quantity(Decimal(1)), Decimal(1))
self.assertEqual(quantity.parse_quantity(float(1)), Decimal(1))
self.assertEqual(quantity.parse_quantity(1), Decimal(1))
# == exceptions ==
self.assertRaises(
ValueError, lambda: quantity.parse_quantity("1000kb")
)
self.assertRaises(
ValueError, lambda: quantity.parse_quantity("1000ki")
)
self.assertRaises(ValueError, lambda: quantity.parse_quantity("1000foo"))
self.assertRaises(ValueError, lambda: quantity.parse_quantity("foo"))
# == no suffix ==
self.assertEqual(quantity.parse_quantity("1000"), Decimal(1000))
# == base 1024 ==
self.assertEqual(quantity.parse_quantity("1Ki"), Decimal(1024))
self.assertEqual(quantity.parse_quantity("1Mi"), Decimal(1024**2))
self.assertEqual(quantity.parse_quantity("1Gi"), Decimal(1024**3))
self.assertEqual(quantity.parse_quantity("1Ti"), Decimal(1024**4))
self.assertEqual(quantity.parse_quantity("1Pi"), Decimal(1024**5))
self.assertEqual(quantity.parse_quantity("1Ei"), Decimal(1024**6))
self.assertEqual(quantity.parse_quantity("1024Ki"), Decimal(1024**2))
self.assertEqual(quantity.parse_quantity("0.5Ki"), Decimal(512))
# == base 1000 ==
self.assertAlmostEqual(quantity.parse_quantity("1n"), Decimal(0.000_000_001))
self.assertAlmostEqual(quantity.parse_quantity("1u"), Decimal(0.000_001))
self.assertAlmostEqual(quantity.parse_quantity("1m"), Decimal(0.001))
self.assertEqual(quantity.parse_quantity("1k"), Decimal(1_000))
self.assertEqual(quantity.parse_quantity("1M"), Decimal(1_000_000))
self.assertEqual(quantity.parse_quantity("1G"), Decimal(1_000_000_000))
self.assertEqual(quantity.parse_quantity("1T"), Decimal(1_000_000_000_000))
self.assertEqual(quantity.parse_quantity("1P"), Decimal(1_000_000_000_000_000))
self.assertEqual(
quantity.parse_quantity("1E"), Decimal(1_000_000_000_000_000_000))
self.assertEqual(quantity.parse_quantity("1000k"), Decimal(1_000_000))
self.assertEqual(quantity.parse_quantity("500k"), Decimal(500_000))
def test_format_quantity(self):
"""Unit test for quantity.format_quantity: for each supported SI suffix,
the function should return the expected string."""
# == unknown suffixes ==
self.assertRaises(
ValueError, lambda: quantity.format_quantity(Decimal(1_000), "kb")
)
self.assertRaises(
ValueError, lambda: quantity.format_quantity(Decimal(1_000), "ki")
)
self.assertRaises(
ValueError, lambda: quantity.format_quantity(Decimal(1_000), "foo")
)
# == no suffix ==
self.assertEqual(quantity.format_quantity(Decimal(1_000), ""), "1000")
self.assertEqual(quantity.format_quantity(Decimal(1_000), None), "1000")
# == base 1024 ==
self.assertEqual(quantity.format_quantity(Decimal(1024), "Ki"), "1Ki")
self.assertEqual(quantity.format_quantity(Decimal(1024**2), "Mi"), "1Mi")
self.assertEqual(quantity.format_quantity(Decimal(1024**3), "Gi"), "1Gi")
self.assertEqual(quantity.format_quantity(Decimal(1024**4), "Ti"), "1Ti")
self.assertEqual(quantity.format_quantity(Decimal(1024**5), "Pi"), "1Pi")
self.assertEqual(quantity.format_quantity(Decimal(1024**6), "Ei"), "1Ei")
self.assertEqual(quantity.format_quantity(Decimal(1024**2), "Ki"), "1024Ki")
self.assertEqual(quantity.format_quantity(Decimal((1024**3) / 2), "Gi"), "0.5Gi")
# Decimal((1024**3)/3) is 0.3333333333333333148296162562Gi; expecting it
# to be quantized to 0.3Gi
self.assertEqual(
quantity.format_quantity(
Decimal(
(1024**3) / 3),
"Gi",
quantize=Decimal(.5)),
"0.3Gi")
# == base 1000 ==
self.assertEqual(quantity.format_quantity(Decimal(0.000_000_001), "n"), "1n")
self.assertEqual(quantity.format_quantity(Decimal(0.000_001), "u"), "1u")
self.assertEqual(quantity.format_quantity(Decimal(0.001), "m"), "1m")
self.assertEqual(quantity.format_quantity(Decimal(1_000), "k"), "1k")
self.assertEqual(quantity.format_quantity(Decimal(1_000_000), "M"), "1M")
self.assertEqual(quantity.format_quantity(Decimal(1_000_000_000), "G"), "1G")
self.assertEqual(
quantity.format_quantity(Decimal(1_000_000_000_000), "T"), "1T"
)
self.assertEqual(
quantity.format_quantity(Decimal(1_000_000_000_000_000), "P"), "1P"
)
self.assertEqual(
quantity.format_quantity(Decimal(1_000_000_000_000_000_000), "E"), "1E"
)
self.assertEqual(quantity.format_quantity(Decimal(1_000_000), "k"), "1000k")
# Decimal(1_000_000/3) is 333.3333333333333139307796955k; expecting it
# to be quantized to 333k
self.assertEqual(
quantity.format_quantity(
Decimal(1_000_000 / 3), "k", quantize=Decimal(1000)
),
"333k",
)


@@ -15123,7 +15123,7 @@
"x-kubernetes-list-type": "map"
},
"reservedFor": {
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 32 such reservations. This may get increased in the future, but not reduced.",
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 256 such reservations. This may get increased in the future, but not reduced.",
"items": {
"$ref": "#/definitions/io.k8s.api.resource.v1alpha3.ResourceClaimConsumerReference"
},
@@ -15956,7 +15956,7 @@
"x-kubernetes-list-type": "map"
},
"reservedFor": {
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 32 such reservations. This may get increased in the future, but not reduced.",
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 256 such reservations. This may get increased in the future, but not reduced.",
"items": {
"$ref": "#/definitions/io.k8s.api.resource.v1beta1.ResourceClaimConsumerReference"
},


@@ -13,6 +13,19 @@
# limitations under the License.
from decimal import Decimal, InvalidOperation
_EXPONENTS = {
"n": -3,
"u": -2,
"m": -1,
"K": 1,
"k": 1,
"M": 2,
"G": 3,
"T": 4,
"P": 5,
"E": 6,
}
def parse_quantity(quantity):
"""
@@ -35,17 +48,14 @@ def parse_quantity(quantity):
if isinstance(quantity, (int, float, Decimal)):
return Decimal(quantity)
exponents = {"n": -3, "u": -2, "m": -1, "K": 1, "k": 1, "M": 2,
"G": 3, "T": 4, "P": 5, "E": 6}
quantity = str(quantity)
number = quantity
suffix = None
if len(quantity) >= 2 and quantity[-1] == "i":
if quantity[-2] in exponents:
if quantity[-2] in _EXPONENTS:
number = quantity[:-2]
suffix = quantity[-2:]
elif len(quantity) >= 1 and quantity[-1] in exponents:
elif len(quantity) >= 1 and quantity[-1] in _EXPONENTS:
number = quantity[:-1]
suffix = quantity[-1:]
@@ -68,8 +78,65 @@ def parse_quantity(quantity):
if suffix == "ki":
raise ValueError("{} has unknown suffix".format(quantity))
if suffix[0] not in exponents:
if suffix[0] not in _EXPONENTS:
raise ValueError("{} has unknown suffix".format(quantity))
exponent = Decimal(exponents[suffix[0]])
exponent = Decimal(_EXPONENTS[suffix[0]])
return number * (base ** exponent)
def format_quantity(quantity_value, suffix, quantize=None) -> str:
"""
Takes a decimal and produces a string value in kubernetes' canonical quantity form,
like "200Mi". Users can specify an additional decimal number to quantize the output.
Example - Relatively increase pod memory limits:
# retrieve my_pod
current_memory: Decimal = parse_quantity(my_pod.spec.containers[0].resources.limits.memory)
desired_memory = current_memory * 1.2
desired_memory_str = format_quantity(desired_memory, suffix="Gi", quantize=Decimal(1))
# patch pod with desired_memory_str
'quantize=Decimal(1)' ensures that the result does not contain any fractional digits.
Supported SI suffixes:
base1024: Ki | Mi | Gi | Ti | Pi | Ei
base1000: n | u | m | "" | k | M | G | T | P | E
See https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go
Input:
quantity_value: Decimal. The quantity as a number, to be converted to a string
with an SI suffix.
suffix: string. The desired suffix/unit-of-measure of the output string
quantize: Decimal. Can be used to round/quantize the value before the string
is returned. Defaults to None.
Returns:
string. Canonical Kubernetes quantity string containing the SI suffix.
Raises:
ValueError if the SI suffix is not supported.
"""
if not suffix:
return str(quantity_value)
if suffix.endswith("i"):
base = 1024
elif len(suffix) == 1:
base = 1000
else:
raise ValueError(f"{quantity_value} has unknown suffix")
if suffix == "ki":
raise ValueError(f"{quantity_value} has unknown suffix")
if suffix[0] not in _EXPONENTS:
raise ValueError(f"{quantity_value} has unknown suffix")
different_scale = quantity_value / Decimal(base ** _EXPONENTS[suffix[0]])
if quantize:
different_scale = different_scale.quantize(quantize)
return str(different_scale) + suffix
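The scaling step above can be exercised in isolation. The following is a minimal standalone sketch of the same suffix handling (`format_quantity_sketch` is a hypothetical name, and the unknown-suffix error handling of the real function is omitted):

```python
from decimal import Decimal

# Same exponent table as quantity.py: "i"-suffixes use base 1024, plain
# suffixes use base 1000, and the first letter selects the exponent.
_EXPONENTS = {"n": -3, "u": -2, "m": -1, "K": 1, "k": 1,
              "M": 2, "G": 3, "T": 4, "P": 5, "E": 6}

def format_quantity_sketch(value, suffix, quantize=None):
    if not suffix:
        return str(value)
    base = 1024 if suffix.endswith("i") else 1000  # "Mi" -> 1024, "M" -> 1000
    scaled = value / Decimal(base ** _EXPONENTS[suffix[0]])
    if quantize:
        scaled = scaled.quantize(quantize)
    return str(scaled) + suffix

print(format_quantity_sketch(Decimal(1024**2), "Mi"))                   # -> 1Mi
print(format_quantity_sketch(Decimal(1_500_000), "M", Decimal("0.1")))  # -> 1.5M
```

Dividing by `base ** exponent` rescales the raw value into the requested unit; the optional `quantize` argument then rounds it before the suffix is appended, exactly as in the function above.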


@@ -18,13 +18,13 @@ import sys
KUBERNETES_BRANCH = "release-1.32"
# client version for packaging and releasing.
CLIENT_VERSION = "32.0.0+snapshot"
CLIENT_VERSION = "32.0.1"
# Name of the release package
PACKAGE_NAME = "kubernetes"
# Stage of development, mainly used in setup.py's classifiers.
DEVELOPMENT_STATUS = "3 - Alpha"
DEVELOPMENT_STATUS = "5 - Production/Stable"
# If called directly, return the constant value given


@@ -15199,7 +15199,7 @@
"x-kubernetes-list-type": "map"
},
"reservedFor": {
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 32 such reservations. This may get increased in the future, but not reduced.",
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 256 such reservations. This may get increased in the future, but not reduced.",
"items": {
"$ref": "#/definitions/v1alpha3.ResourceClaimConsumerReference"
},
@@ -16032,7 +16032,7 @@
"x-kubernetes-list-type": "map"
},
"reservedFor": {
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 32 such reservations. This may get increased in the future, but not reduced.",
"description": "ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated.\n\nIn a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n\nBoth schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n\nThere can be at most 256 such reservations. This may get increased in the future, but not reduced.",
"items": {
"$ref": "#/definitions/v1beta1.ResourceClaimConsumerReference"
},
@@ -98896,7 +98896,7 @@
}
}
},
"/apis/{group}/{version}/{plural}": {
"/apis/{group}/{version}/{resource_plural}": {
"parameters": [
{
"uniqueItems": true,
@@ -98920,7 +98920,7 @@
"type": "string"
},
{
"name": "plural",
"name": "resource_plural",
"in": "path",
"required": true,
"description": "The custom resource's plural name. For TPRs this would be lowercase plural kind.",


@@ -16,9 +16,9 @@ from setuptools import setup
# Do not edit these constants. They will be updated automatically
# by scripts/update-client.sh.
CLIENT_VERSION = "32.0.0+snapshot"
CLIENT_VERSION = "32.0.1"
PACKAGE_NAME = "kubernetes"
DEVELOPMENT_STATUS = "3 - Alpha"
DEVELOPMENT_STATUS = "5 - Production/Stable"
# To install the library, run the following
#