Adds certificates.k8s.io Adoptions Design Document

Signed-off-by: joshvanl <vleeuwenjoshua@gmail.com>
joshvanl 2021-02-09 16:48:54 +00:00
parent 218408a741
commit 5f98ba69f4

---
title: certificates.k8s.io Adoption
authors:
- "@joshvanl"
reviewers:
- "@joshvanl"
approvers:
- "@munnerz"
- "@jetstack/team-cert-manager"
editor: "@joshvanl"
creation-date: 2021-02-09
last-updated: 2021-02-09
status: provisional
---
# certificates.k8s.io Adoption
## Table of Contents
<!-- toc -->
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Signers](#signers)
- [API Changes](#api-changes)
- [Upgrading](#upgrading)
- [Risks and Mitigations](#risks-and-mitigations)
<!-- /toc -->
## Summary
In Kubernetes v1.19 the
[`CertificateSigningRequest`](https://github.com/kubernetes/api/blob/48bd8381a38a486f8b3cdf28cf7334a45b182f2e/certificates/v1/types.go#L41)
resource graduated to `certificates.k8s.io/v1`. This makes requesting,
signing, and consuming certificates a first-class concept in
Kubernetes. cert-manager is well placed to serve this resource type as it becomes
more popular in the wider community, whilst preserving all the features and
extensions that cert-manager has to offer.
## Motivation
With its extensive ecosystem, cert-manager is well placed to manage this core
Kubernetes resource type. Doing so gives cert-manager the ability to integrate
with components that it otherwise couldn't, without requiring changes to those
third-party projects.
### Goals
- Add `CertificateSigningRequest` signer controllers to cert-manager for all
`Issuer` types
### Non-Goals
- Remove the `CertificateRequest` resource in favour of
`CertificateSigningRequest`
- Change the behaviour of the `Certificate` controllers
- Support `CertificateSigningRequest` pre v1
- Make changes to upstream Kubernetes to implement controllers in cert-manager
## Proposal
Instead of the concept of an `IssuerRef` for `CertificateRequest`s,
`CertificateSigningRequest`s have the concept of a `SignerName`. Since
`CertificateSigningRequest`s are cluster scoped resources, the signer name can
be directly mapped to the `ClusterIssuer` resource. `ClusterIssuers` will be
referenced in the following format:
```yaml
signerName: cert-manager.io/${ClusterIssuer}
```
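For illustration, a request targeting a hypothetical `ClusterIssuer` named `my-ca` could look like the following sketch (the resource name and CSR payload are placeholders, not part of this proposal):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-request  # hypothetical name
spec:
  # Maps to the ClusterIssuer named "my-ca"
  signerName: cert-manager.io/my-ca
  # Base64-encoded PEM PKCS#10 certificate signing request (truncated)
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0...
  usages:
  - digital signature
  - key encipherment
```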
Each `CertificateSigningRequest` controller will behave in the same way as the
existing `CertificateRequest` controllers: it will get the referenced
`ClusterIssuer` and attempt to sign. If the `ClusterIssuer` type is not managed
by this controller, it does nothing; otherwise it signs.
Each `CertificateSigningRequest` controller will set the
`CertificateSigningRequest` `RequestCondition` to `Approved` if the request is
for their managed `ClusterIssuer` type, and the `ClusterIssuer` is ready. If
the controller does manage that `ClusterIssuer` type, but the `ClusterIssuer`
doesn't exist or is not ready, the controller will set the condition to
`Failed`.
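As a sketch, the resulting conditions on a request handled by the matching signer controller might look as follows, for the approved and failed cases respectively (reasons and messages are illustrative):

```yaml
# ClusterIssuer exists and is ready: the controller approves the request.
status:
  conditions:
  - type: Approved
    status: "True"
    reason: cert-manager.io
    message: ClusterIssuer "my-ca" is ready; request approved for signing
---
# ClusterIssuer is missing or not ready: the controller fails the request.
status:
  conditions:
  - type: Failed
    status: "True"
    reason: cert-manager.io
    message: Referenced ClusterIssuer "my-ca" does not exist
```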
### Signers
There are special cases for some `ClusterIssuer`s that need to be addressed:
- SelfSigned: Makes use of annotations, and so these annotations should also be
present on `CertificateSigningRequest`s.
- Venafi: Makes use of annotations, and so these annotations should also be
present on `CertificateSigningRequest`s.
- ACME: The ACME controller creates sub-resources (`Orders`). Since
`CertificateSigningRequest`s are cluster scoped resources, we should create
`Orders` in the `Cluster Resource Namespace` (default `cert-manager`).
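As an illustrative sketch only (the annotation key below is an assumption and would need to be defined during implementation), annotation-driven configuration for a signer such as SelfSigned might be carried on the request like so:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-selfsigned  # hypothetical name
  annotations:
    # Hypothetical key telling the SelfSigned signer which Secret holds
    # the private key to sign with, mirroring CertificateRequest behaviour.
    cert-manager.io/private-key-secret-name: example-key
spec:
  signerName: cert-manager.io/my-selfsigned-issuer
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0...
  usages:
  - digital signature
```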
### API Changes
No API changes are required to support this resource.
### Upgrading
Upgrades are unaffected: only additional controllers are added, and there are
no API changes.
### Risks and Mitigations
The controllers need to be aware of whether the `certificates.k8s.io/v1`
`CertificateSigningRequest` resource exists (it does not on pre-v1.19
clusters). If it doesn't exist, they should gracefully never start. If the
Kubernetes API server is later upgraded to a version that does support this
resource, cert-manager will need to be restarted to make use of these
controllers. This is acceptable, since worker nodes are typically restarted
during a cluster upgrade anyway.