Install the Controller in Kubernetes
ziti-controller
Host an OpenZiti controller in Kubernetes
Requirements
Add the OpenZiti Charts Repo with Helm.
helm repo add openziti https://docs.openziti.io/helm-charts/
This chart runs a Ziti controller in Kubernetes.
Mutual TLS
Ziti's TLS server ports must be published with TLS passthrough so that the controller can validate client certificates from routers and identities. This may be done with a Traefik IngressRouteTCP, a Gateway API TLSRoute, an Ingress, a NodePort or LoadBalancer service, etc. The ctrl plane and management API share the client API's TLS listener by default, so only one TCP port must be published with TLS passthrough enabled.
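For example, here is a minimal sketch of a Gateway API TLSRoute that forwards TLS traffic, unterminated, to the client API's ClusterIP service. It assumes a Gateway named my-gateway with a listener in TLS Passthrough mode in a gateway-infra namespace, and a Helm release named ziti-controller in the ziti namespace; adjust the names to your environment.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: ziti-controller-client
  namespace: ziti
spec:
  parentRefs:
    # assumption: a Gateway with a listener configured for TLS passthrough
    - name: my-gateway
      namespace: gateway-infra
  hostnames:
    - ctrl1.ziti.example.com
  rules:
    - backendRefs:
        # the chart's client API cluster service is named {release}-client
        - name: ziti-controller-client
          port: 443   # matches clientApi.advertisedPort (default 443)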
Certificates
It is not normally necessary to obtain publicly trusted certificates for Ziti's TLS servers. Ziti manages the trust relationships between the controller and routers and clients independent of the web's root authorities. See the Alternative Web Server Certificates section for more information.
Deployment
The deployment must have exactly one replica.
Custom Resources
This chart requires the custom resources provided by cert-manager and trust-manager, i.e., Issuer, Certificate, and Bundle. Trust Manager is limited to one instance per cluster and one namespace from which trust Bundle inputs may be sourced, so only a single Ziti controller can occupy the cluster unless your use case allows controllers from multiple networks to share a namespace. You must set Trust Manager's "trust namespace" to the Ziti controller's namespace so that it can compose a trust Bundle resource from Ziti's root CA cert(s).
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager --create-namespace \
--set crds.enabled=true
kubectl create namespace ziti
helm upgrade --install trust-manager jetstack/trust-manager \
--namespace cert-manager \
--set crds.keep=false \
--set app.trust.namespace=ziti
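Optionally, confirm the required custom resource definitions are available before installing the controller chart.
kubectl api-resources --api-group=cert-manager.io
kubectl api-resources --api-group=trust.cert-manager.io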
Breaking Change
Version 2 of this chart introduces a breaking change. You must decouple cert-manager and trust-manager from the Ziti controller chart if they were previously installed as subcharts. This allows them to be upgraded and configured independently of the Ziti controller chart.
Symptom
Error: Unable to continue with install: CustomResourceDefinition "certificaterequests.cert-manager.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "cert-manager"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "cert-manager"
Cause
Cert Manager and Trust Manager are no longer included as subcharts, so upgrading the Ziti controller chart will delete the cert-manager and trust-manager Operators along with their respective CRDs and associated resources, which are critical for the Ziti controller.
Solution
- As with any controller upgrade, you are advised to back up the database before proceeding so that you will have the option to roll back to a snapshot prior to any irreversible database schema migrations that may occur during an upgrade.
- Upgrade the Ziti controller Helm release to chart v2. This will temporarily uninstall the CM and TM Helm releases if they were originally installed as dependencies of the Ziti controller chart.
- Install or upgrade as desired the cert-manager and trust-manager Helm releases (see Custom Resources section above for an example that is compatible with this upgrade path).
- If the cert-manager or trust-manager charts fail to install with the "symptom" above, then run the provided BASH script (chown-cert-manager.bash) to set the owner labels and annotations on existing cert-manager and trust-manager CRDs and resources.
- Retry installing the cert-manager and trust-manager Helm charts. When they are installed successfully, their respective Helm releases will own the CRDs that were annotated and labeled by the provided BASH script.
You must use the same values for CM and TM Helm release names and namespaces when you run the provided script and when you re-install the cert-manager and trust-manager Helm charts.
Assuming your future CM release will be named "cert-manager", your future TM release will be named "trust-manager", both will be installed in the "cert-manager" namespace, and your Ziti controller is installed in the "ziti" namespace, you can run the provided script with these example values. This paves the way to installing the version 2 ziti-controller chart, which will delete the cert-manager and trust-manager Operators while preserving the CRDs and their associated resources.
helm pull openziti/ziti-controller
tar -xvf ziti-controller-*.tgz
CM_NAMESPACE=cert-manager \
CM_RELEASE_NAME=cert-manager \
TM_NAMESPACE=cert-manager \
TM_RELEASE_NAME=trust-manager \
ZITI_NAMESPACE=ziti \
./ziti-controller/files/chown-cert-manager.bash
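To spot-check that the script applied the Helm ownership metadata, you can inspect one of the cert-manager CRDs (any of them will do); this is only a sanity check, not a required step.
kubectl get crd certificaterequests.cert-manager.io --show-labels
kubectl get crd certificaterequests.cert-manager.io -o jsonpath='{.metadata.annotations}'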
NodePort Service Example
Value | Description |
---|---|
clientApi.advertisedHost | the address that clients and routers will use to reach this controller |
clientApi.advertisedPort | the TCP port associated with the advertisedHost |
clientApi.service.type | the service type for the client API and router control plane |
helm upgrade ziti-controller openziti/ziti-controller \
--install \
--namespace ziti \
--create-namespace \
--set clientApi.advertisedHost=ctrl1.ziti.example.com \
--set clientApi.advertisedPort=32171 \
--set clientApi.service.type=NodePort
Here's the YAML representation of the same set of input values.
clientApi:
advertisedHost: ctrl1.ziti.example.com
advertisedPort: 32171
service:
type: NodePort
Visit the Ziti Administration Console (ZAC): https://ctrl1.ziti.example.com:32171/zac/
Log in with the ziti CLI.
ziti edge login ctrl1.ziti.example.com:32171 --yes --username admin --password $(
kubectl -n ziti get secrets ziti-controller-admin-secret -o go-template='{{index .data "admin-password" | base64decode }}'
)
Nginx Ingress Example
Here's an example of using the community ingress-nginx chart to provision ingresses for the controller's ClusterIP services.
Ensure you have the ingress-nginx chart installed with controller.extraArgs.enable-ssl-passthrough=true. You can verify this feature is enabled by running kubectl describe pods {ingress-nginx-controller pod} and checking the args for --enable-ssl-passthrough=true.
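For example, a quick check (assuming the ingress-nginx chart's default pod labels and namespace):
kubectl --namespace ingress-nginx describe pods \
  --selector app.kubernetes.io/name=ingress-nginx \
  | grep -- --enable-ssl-passthrough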
If necessary, patch the ingress-nginx deployment to enable TLS passthrough.
kubectl patch deployment "ingress-nginx-controller" \
--namespace ingress-nginx \
--type json \
--patch '[{"op": "add",
"path": "/spec/template/spec/containers/0/args/-",
"value":"--enable-ssl-passthrough"
}]'
Create a Helm chart values file like this.
# ziti-controller-values.yml
clientApi:
advertisedHost: ctrl1.ziti.example.com
advertisedPort: 443
service:
type: ClusterIP
ingress:
enabled: true
ingressClassName: nginx
annotations:
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Now install or upgrade this controller chart with your values file.
helm upgrade ziti-controller openziti/ziti-controller \
--install \
--namespace ziti \
--values ziti-controller-values.yml
Visit the Ziti Administration Console (ZAC): https://ctrl1.ziti.example.com/zac/
Log in with the ziti CLI.
ziti edge login ctrl1.ziti.example.com:443 --yes --username admin --password $(
kubectl -n ziti get secrets ziti-controller-admin-secret -o go-template='{{index .data "admin-password" | base64decode }}'
)
Traefik IngressRouteTCP Example
This will create a Traefik IngressRouteTCP with TLS passthrough for the client API's ClusterIP service.
helm upgrade ziti-controller openziti/ziti-controller \
--install \
--namespace ziti \
--create-namespace \
--set clientApi.advertisedHost=ctrl1.ziti.example.com \
--set clientApi.advertisedPort=443 \
--set clientApi.service.type=ClusterIP \
--set clientApi.traefikTcpRoute.enabled=true
Visit the Ziti Administration Console (ZAC): https://ctrl1.ziti.example.com/zac/
Log in with the ziti CLI.
ziti edge login ctrl1.ziti.example.com:443 --yes --username admin --password $(
kubectl -n ziti get secrets ziti-controller-admin-secret -o go-template='{{index .data "admin-password" | base64decode }}'
)
Admin User and Password
A default admin user and password will be generated and saved to a secret during installation. The credentials can be retrieved using this command.
kubectl get secret \
-n ziti ziti-controller-admin-secret \
-o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Extra Security for the Management API
You can split the client and management APIs into separate cluster services by setting managementApi.service.enabled=true. With this configuration, you'll have an additional cluster service named {release}-mgmt that serves the management API, and the client API will no longer expose management features.
This Helm chart's values allow for both operational scenarios: combined and split. The default is to expose the combined client and management APIs as the cluster service named {release}-client, which is convenient because you can use the ziti CLI immediately. For additional security, you may shelter the management API by splitting these two sets of features and exposing them as separate API servers. After the split, you can access the management API in several ways:
- Deploy a tunneler to bind a Ziti service targeting {release}-mgmt.{namespace}.svc:{port}.
- Forward a local port: kubectl -n {namespace} port-forward deployments/{release}-mgmt 8443:{port}
The web console (ZAC) is always bound to the same web listener as the management API, so you can access it at the /zac/ path on the same URL.
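For example, a minimal sketch of enabling the split on an existing release (assuming the release name ziti-controller in namespace ziti):
helm upgrade ziti-controller openziti/ziti-controller \
  --namespace ziti \
  --reuse-values \
  --set managementApi.service.enabled=true
After the upgrade, the management API is reachable inside the cluster at ziti-controller-mgmt.ziti.svc on the configured managementApi.advertisedPort, and ZAC is at the /zac/ path of that address.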
Advanced PKI
The default configuration generates a single PKI root of trust for all of the controller's servers and the edge signer CA. Optionally, you may provide the name of a cert-manager Issuer or ClusterIssuer to become the root of trust for the Ziti controller's identity.
Merge this with your Helm chart values file before installing or upgrading.
ctrlPlane:
issuer:
kind: ClusterIssuer
name: my-alternative-cluster-issuer
You may also configure the Ziti controller to use separate PKI roots of trust for its three main identities: control plane, edge signer, and web bindings.
For example, to use a separate CA for the edge signer function, merge this with your Helm chart values file before installing or upgrading.
edgeSignerPki:
enabled: true
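You may also source the edge signer's intermediate CA from an existing cert-manager issuer with edgeSignerPki.alternativeIssuer. Here is a minimal sketch, assuming the alternativeIssuer object accepts the same kind and name fields used in the ctrl plane example above:
edgeSignerPki:
  enabled: true
  alternativeIssuer:
    # assumption: kind and name mirror the ctrl plane issuer example
    kind: ClusterIssuer
    name: my-alternative-cluster-issuer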
Prometheus Monitoring
This chart provides a ziti-controller-prometheus cluster service for Prometheus, which is disabled by default and can be enabled with prometheus.service.enabled. Enabling it also creates a Prometheus ServiceMonitor (when prometheus.serviceMonitor.enabled is true) for configuring the Prometheus scrape endpoint. It is also important that you enable fabric.events.enabled to get a full set of metrics.
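For example, merge a values excerpt like this before installing or upgrading (all keys are listed in the values reference below):
fabric:
  events:
    enabled: true           # emit fabric events so the full metric set is available
prometheus:
  service:
    enabled: true           # create the Prometheus cluster service
  serviceMonitor:
    enabled: true           # default; creates a ServiceMonitor for Prometheus Operator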
For more information, please check here.
Values Reference
Key | Type | Default | Description |
---|---|---|---|
additionalConfigs | object | {"ctrl":{},"events":{},"healthChecks":{},"network":{},"web":{}} | Append additional config blocks in specific top-level keys: edge, web, network, ctrl. If events are defined here, they replace the default events section entirely. |
additionalVolumes | list | [] | additional volumes to mount to ziti-controller container |
affinity | object | {} | deployment template spec affinity |
ca.clusterDomain | string | "cluster.local" | Set a custom cluster domain if other than cluster.local |
ca.duration | string | "87840h" | Go time.Duration string format |
ca.renewBefore | string | "720h" | Go time.Duration string format |
cert.duration | string | "87840h" | server certificate duration as Go time.Duration string format |
cert.renewBefore | string | "720h" | renew server certificates before expiry as Go time.Duration string format |
clientApi.advertisedHost | string | "" | global DNS name by which routers can resolve a reachable IP for this service |
clientApi.advertisedPort | int | 443 | cluster service, node port, load balancer, and ingress port |
clientApi.altDnsNames | list | [] | besides advertisedHost and dnsNames, add these DNS SANs to any ingresses but not the web identity |
clientApi.containerPort | int | 1280 | cluster service target port on the container |
clientApi.dnsNames | list | [] | besides advertisedHost, add these DNS SANs to the web identity and any ingresses |
clientApi.ingress.annotations | object | {} | ingress annotations, e.g., to configure ingress-nginx |
clientApi.ingress.enabled | bool | false | create a TLS-passthrough ingress for the client API's ClusterIP service |
clientApi.ingress.ingressClassName | string | "" | ingress class name, e.g., "nginx" |
clientApi.ingress.labels | object | {} | ingress labels |
clientApi.ingress.tls | object | {} | deprecated: tls passthrough is required |
clientApi.service.enabled | bool | true | create a cluster service for the deployment |
clientApi.service.type | string | "LoadBalancer" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
clientApi.traefikTcpRoute.enabled | bool | false | enable Traefik IngressRouteTCP |
clientApi.traefikTcpRoute.entryPoints | list | ["websecure"] | IngressRouteTCP entrypoints |
clientApi.traefikTcpRoute.labels | object | {} | IngressRouteTCP labels |
consoleAltIngress | object | {} | override the address printed in Helm release notes if you configured an alternative DNS SAN for the console |
ctrlPlane.advertisedHost | string | "{{ .Values.clientApi.advertisedHost }}" | global DNS name by which routers can resolve a reachable IP for this service: default is cluster service DNS name which assumes all routers are inside the same cluster |
ctrlPlane.advertisedPort | string | "{{ .Values.clientApi.advertisedPort }}" | cluster service, node port, load balancer, and ingress port |
ctrlPlane.alternativeIssuer | object | {} | obtain the ctrl plane identity from an existing issuer instead of generating a new PKI |
ctrlPlane.containerPort | string | "{{ .Values.clientApi.containerPort }}" | cluster service target port on the container |
ctrlPlane.dnsNames | list | [] | besides advertisedHost, add these DNS SANs to the ctrl plane identity and any ctrl plane ingresses |
ctrlPlane.ingress.annotations | object | {} | ingress annotations, e.g., to configure ingress-nginx |
ctrlPlane.ingress.enabled | bool | false | create an ingress for the cluster service |
ctrlPlane.ingress.ingressClassName | string | "" | ingress class name, e.g., "nginx" |
ctrlPlane.ingress.labels | object | {} | ingress labels |
ctrlPlane.ingress.tls | object | {} | deprecated: tls passthrough is required |
ctrlPlane.service.enabled | bool | false | create a separate cluster service for the ctrl plane; enabling this requires you to also set the host and port for a separate ctrl plane TLS listener |
ctrlPlane.service.type | string | "ClusterIP" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
ctrlPlane.traefikTcpRoute.enabled | bool | false | enable Traefik IngressRouteTCP |
ctrlPlane.traefikTcpRoute.entryPoints | list | ["websecure"] | IngressRouteTCP entrypoints |
ctrlPlane.traefikTcpRoute.labels | object | {} | IngressRouteTCP labels |
ctrlPlaneCasBundle.namespaceSelector | object | {} | namespaces where trust-manager will create the Bundle resource containing Ziti's trusted CA certs (default: empty means all namespaces) |
customAdminSecretName | string | "" | set the admin user and password from a custom secret; the custom secret must be of type Opaque with data keys admin-user and admin-password |
dbFile | string | "ctrl.db" | name of the BoltDB file |
edgeSignerPki.admin_client_cert | object | {"duration":"8760h","enabled":false,"renewBefore":"720h"} | optional admin client certificate settings (see the admin_client_cert sub-keys below) |
edgeSignerPki.admin_client_cert.duration | string | "8760h" | admin client certificate duration as Go time.Duration |
edgeSignerPki.admin_client_cert.enabled | bool | false | create a client certificate for the admin user |
edgeSignerPki.admin_client_cert.renewBefore | string | "720h" | renew admin client certificate before expiry as Go time.Duration |
edgeSignerPki.alternativeIssuer | object | {} | obtain the edge signer intermediate CA from an existing issuer instead of generating a new PKI |
edgeSignerPki.enabled | bool | true | generate a separate PKI root of trust for the edge signer CA |
env | object | {} | set name to value in containers' environment |
envSecrets | object | {} | set secrets as environment variables in the container |
fabric.events.enabled | bool | false | enable fabric event logger and file handler |
fabric.events.fileName | string | "fabric-events.json" | |
fabric.events.mountDir | string | "/var/run/ziti" | |
fabric.events.network.intervalAgeThreshold | string | "5s" | matching interval age and reporting interval ensures coherent metrics from fabric events |
fabric.events.network.metricsReportInterval | string | "5s" | matching interval age and reporting interval ensures coherent metrics from fabric events |
fabric.events.subscriptions[0].type | string | "fabric.circuits" | |
fabric.events.subscriptions[1].type | string | "fabric.links" | |
fabric.events.subscriptions[2].type | string | "fabric.routers" | |
fabric.events.subscriptions[3].type | string | "fabric.terminators" | |
fabric.events.subscriptions[4].metricFilter | string | ".*" | |
fabric.events.subscriptions[4].sourceFilter | string | ".*" | |
fabric.events.subscriptions[4].type | string | "metrics" | |
fabric.events.subscriptions[5].type | string | "edge.sessions" | |
fabric.events.subscriptions[6].type | string | "edge.apiSessions" | |
fabric.events.subscriptions[7].type | string | "fabric.usage" | |
fabric.events.subscriptions[7].version | int | 3 | |
fabric.events.subscriptions[8].type | string | "services" | |
fabric.events.subscriptions[9].interval | string | "5s" | |
fabric.events.subscriptions[9].type | string | "edge.entityCounts" | |
highAvailability.mode | string | "standalone" | Ziti controller HA mode |
highAvailability.replicas | int | 1 | Ziti controller HA swarm replicas |
image.additionalArgs | list | [] | additional arguments can be passed directly to the container to modify ziti runtime arguments |
image.args | list | ["{{ include \"configMountDir\" . }}/ziti-controller.yaml"] | args for the entrypoint command |
image.command | list | ["ziti","controller","run"] | container entrypoint command |
image.homeDir | string | "/home/ziggy" | homeDir for admin login shell must align with container image's ~/.bashrc for ziti CLI auto-complete to work |
image.pullPolicy | string | "IfNotPresent" | deployment image pull policy |
image.repository | string | "docker.io/openziti/ziti-controller" | container image repository for app deployment |
image.tag | string | "" | override the container image tag specified in the chart |
managementApi | object | {"advertisedHost":"{{ .Values.clientApi.advertisedHost }}","advertisedPort":"{{ .Values.clientApi.advertisedPort }}","altDnsNames":[],"containerPort":1281,"dnsNames":[],"ingress":{"annotations":{},"enabled":false,"ingressClassName":"","labels":{},"tls":{}},"service":{"enabled":false,"type":"ClusterIP"},"traefikTcpRoute":{"enabled":false,"entryPoints":["websecure"],"labels":{}}} | by default, there's no need for a separate cluster service, ingress, or load balancer for the management API because it shares a TLS listener with the client API, and is reachable at the same address and presents the same web identity cert; you may configure a separate service, ingress, load balancer, etc. for the management API by setting managementApi.service.enabled=true |
managementApi.advertisedHost | string | "{{ .Values.clientApi.advertisedHost }}" | global DNS name by which routers can resolve a reachable IP for this service |
managementApi.advertisedPort | string | "{{ .Values.clientApi.advertisedPort }}" | cluster service, node port, load balancer, and ingress port |
managementApi.altDnsNames | list | [] | besides advertisedHost and dnsNames, add these DNS SANs to any mgmt api ingresses, but not the web identity |
managementApi.containerPort | int | 1281 | cluster service target port on the container |
managementApi.dnsNames | list | [] | besides advertisedHost, add these DNS SANs to the web identity and any mgmt api ingresses |
managementApi.ingress.annotations | object | {} | ingress annotations, e.g., to configure ingress-nginx |
managementApi.ingress.enabled | bool | false | create a TLS-passthrough ingress for the management API's ClusterIP service |
managementApi.ingress.ingressClassName | string | "" | ingress class name, e.g., "nginx" |
managementApi.ingress.labels | object | {} | ingress labels |
managementApi.ingress.tls | object | {} | deprecated: tls passthrough is required |
managementApi.service.enabled | bool | false | create a cluster service for the deployment |
managementApi.service.type | string | "ClusterIP" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
managementApi.traefikTcpRoute.enabled | bool | false | enable Traefik IngressRouteTCP |
managementApi.traefikTcpRoute.entryPoints | list | ["websecure"] | IngressRouteTCP entrypoints |
managementApi.traefikTcpRoute.labels | object | {} | IngressRouteTCP labels |
network.createCircuitRetries | int | 2 | createCircuitRetries controls the number of retries that will be attempted to create a path (and terminate it) for new circuits. |
network.cycleSeconds | int | 15 | Defines the period that the controller re-evaluates the performance of all of the circuits running on the network. |
network.initialLinkLatency | string | "65s" | Sets the latency of a link when it's first created. It will be overwritten as soon as latency from the link is actually reported by the routers. Defaults to 65 seconds. |
network.minRouterCost | int | 10 | Sets router minimum cost. Defaults to 10 |
network.pendingLinkTimeoutSeconds | int | 10 | pendingLinkTimeoutSeconds controls how long we'll wait before creating a new link between routers where there isn't an established link, but a link request has been sent |
network.routeTimeoutSeconds | int | 10 | routeTimeoutSeconds controls the number of seconds the controller will wait for a route attempt to succeed. |
network.routerConnectChurnLimit | string | "1m" | Sets how often a new control channel connection can take over for a router with an existing control channel connection. Defaults to 1 minute. |
network.smart.rerouteCap | int | 4 | Defines the hard upper limit of underperforming circuits that are candidates to be re-routed. If smart routing detects 100 circuits that are underperforming, smart.rerouteCap is set to 1, and smart.rerouteFraction is set to 0.02, then the upper limit of circuits that will be re-routed in this cycleSeconds period will be limited to 1. |
network.smart.rerouteFraction | float | 0.02 | Defines the fractional upper limit of underperforming circuits that are candidates to be re-routed. If smart routing detects 100 circuits that are underperforming and smart.rerouteFraction is set to 0.02, then the upper limit of circuits that will be re-routed in this cycleSeconds period will be limited to 2 (2% of 100). |
nodeSelector | object | {} | deployment template spec node selector |
persistence.VolumeName | string | "" | PVC volume name |
persistence.accessMode | string | "ReadWriteOnce" | PVC access mode: ReadWriteOnce (concurrent mounts not allowed), ReadWriteMany (concurrent allowed) |
persistence.annotations | object | {} | annotations for the PVC |
persistence.enabled | bool | true | required: place a storage claim for the BoltDB persistent volume |
persistence.existingClaim | string | "" | a manually managed Persistent Volume and Claim; requires persistence.enabled=true; if defined, the PVC must be created manually before the volume will be bound |
persistence.size | string | "2Gi" | 2GiB is enough for tens of thousands of entities, but feel free to make it larger |
persistence.storageClass | string | "" | Storage class of PV to bind. By default it looks for the default storage class. If the PV uses a different storage class, specify that here. |
podAnnotations | object | {} | annotations to apply to all pods deployed by this chart |
podSecurityContext | object | {"fsGroup":2171} | deployment template spec security context |
podSecurityContext.fsGroup | int | 2171 | the GID of the group that should own any files created by the container, especially the BoltDB file |
prometheus.advertisedHost | string | "" | DNS name to advertise in place of the default internal cluster name built from the Helm release name |
prometheus.advertisedPort | int | 443 | cluster service, node port, load balancer, and ingress port |
prometheus.containerPort | int | 9090 | cluster service target port on the container |
prometheus.service.annotations | object | {} | |
prometheus.service.enabled | bool | false | create a cluster service for the deployment |
prometheus.service.labels | object | {"app":"prometheus"} | extra labels for matching only this service, e.g., by a ServiceMonitor |
prometheus.service.type | string | "ClusterIP" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
prometheus.serviceMonitor.annotations | object | {} | ServiceMonitor annotations |
prometheus.serviceMonitor.enabled | bool | true | If enabled, and prometheus service is enabled, ServiceMonitor resources for Prometheus Operator are created |
prometheus.serviceMonitor.interval | string | nil | ServiceMonitor scrape interval |
prometheus.serviceMonitor.labels | object | {} | Additional ServiceMonitor labels |
prometheus.serviceMonitor.metricRelabelings | list | [] | ServiceMonitor relabel configs to apply to samples as the last step before ingestion (reference) |
prometheus.serviceMonitor.namespace | string | nil | Alternative namespace for ServiceMonitor resources |
prometheus.serviceMonitor.namespaceSelector | object | {} | Namespace selector for ServiceMonitor resources |
prometheus.serviceMonitor.relabelings | list | [] | ServiceMonitor relabel configs to apply to samples before scraping (defines relabel_configs ; reference) |
prometheus.serviceMonitor.scheme | string | "https" | scheme the ServiceMonitor uses to scrape the metrics endpoint: http or https |
prometheus.serviceMonitor.scrapeTimeout | string | nil | ServiceMonitor scrape timeout in Go duration format (e.g. 15s) |
prometheus.serviceMonitor.targetLabels | list | [] | ServiceMonitor will add labels from the service to the Prometheus metric (reference) |
prometheus.serviceMonitor.tlsConfig | object | {"insecureSkipVerify":true} | ServiceMonitor will use these tlsConfig settings to make the health check requests |
prometheus.serviceMonitor.tlsConfig.insecureSkipVerify | bool | true | set TLS skip verify, because the SAN will not match with the pod IP |
resources | object | {} | deployment container resources |
securityContext | object | {} | deployment container security context |
spireAgent.enabled | bool | false | if you are running a container with the spire-agent binary installed then this will allow you to add the hostpath necessary for connecting to the spire socket |
spireAgent.spireSocketMnt | string | "/run/spire/sockets" | file path of the spire socket mount |
tolerations | list | [] | deployment template spec tolerations |
trustDomain | string | "" | permanent SPIFFE ID to use for this controller's trust domain (default: random, fixed for the life of the chart release) |
useCustomAdminSecret | bool | false | allow using a custom admin secret, which must be created beforehand; if enabled, the admin secret will not be generated by this Helm chart |
webBindingPki.altServerCerts | list | [] | |
webBindingPki.alternativeIssuer | object | {} | obtain the web identity from an existing issuer instead of generating a new PKI |
webBindingPki.enabled | bool | true | generate a separate PKI root of trust for web bindings, i.e., client, management, and prometheus APIs |
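For example, a values sketch that tunes the smart routing limits described in the network.* rows above (the values shown are the chart defaults):
network:
  cycleSeconds: 15        # how often circuit performance is re-evaluated
  smart:
    rerouteFraction: 0.02 # fractional cap on underperforming circuits re-routed per cycle
    rerouteCap: 4         # hard cap on circuits re-routed per cycle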
TODO's
- High availability clustered mode
- Deploy Prometheus scraper configuration when prometheus.enabled = true
Alternative Web Server Certificates
The purpose of the alt_server_certs feature is to bind a publicly trusted server certificate to the controller's web listener. This is useful for publishing the controller's client API with a different DNS name for BrowZer and console clients that must verify the controller's identity with their OS trusted root store.
Request an alternative server certificate from a cert-manager issuer
The most automatic way to bind an alt cert is the certManager mode provided by this chart. This example implies you have separately created a cert-manager ClusterIssuer named "cloudflare-dns01-issuer" that is able to obtain a certificate for the specified DNS name. If you publish the client API's alternative DNS name as a separate Ingress, you may reference that advertised host when requesting the alternative server certificate; the example uses an inline template to ensure they match.
clientApi:
advertisedHost: edge.ziti.example.com
ingress:
enabled: true
ingressClassName: nginx
annotations:
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
service:
enabled: true
type: ClusterIP
altDnsNames:
- "alt-edge.ziti.example.com"
webBindingPki:
enabled: true
altServerCerts:
- mode: certManager
secretName: my-alt-server-cert
dnsNames:
- "{{ .Values.clientApi.altDnsNames[0] }}"
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: cloudflare-dns01-issuer
mountPath: /etc/ziti/alt-server-cert
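After the certificate is issued, you can confirm it is served for the alternative DNS name; here is a quick check with openssl (assuming the name resolves to your ingress controller):
openssl s_client -connect alt-edge.ziti.example.com:443 \
  -servername alt-edge.ziti.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer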
Use an alternative certificate and key from a TLS secret
The alternative server certificate and key may also be provided from a Kubernetes TLS secret. Declare the TLS secret in the additionalVolumes section and reference it in the altServerCerts section.
additionalVolumes:
- name: my-alt-server-cert
volumeType: secret
mountPath: /etc/ziti/my-alt-server-cert
secretName: my-alt-server-cert
webBindingPki:
altServerCerts:
- mode: secret
secretName: my-alt-server-cert
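The TLS secret can be created ahead of time with kubectl; a minimal sketch, assuming tls.crt and tls.key hold the alternative server certificate chain and private key:
kubectl --namespace ziti create secret tls my-alt-server-cert \
  --cert=tls.crt \
  --key=tls.key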