
Kubernetes Node Daemonset

Version: 1.2.0 Type: application AppVersion: 1.5.12

Dial OpenZiti services with a tunneler daemonset

Homepage: <https://openziti.io>

Source Code

Requirements

Kubernetes: >= 1.20.0-0

Overview

You may use this chart to reach Ziti services node-wide, by DNS name, through your Ziti network. For example, if you create a Ziti service for a package repository or container registry and your cluster has no internet access, the nodes can still reach those repositories and registries through their Ziti services.

NOTE: For single-node Kubernetes distributions like k3s, this works out-of-the-box: you can extend your CoreDNS configuration to forward to the Ziti DNS IP, as shown in the single-node example below. For multi-node Kubernetes installations, where your cluster DNS may run on a different node, you need to install the node-local-dns feature, which ensures that Ziti DNS names are resolved locally by the tunneler on the same node, since Ziti intercept IPs can change from node to node. See this helm chart for a possible implementation.

How this Chart Works

This chart deploys a pod on each node running ziti-edge-tunnel, the OpenZiti Linux tunneler, in transparent proxy mode with a DNS nameserver. The chart uses the container image docker.io/openziti/ziti-edge-tunnel, which runs ziti-edge-tunnel run.

The enrolled Ziti identity JSON is persisted in a volume, and the chart will migrate the identity from a secret to the volume if the legacy secret exists.
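
Once a release is installed and enrolled (see Installation below), you can ask the tunneler for a JSON summary of its state. A minimal check, assuming the release is named ziti-edge-tunnel and runs in the current namespace:

# Runs the status command inside one of the DaemonSet's pods
kubectl exec daemonset/ziti-edge-tunnel -- ziti-edge-tunnel tunnel_status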

Installation

helm repo add openziti https://docs.openziti.io/helm-charts/

After adding the charts repo to Helm, you may enroll the identity and install the chart. You may supply a Ziti identity JSON file when you install the chart. This approach lets you use any option available to the ziti-edge-tunnel enroll command.

ziti-edge-tunnel enroll --jwt /tmp/k8s-tunneler.jwt --identity /tmp/k8s-tunneler.json
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set-file zitiIdentity=/tmp/k8s-tunneler.json

Alternatively, you may supply the JWT directly to the chart. In this case, a private key will be generated on first run and the identity will be enrolled.

helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set-file zitiEnrollToken=/tmp/k8s-tunneler.jwt
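
Either way, you need an enrollment token (JWT) from your controller. If you administer your network with the ziti CLI, minting one looks roughly like the following sketch; the identity name k8s-tunneler and the output path are examples, and older ziti CLI versions also expect an identity type (e.g. device) before the name.

# Requires an admin session, e.g. ziti edge login <controller-address>
ziti edge create identity k8s-tunneler -o /tmp/k8s-tunneler.jwt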

Installation using an existing secret

Warning: this approach does not allow the tunneler to autonomously renew its identity certificate, so you must renew the identity certificate out of band and supply it as an existing secret.

Create the secret:

kubectl create secret generic k8s-tunneler-identity --from-file=persisted-identity=k8s-tunneler.json

Deploy the Helm chart, referring to the existing secret:

helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set secret.existingSecretName=k8s-tunneler-identity

If desired, change the key name persisted-identity with --set secret.keyName=myKeyName.
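
For example, an install that points at the existing secret and overrides the key name (myKeyName is a placeholder):

helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
  --set secret.existingSecretName=k8s-tunneler-identity \
  --set secret.keyName=myKeyName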

Configure CoreDNS

If you want to resolve your Ziti domain inside pods, you need to customize CoreDNS. See the official docs.

Multinode example

Customize the ConfigMap that you apply for node-local-dns by appending the Ziti-specific domain and the upstream DNS server of ziti-edge-tunnel:

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  Corefile: |
    your.ziti.domain:53 {
        log
        errors
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . 100.64.0.2
        prometheus :9253
    }
    __PILLAR__DNS__DOMAIN__:53 {
        errors
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . 100.64.0.2
        prometheus :9253
        health __PILLAR__LOCAL__DNS__:8080
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__UPSTREAM__SERVERS__
        prometheus :9253
    }

Refer to the NodeLocal DNSCache documentation for how to replace the values starting with two underscores, then apply it:

kubectl apply -f nodelocaldns.yaml
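
You can then confirm the cache pods are running on every node; this assumes the k8s-app=node-local-dns label used by the upstream NodeLocal DNSCache manifests:

kubectl -n kube-system get pods -l k8s-app=node-local-dns -o wide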

Single-node example

Customize the CoreDNS configuration:

kubectl -n kube-system apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  ziti.server: |
    your.ziti.domain {
        forward . 100.64.0.2
    }
EOF

Reload the CoreDNS config:

kubectl rollout restart -n kube-system deployment/coredns
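
To check resolution end to end, you can run a throwaway pod and look up a hostname under your Ziti domain; myservice.your.ziti.domain is a hypothetical intercept address here:

kubectl run ziti-dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup myservice.your.ziti.domain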

Air gapped installations

For air-gapped clusters that mirror their registries through this OpenZiti tunneler, an upgrade presents a chicken-and-egg problem: the new image can only be pulled through the tunneler that is being replaced, so the DaemonSet stays in the ImagePullBackOff state forever. To work around this, you can install the prepull-daemonset Helm chart, which pulls the needed ziti-edge-tunnel image version onto every node beforehand. Once the image is present on every node, you can proceed to upgrade the tunneler without problems.
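
If you cannot use that chart, a throwaway DaemonSet that does nothing except pull the new image achieves the same effect. This is only a sketch: the tag 1.5.12 mirrors this chart's AppVersion and should be pinned to the version you are about to upgrade to.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-ziti-edge-tunnel
spec:
  selector:
    matchLabels:
      app: prepull-ziti-edge-tunnel
  template:
    metadata:
      labels:
        app: prepull-ziti-edge-tunnel
    spec:
      containers:
        - name: prepull
          image: docker.io/openziti/ziti-edge-tunnel:1.5.12
          # Override the entrypoint so the pod just idles once the image is pulled
          command: ["/bin/bash", "-c", "sleep infinity"]
EOF

Delete the DaemonSet with kubectl delete daemonset prepull-ziti-edge-tunnel after the upgrade completes.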

Values Reference

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| additionalVolumes | list | [] | additional volumes to mount to ziti-edge-tunnel container |
| affinity | object | {} |  |
| dnsPolicy | string | "ClusterFirstWithHostNet" |  |
| fullnameOverride | string | "" |  |
| hostNetwork | bool | true |  |
| image.args | list | [] |  |
| image.command | list | [] |  |
| image.pullPolicy | string | "IfNotPresent" |  |
| image.registry | string | "docker.io" |  |
| image.repository | string | "openziti/ziti-edge-tunnel" |  |
| image.tag | string | "" |  |
| imagePullSecrets | list | [] |  |
| livenessProbe.exec.command[0] | string | "/bin/bash" |  |
| livenessProbe.exec.command[1] | string | "-c" |  |
| livenessProbe.exec.command[2] | string | `"if (ziti-edge-tunnel tunnel_status \| jq '.Success'); then true; else false; fi"` |  |
| livenessProbe.failureThreshold | int | 3 |  |
| livenessProbe.initialDelaySeconds | int | 180 |  |
| livenessProbe.periodSeconds | int | 60 |  |
| livenessProbe.successThreshold | int | 1 |  |
| livenessProbe.timeoutSeconds | int | 10 |  |
| log.timeFormat | string | "utc" | Set log time format; if set to "utc", then in UTC format, otherwise in milliseconds since the program has started |
| log.tlsUVLevel | int | 3 | TLSUV log level, from 0 to 6 (see README.md Reference) |
| log.zitiLevel | int | 3 | Ziti log level, from 0 to 6 (see README.md Reference) |
| nameOverride | string | "" |  |
| nodeSelector | object | {} | constrain worker nodes where the ziti-edge-tunnel pod can be scheduled |
| podAnnotations | object | {} |  |
| podSecurityContext | object | {} |  |
| ports | list | [] |  |
| resources | object | {} |  |
| secret | object | {} |  |
| securityContext.privileged | bool | true |  |
| serviceAccount.annotations | object | {} | Annotations to add to the service account |
| serviceAccount.create | bool | true | Specifies whether a service account should be created |
| serviceAccount.name | string | "" | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
| spireAgent.enabled | bool | false | if you are running a container with the spire-agent binary installed then this will allow you to add the hostpath necessary for connecting to the spire socket |
| spireAgent.spireSocketMnt | string | "/run/spire/sockets" | file path of the spire socket mount |
| systemDBus.enabled | bool | true | enable D-Bus socket connection |
| systemDBus.systemDBusSocketMnt | string | "/var/run/dbus/system_bus_socket" | file path of the System D-Bus socket mount |
| tolerations | list | [] |  |
| zitiEnrollToken | string | "" | JWT to enroll a new identity and write in the PVC |
| zitiIdentity | string | "" | JSON of an enrolled identity to write in the PVC |

Log Level Reference

OpenZiti tunneler and TLSUV log levels are represented by integers as follows:

| Log Level | Value |
|-----------|-------|
| NONE | 0 |
| ERR | 1 |
| WARN | 2 |
| INFO (default) | 3 |
| DEBUG | 4 |
| VERBOSE | 5 |
| TRACE | 6 |
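
For example, to raise both the Ziti and TLSUV log levels to DEBUG on an existing release while keeping all other values (the release name ziti-edge-tunnel is assumed):

helm upgrade ziti-edge-tunnel openziti/ziti-edge-tunnel \
  --reuse-values \
  --set log.zitiLevel=4 \
  --set log.tlsUVLevel=4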