
Tunneling to NGINX Upstreams

Ziti's NGINX module acts like a tunneler for hosting Ziti services, i.e. as a reverse proxy: NGINX listens for specific, pre-configured Ziti services on the Ziti overlay (not on an IP address) and forwards incoming requests to the NGINX upstream specified for that service in the NGINX config file.

We'll run the module in Kubernetes as an example, but the module works anywhere NGINX does.

Prerequisites


Architecture

Architecture diagrams: Before and After.


Create a Network


Create Identities

Follow the Create Identities guide to create the identities for the ngx-ziti-module and the Windows client.

tip

Client identity name: client-nginx, with attribute #clients. Server module identity name: server-nginx, with attribute #servers.

Download the JWT files and enroll the identities.
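
If you prefer the CLI to the console, a minimal sketch of creating and enrolling both identities with the ziti CLI is shown below (flag spellings and the identity type argument can vary slightly between ziti CLI versions):

# create the two identities with the role attributes used by the service policies later in this guide
ziti edge create identity device client-nginx -a clients -o client-nginx.jwt
ziti edge create identity device server-nginx -a servers -o server-nginx.jwt

# enroll each JWT; this produces client-nginx.json and server-nginx.json identity files
ziti edge enroll client-nginx.jwt
ziti edge enroll server-nginx.jwt

The server-nginx.json file is the one referenced later by the nginx_ziti_identity variable.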


Build Docker Image

ConfigMaps currently have a 1 MB limit on binary data, and ngx_ziti_module.so is around 2-3 MB, so the module cannot be injected into the stock NGINX image that way. Instead, build a custom Docker image that adds the module during the build process.

tip

One way to update the build is to add the following snippet to the Dockerfile (build/Dockerfile) under the common section, i.e. FROM ${BUILD_OS} as common:

# copy ziti module
COPY ./ngx_ziti_module.so /usr/lib/nginx/modules

You also need to add the libc6 package to the Debian build stage in the same Dockerfile, i.e. FROM nginx:1.23.3 AS debian, as shown below. The Alpine build was not tried, but the assumption is that it would be the same.

&& apt-get install --no-install-recommends --no-install-suggests -y libcap2-bin libc6 \

Lastly, if you don't want the image tag to carry a -SNAPSHOT-... suffix, comment it out in the Makefile:

VERSION = $(GIT_TAG)##-SNAPSHOT-$(GIT_COMMIT_SHORT)

Once the image is built, push it to your container registry. You will need it when customizing the NGINX ingress controller deployment to the AKS cluster.
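
For example, assuming you are building from the NGINX Ingress Controller source tree with ngx_ziti_module.so copied into the build context, the build and push might look roughly like this (the registry name and tag are illustrative, and the project's Makefile may drive the build with additional arguments):

# build the Debian-based image; the Dockerfile change above copies the Ziti module into it
docker build -f build/Dockerfile --build-arg BUILD_OS=debian \
  -t docker.io/<your-registry>/ziti-nginx-ingress:3.0.2 .

# push it so the AKS deployment can pull it
docker push docker.io/<your-registry>/ziti-nginx-ingress:3.0.2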

note

If you don't have time to build your own, you can use our test image, based on Debian 11 and NGINX v1.23.3:

set {
  name  = "controller.image.repository"
  value = "docker.io/elblag91/ziti-nginx-ingress"
}

Deploy AKS Cluster

Deploy the AKS infrastructure in Azure using Terraform. The Terraform code is located in the OpenZiti Nginx Ingress Terraform Repo. The following resources will be created when the plan is applied.

  • Virtual Network
  • AKS Private Cluster with Kubenet CNI and Nginx Ingress Controller

Set a few environment variables for Azure authentication and authorization.

Env Vars

export ARM_SUBSCRIPTION_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export ARM_CLIENT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export ARM_CLIENT_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export ARM_TENANT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

Run terraform plan.

Steps
git clone https://github.com/dariuszSki/openziti-nginx-ingress-terraform.git
cd openziti-nginx-ingress-terraform/tf-provider
terraform init
terraform plan -var include_aks_nginx=true -out aks
terraform apply "aks"

Once the apply completes, grab cluster_public_fqdn from the Terraform outputs, as shown in the example below.

cluster_name = "akssandeastus"
cluster_private_fqdn = ""
cluster_public_fqdn = "akssand-2ift1yqr.hcp.eastus.azmk8s.io"
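
If you need the value again later, terraform output reads it back from the state:

# print a single output value
terraform output cluster_public_fqdn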

Configuration Code Snippets

If you are using your own deployment method, here are some configuration details from the Helm chart that need to be passed to the NGINX ingress controller deployment.

  • Ziti Nginx Module Identity
nginx_ziti_identity = "${file("./server-nginx.json")}"

resource "kubernetes_secret" "ziti-identity" {
  metadata {
    name = "nginx-ziti-identity"
  }
  data = {
    "nginx-ziti-identity" = var.nginx_ziti_identity
  }
  type = "Opaque"
}
  • Image Reference
set {
  name  = "controller.image.repository"
  value = "docker.io/elblag91/ziti-nginx-ingress"
}

set {
  name  = "controller.image.tag"
  value = "3.0.2"
}
  • Configuration File - Main section block
    info

    Services are commented out until they are created. Then, terraform plan can be re-run to enable them.


controller:
  service:
    create: false
  config:
    entries:
      main-snippets: |
        load_module modules/ngx_ziti_module.so;

        thread_pool ngx_ziti_tp threads=32 max_queue=65536;

        #ziti identity1 {
        #    identity_file /var/run/secrets/openziti.io/${kubernetes_secret.ziti-identity.metadata[0].name};
        #
        #    bind k8s-api {
        #        upstream kubernetes.default:443;
        #    }
        #}
  • Volumes section and path to the secrets; the openziti.io folder was added.
volumes:
  - name: "ziti-nginx-files"
    projected:
      defaultMode: 420
      sources:
        - secret:
            name: ${kubernetes_secret.ziti-identity.metadata[0].name}
            items:
              - key: ${kubernetes_secret.ziti-identity.metadata[0].name}
                path: ${kubernetes_secret.ziti-identity.metadata[0].name}
volumeMounts:
  - mountPath: /var/run/secrets/openziti.io
    name: ziti-nginx-files
    readOnly: true


Service Configurations

  • Create configs
ziti edge create config k8s-api-intercept.v1 intercept.v1 "{\"protocols\": [\"tcp\"], \"addresses\": [\"akssand-2ift1yqr.hcp.eastus.azmk8s.io\"],\"portRanges\": [{\"low\": 443,\"high\": 443}]}"
  • Create Service
ziti edge create service k8s-api --configs "k8s-api-intercept.v1" --role-attributes "service-nginx"

  • Create Service Bind Policy
ziti edge create service-policy k8s-api-bind Bind --semantic "AnyOf" --identity-roles "#servers" --service-roles "#service-nginx"

  • Create Service Dial Policy
ziti edge create service-policy k8s-api-dial Dial --semantic "AnyOf" --identity-roles "#clients" --service-roles "#service-nginx"
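
As a quick sanity check before wiring the service into the ingress controller, you can list the objects back with the ziti CLI:

# confirm the config, service, and bind/dial policies exist
ziti edge list configs
ziti edge list services
ziti edge list service-policies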

Enable Ziti Service in Ingress Controller

Uncomment the ziti identity1 block in resource.helm_release.nginx-ingress:

config:
  entries:
    main-snippets: |
      load_module modules/ngx_ziti_module.so;

      thread_pool ngx_ziti_tp threads=32 max_queue=65536;

      ziti identity1 {
          identity_file /var/run/secrets/openziti.io/${kubernetes_secret.ziti-identity.metadata[0].name};

          bind k8s-api {
              upstream kubernetes.default:443;
          }
      }
caution

You need to disable ZDE (Ziti Desktop Edge) for this network before the next step, so that the NGINX updates are not intercepted while the service is not yet ready.

  • Re-run terraform
terraform plan  -var include_aks_nginx=true  -out aks
terraform apply "aks"

Let's test our service

tip

If Terraform was run, the kube config file was created in the Terraform root directory. You can also use the Azure CLI to download the kube config.

# Configure your local kube configuration file using the Azure CLI
az login # if not already logged in
# Windows WSL
export RG_NAME='resource group name'
export ARM_SUBSCRIPTION_ID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
az aks get-credentials --resource-group $RG_NAME --name {cluster_name} --subscription $ARM_SUBSCRIPTION_ID
  • Check context in the kubectl config file
kubectl config get-contexts

Expected Output

CURRENT   NAME            CLUSTER         AUTHINFO                                            NAMESPACE
*         akssandeastus   akssandeastus   clusterUser_nginx_module_rg_eastus_akssandeastus
  • Let's check the status of nodes in the cluster.
kubectl get nodes

Expected Output

NAME                                STATUS   ROLES   AGE    VERSION
aks-agentpool-20887740-vmss000000   Ready    agent   151m   v1.24.9
aks-agentpool-20887740-vmss000001   Ready    agent   151m   v1.24.9
  • List cluster info
kubectl cluster-info

Expected Output

Kubernetes control plane is running at https://akssand-2ift1yqr.hcp.eastus.azmk8s.io:443
CoreDNS is running at https://akssand-2ift1yqr.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://akssand-2ift1yqr.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  • List pods
kubectl get pods --all-namespaces

Expected Output

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
default       nginx-ingress-nginx-ingress-7ffd564557-ngt69    1/1     Running   0          134m
kube-system   azure-ip-masq-agent-9scx6                       1/1     Running   0          166m
kube-system   azure-ip-masq-agent-xps5g                       1/1     Running   0          166m
kube-system   cloud-node-manager-8q7fk                        1/1     Running   0          166m
kube-system   cloud-node-manager-czg95                        1/1     Running   0          166m
kube-system   coredns-59b6bf8b4f-8t5kj                        1/1     Running   0          165m
kube-system   coredns-59b6bf8b4f-rc6b8                        1/1     Running   0          167m
kube-system   coredns-autoscaler-5655d66f64-pbjhr             1/1     Running   0          167m
kube-system   csi-azuredisk-node-2mp98                        3/3     Running   0          166m
kube-system   csi-azuredisk-node-tq6x6                        3/3     Running   0          166m
kube-system   csi-azurefile-node-9q4f5                        3/3     Running   0          166m
kube-system   csi-azurefile-node-bzr4z                        3/3     Running   0          166m
kube-system   konnectivity-agent-86cdc66d6b-tlhkw             1/1     Running   0          128m
kube-system   konnectivity-agent-86cdc66d6b-ttmq6             1/1     Running   0          128m
kube-system   kube-proxy-q48qv                                1/1     Running   0          166m
kube-system   kube-proxy-t2dhr                                1/1     Running   0          166m
kube-system   metrics-server-8655f897d8-mr2bq                 2/2     Running   0          165m
kube-system   metrics-server-8655f897d8-tnpzt                 2/2     Running   0          165m
  • List services
kubectl get services --all-namespaces

Expected Output

NAMESPACE     NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default       kubernetes       ClusterIP   10.0.0.1       <none>        443/TCP         169m
kube-system   kube-dns         ClusterIP   10.0.0.10      <none>        53/UDP,53/TCP   169m
kube-system   metrics-server   ClusterIP   10.0.218.224   <none>        443/TCP         169m
note

At this point, public access is still available even though kubectl API queries are routed through Ziti. You can disable the ZDE client and test that.

Block IPs to API Server

Pass the following variable to allow only the 192.168.1.1/32 source IP, which effectively disables public access.

terraform plan -var include_aks_nginx=true -var 'authorized_source_ip_list=["192.168.1.1/32"]' -out aks
terraform apply "aks"

Retest with ZDE enabled and disabled for this network.
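
A simple way to retest is to repeat a kubectl query in both states: with ZDE disabled the request should now time out against the blocked public endpoint, while with ZDE enabled it is intercepted and carried over the k8s-api Ziti service (the timeout value below is arbitrary):

# run once with ZDE disabled (expect a timeout) and once with ZDE enabled (expect node output)
kubectl get nodes --request-timeout=15s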

Clean up

note

Run Terraform to open the AKS API back up to the public before deleting the AKS resources, so you don't get locked out.

terraform plan -var include_aks_nginx=true -var 'authorized_source_ip_list=["0.0.0.0/0"]' -out aks
terraform apply "aks"

Delete all resources

terraform plan --destroy -var include_aks_nginx=true -out aks
terraform apply "aks"