
· 13 min read
Clint Dovholuk

The previous post showed how to use a zero trust overlay like Ziti for transferring files by zitifying scp. Next up in the list of zitifications is kubectl. Kubernetes is a container orchestration system: its purpose is to deploy, scale, and manage containerized workloads. Containers are self-contained, pre-built images of software, generally with a singular purpose. Developers like containers for many reasons, a major one being that they simplify deploying the solutions they build. This is where Kubernetes comes into focus.

In this article we'll use a cloud provider to create a Kubernetes cluster. I'm using Oracle OKE here, but there are numerous Kubernetes providers and any of them will work; just keep in mind that the commands I'm running are Oracle specific. Once the cluster is created we will access it three ways:

  1. Via the public Kubernetes API secured via mTLS. This is the default, out-of-the-box mechanism provided by Kubernetes.
  2. Via a tunneling app. I run Windows, so I'll use the Ziti Desktop Edge for Windows.
  3. Via a zitified kubectl. Here's where we'll get to see the power of a truly zitified application. We'll be able to access our cluster extremely securely using the Ziti overlay network without installing an additional agent. Once access to the cluster comes entirely from the Ziti Network, we will be able to turn public access to the Kubernetes management API off entirely!

About Kubernetes

If you are not already familiar with Kubernetes, it's probably best to stop reading and learn a little about it first. Though this article only expects you to understand the most rudimentary commands, it won't teach you enough about Kubernetes to understand the whats and whys. Lots of documentation on this topic already exists and is just a search away in your search engine of choice.

Kubernetes itself is not a container engine, it's an orchestrator. This means that Kubernetes knows how to interface with container engines to perform deployments and manage workloads on behalf of operators, giving people a common abstraction for that management and deployment. Interacting with the Kubernetes API is made easy by the command-line tool: kubectl.

kubectl provides numerous commands and utilities to interact with your Kubernetes cluster. It does this by making REST requests to a well-known endpoint. This endpoint is a high-value target since it is the entry point to the cluster. Plenty of blogs already address how to secure this endpoint, but in this post we'll go further by removing the Kubernetes control plane from the internet entirely. Then we'll go one step further still by replacing the existing kubectl command with a zero-trust implementation leveraging the Ziti Golang SDK.
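You can see this REST behavior for yourself with nothing more than kubectl. The sketch below assumes you already have a working cluster and kubeconfig; the -v6 flag prints the underlying HTTP requests, and get --raw fetches an API path through kubectl's authenticated client.

# show the REST calls kubectl makes against the API server
kubectl get pods -v6
# fetch an API path directly, using the same credentials kubectl uses
kubectl get --raw /api/v1/namespaces/default/pods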

If you'd prefer to watch a video that goes over the same content contained in the rest of this article you can go ahead and click here to watch.

Secure Kubernetes


Setup

Below is an overview of the Ziti Network I created for this article. On the left you can see that the client, my computer, runs Windows 10. Inside Windows 10 I run Linux and bash using Ubuntu via the Windows Subsystem for Linux (WSL). If you run Windows and don't have WSL installed, I'd encourage you to install and learn it! In my bash shell I have downloaded a Linux build of kubectl with the Ziti Golang SDK compiled into it. You can grab it from this link if you like, or go check out the code on GitHub and build it yourself! :)

Solution Overview

private-kubernetes.svg

Basic Ziti Setup

To accomplish our stated goals, we will need not only an existing Ziti Network but we'll also have to configure that network accordingly. Here's a list of the components necessary to deliver Kubernetes with our zero-trust network:

  1. A configuration for the Bind side of the service. This informs the identity within Kubernetes where to send traffic and how.
  2. A configuration for the Dial side of the service. This is strictly only necessary for tunneling apps (in this example, the Ziti Desktop Edge for Windows) and specifies what host and port will be intercepted on the machine running the stock kubectl.
  3. The service itself, which ties the configurations mentioned above together.
  4. A Bind service-policy which specifies which identities are allowed to act as a "host" for the service (meaning an identity you send traffic to, one which knows where and how to offload that traffic). In our example this will be the ziti-edge-tunnel running in a Kubernetes pod.
  5. A Dial service-policy which specifies the identities allowed to access the service. This will be the identity using kubectl.
  6. Two identities - one for the Bind side of the service (deployed within the Kubernetes cluster) and one for the Dial, or client, side.

Here are some example commands using the ziti CLI which illustrate how to create these services. A few things are worth mentioning. I'm setting a variable to make my configuration easier; I reuse these code blocks a lot, and extracting some variables makes it easy to delete and recreate services. First I set the service_name variable. I use this variable in the names of all the Ziti objects I create, just to make things clearer if I have to look back at my configuration again.

Since I'm going to be accessing the Kubernetes API I deployed using the Oracle cloud, I chose k8s.oci as my service name. When deployed by a cloud provider, the Kubernetes API is generated or updated with numerous SANs and IP addresses I can choose from to represent the Dial side which will be intercepted by the Ziti Desktop Edge for Windows. The Oracle cloud console informs me that the private IP 10.0.0.6 was assigned to my cluster when I click the 'Access Cluster' button, which is why I chose that value below. I could have chosen any of the DNS names provided by OKE. There are at least five to choose from, all visible as SANs on the cert the server returns: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local. I chose the IP since it's pretty obviously an internal IP and not on my local network. Also worth pointing out is that I'm mapping the port as well, changing it from the port the server provides, 6443, to the common HTTPS port of 443 for the local intercept. With the zitified kubectl we don't even need these intercepts, but we'll keep them here so that we can use the unmodified kubectl as well. Finally, these commands are all executed inside a bash shell since I'm using WSL.

Example Ziti CLI commands

# the name of the service
service_name=k8s.oci
# the name of the identity you'd like to see on the kubectl client
the_user_identity="${service_name}".client
# the name of the identity deployed into the kubernetes cluster
the_kubernetes_identity="${service_name}".private

ziti edge create config "${service_name}"-host.v1 host.v1 \
'{"protocol":"tcp", "address":"10.0.0.6","port":6443 }'

ziti edge create config "${service_name}"-client-config intercept.v1 \
'{"protocols":["tcp"],"addresses":["10.0.0.6","kubernetes"], "portRanges":[{"low":443, "high":443}]}'

ziti edge create service \
"${service_name}" \
--configs "${service_name}"-client-config,"${service_name}"-host.v1

ziti edge create service-policy "${service_name}"-binding Bind \
--service-roles '@'"${service_name}" \
--identity-roles '#'"${service_name}"'ServerEndpoints'

ziti edge create service-policy "${service_name}"-dialing Dial \
--service-roles '@'"${service_name}" \
--identity-roles '#'"${service_name}"'ClientEndpoints'

ziti edge create identity device "${the_kubernetes_identity}" \
-a "${service_name}"ServerEndpoints \
-o "${the_kubernetes_identity}".jwt

ziti edge create identity device "${the_user_identity}" \
-a "${service_name}"ClientEndpoints \
-o "${the_user_identity}".jwt

Kubernetes Config Files

Once we have established the pieces of the Ziti Network, we'll want to get the Kubernetes config files from OKE so that we can test access and make sure the cluster works. Oracle provides a CLI, oci, which makes it pretty easy to get those config files. As of this writing, the guide from Oracle is here. Once oci is installed and configured, the Oracle cloud gives you easy commands to run which will generate two files. One file is for accessing the Kubernetes API through the public endpoint; the other is for private access. We want both, since we're on a journey here from "public API endpoint", to tunneling-app-based access, to the final stage of app-embedded zero trust directly in kubeztl.

Getting the Kubernetes Config Files

Notice that we are changing the file location output by these commands so they are written as two separate Kubernetes config files. If you prefer to merge them into one big config file and change contexts, feel free (a quick sketch of that follows the commands below). I left them as separate files here because it provides a very clear separation as to which config is being used or modified.

# Get this value directly from Oracle
oci_cluster_id="put-your-cluster-id-here"

oci ce cluster create-kubeconfig \
--cluster-id ${oci_cluster_id} \
--file /tmp/oci/config.oci.public \
--region us-ashburn-1 \
--token-version 2.0.0 \
--kube-endpoint PUBLIC_ENDPOINT
chmod 600 /tmp/oci/config.oci.public

oci ce cluster create-kubeconfig \
--cluster-id ${oci_cluster_id} \
--file /tmp/oci/config.oci.private \
--region us-ashburn-1 \
--token-version 2.0.0 \
--kube-endpoint PRIVATE_ENDPOINT
chmod 600 /tmp/oci/config.oci.private
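If you do decide to merge the two files as mentioned above, kubectl can do the flattening for you. A quick sketch, assuming the two files generated above:

# merge both generated kubeconfigs into a single file and inspect the contexts
KUBECONFIG=/tmp/oci/config.oci.public:/tmp/oci/config.oci.private \
  kubectl config view --flatten > /tmp/oci/config.oci.merged
KUBECONFIG=/tmp/oci/config.oci.merged kubectl config get-contexts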

Connecting the Pieces

At this point we should have all the pieces in place so that we can start putting them together to test the overall solution. In this section we'll access our public Kubernetes API to make sure it works. Then we'll install Ziti into the Kubernetes cluster and verify private access works. Finally we'll disable public access entirely and use the zitified kubeztl command to access the cluster with a true, app-embedded zero-trust binary.

Testing the Public API

This step is very straightforward for anyone who's used Kubernetes before. Issue the following commands, making sure the path is correct for your public Kubernetes config file, and verify Kubernetes works as expected.

export KUBECONFIG=/tmp/oci/config.oci.public
kubectl get pods -v6 --request-timeout='5s'
I1019 13:57:31.910962 3211 loader.go:372] Config loaded from file: /tmp/oci/config.oci.public
I1019 13:57:33.676047 3211 round_trippers.go:454] GET https://150.230.150.0:6443/api/v1/namespaces/default/pods?limit=500&timeout=5s 200 OK in 1752 milliseconds
NAME READY STATUS RESTARTS AGE

If your output looks similar to the above (with or without the pods you expect to see) then great! That means your Kubernetes cluster is indeed up and running. Let's move on!

Deploying Ziti to Kubernetes

Next we'll grab a few lines from the excellent guide NetFoundry put out for integrating with Kubernetes. There's a section in that guide for installing Ziti with Helm. This comes down to just these steps:

  1. install the helm CLI tool using this guide
  2. add the NetFoundry helm repo: helm repo add netfoundry https://netfoundry.github.io/charts/
  3. locate the jwt file for the Kubernetes identity. If you followed the steps above the file will be named: "${the_kubernetes_identity}".jwt (make sure you replace the variable with the correct value)
  4. use the jwt to add Ziti: helm install ziti-host netfoundry/ziti-host --set-file enrollmentToken="${the_kubernetes_identity}".jwt (again, make sure you replace the variable). If you need to, create a persistent volume such as the one below (a kubectl apply example follows the manifest); the ziti pod requires storage to store a secret.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ziti-host-pv
  labels:
    type: local
spec:
  storageClassName: oci
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/netfoundry"

Add/Enroll the Client Identity

Now consume the one-time token (the jwt file) to enroll and create a client-side identity using the Ziti Desktop Edge for Windows (or macOS, or via ziti-edge-tunnel if you prefer). Once you can see the identity in your tunneling app, you should be able to use the private Kubernetes config file to access the exact same cluster. Remember though, we mapped the port on the client side to 443, so you'll need to update that config file and change 6443 to 443.
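One quick way to make that change is an in-place edit of the private config. This is just a sketch; glance at the file first to make sure the only 6443 in it is the API server port you intend to change.

# swap the API server port from 6443 to the intercepted 443
sed -i 's/:6443/:443/' /tmp/oci/config.oci.private
grep 'server:' /tmp/oci/config.oci.private

With the port updated, running get pods shows the ziti-host pod deployed: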

export KUBECONFIG=/tmp/oci/config.oci.private
kubectl get pods
NAME READY STATUS RESTARTS AGE
ziti-host-976b84c66-kr4bc 1/1 Running 0 90m

The Big Finale - Zitified kubectl

If you have made it this far, you've seen us access the Kubernetes API via the public IP. We've even accessed it via the private IP (which btw - is pretty cool in my opinion!). Now we're going to download the zitified kubectl command, turn off the public IP and even turn off the locally running tunneling app and still access the API!

  1. Disable the cluster's public IP address in OKE (go to the cluster in Oracle Cloud, click Edit and remove the public IP and click save)

  2. Turn off the Ziti Desktop Edge for Windows

  3. Build kubeztl from the GitHub repo

  4. Use kubeztl to get pods!

    ./kubeztl -zConfig ./id.json -service k8s.oci get pods
    NAME READY STATUS RESTARTS AGE
    ziti-host-976b84c66-kr4bc 1/1 Running 0 101m

Modifying KUBECONFIG

The kubeztl command has also been modified to allow you to add the service name and identity config file directly into the kubeconfig itself. This is convenient since you will not need to supply the Ziti identity file, nor specify which service to use, on every command. Modifying the file is straightforward. Open the config file, find the context listed under the contexts root, and add two rows as shown here.

contexts:
- context:
    cluster: cluster-cjw4arxuolq
    user: user-cjw4arxuolq
    zConfig: /tmp/oci/k8s.id.json
    service: k8s.oci

Once done, you can simply use the context the same way you always have: kubeztl get pods!

./kubeztl get pods
NAME READY STATUS RESTARTS AGE
ziti-host-976b84c66-kr4bc 1/1 Running 0 114m

Conclusion

We've seen in this post how you can not only secure your Kubernetes API with the normal Kubernetes mechanisms, but also take it off the internet ENTIRELY. There's no need to deploy and maintain a special bastion node. With a secure, zero-trust overlay in place, you can safely and securely access your Kubernetes API without the fear of that public, high-value API getting attacked.

But Wait, There's More

Once you've deployed Ziti into the Kubernetes cluster you're not done there. Now you can also use Ziti to span cloud networks. You can use it to easily link private data centers or other private Kubernetes clusters all into one secure, zero-trust overlay network! Use Ziti to expose workloads that are TRULY private! In future articles we might explore how we can bring Ziti to bear on these topics, stay tuned!


· 13 min read

This is part one of a three-part article. This article provides the necessary background and rationale for the series. The next article will be a detailed explanation of the actual steps necessary to implement the solution. In the final article, we will explore what we have built and understand what makes it interesting.


The Problem With Pulling

Prometheus is a server which wants to reach out and pull data from "scrape targets". It will generally do this using http requests. One problem with this design is that these targets are often inaccessible, hidden from Prometheus behind a firewall.

If not hidden, it means some port was exposed on some network, thereby giving Prometheus the ability to pull the data it needs. Exposing that port on a "trusted" network is a possible attack vector for bad actors. Exposing that port on the open internet (as is often the case) is an open invitation for attack. It's much better to keep these servers totally dark to all networks.

OpenZiti solves this problem of reach elegantly and natively while also keeping your service dark to all networks. This gives an OpenZiti-enabled Prometheus the ability to literally scrape any target, anywhere. As long as the target participates in an OpenZiti overlay network, and as long as the proper policies are in place allowing the traffic to flow, Prometheus will be able to reach out and pull the data it needs from anything, anywhere.

It doesn't matter if the target is in some private cloud data center, some private data center protected by a corporate firewall, or heck, even running inside my local Docker environment! As long as the target participates in that OpenZiti overlay network, Prometheus can scrape it! That sort of reach is impossible with classic networks.

Prometheus

Prometheus is an incredibly popular CNCF project which has run the gauntlet of project progressions to emerge as a "graduated" CNCF project. If you're familiar with Prometheus, you probably know the main reasons people choose to deploy it: metrics collection and visualization, and alerting.

Prometheus is also tremendously flexible. It has numerous available plugins and supports integrating with a wide number of systems. According to this CNCF survey, Prometheus leads the pack when it comes to the project people go to for observability. Its popularity is probably because Prometheus is a CNCF project and is often considered the "default" solution to deploy on another wildly popular CNCF project called Kubernetes. One interesting aspect of Prometheus is that it generally favors a poll-based approach to metrics collection instead of a push-based model.

Poll-based?

I don't know about you, but historically when I've thought about a metrics collection agent, I tend to think of an agent that reads a log file or some library that pushes rows into a giant data lake in the cloud. I don't generally think about a solution that implements poll-based metrics. Often, this is because the target of a poll-based collecting agent will probably be behind a firewall.


As you would expect, firewalls make it exceptionally difficult to implement a poll-based solution, since firewalls make a habit of preventing external actors from accessing random HTTP servers behind them! After all, that is their primary function!

The Prometheus project makes strong arguments explaining the benefits of a poll-based solution. They also realize that firewalls are important in creating a safe network and understand the challenges firewalls create for such a solution. To deal with these situations, the project also provides a PushGateway. This allows solutions to push their data to a location outbound of the firewall. Pushing data out of the firewall allows metrics and alerting to function without the worry (and maintenance heartache) of an open, inbound firewall hole.
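As a small illustration of that push model, a job behind a firewall only needs an outbound HTTP request to publish a metric. A sketch, with a hypothetical Pushgateway address:

# push a single metric value for job "demo" to a Pushgateway
echo "some_metric 42" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/demo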

Acceptable Risk

Prometheus is often deployed into Kubernetes clusters, but it can be deployed anywhere. Taking the operational differences out of the equation, there is little difference between deploying Prometheus in a Kubernetes cluster and deploying it in one's data center. Once deployed, the needs are the same. Prometheus will need to be authorized to reach out and scrape the targets it needs to scrape. All too often, this is done with relatively open network permissions. Even though we all know it's not the most secure way of authorizing Prometheus, this is often considered "safe enough" because we deployed Prometheus into a zone considered "safe". Managing firewall rules for all the computers Prometheus needs access to feels like an impossible feat. There are just too many.

To add to our acceptable risk, we will need to be able to access the Prometheus server in some way. We'll want to get at the UI, see the charts, graphs and data it provides, and use the server to its fullest. For that, we'll of course need a hole in our firewall, or in the case of Kubernetes we will probably deploy some form of Kubernetes ingress controller to provide users access to the service.

What we need are better and richer controls over our network. We need a better way of authorizing Prometheus without the hassle of maintaining firewall rules on individual machines. We also need a way to do this across multiple clouds, multiple Kubernetes clusters and multiple data centers. Let's see how OpenZiti can solve this problem while also enhancing our overall security.


OpenZiti

The OpenZiti project allows us to solve all the problems outlined above. It is a fully-featured, zero trust overlay network and enables zero trust networking principles to be applied anywhere. This includes bringing those zero trust principles directly into your application through one of the many SDKs provided by the project. Let's look at an example and see what a setup might look like before and after applying OpenZiti.

Overview

Let's imagine that we have already deployed a solution using two Kubernetes clusters, ClusterA and ClusterB. It doesn't matter where the clusters are deployed. We are trying to illustrate a real-world situation where we have two separate Kubernetes clusters that we want to manage. The clusters could be deployed in the same cloud provider, in a private data center, in different cloud providers, it really does not matter. What is important, is that these clusters are available over the network. To enable access to the workloads inside the clusters, some form of Kubernetes ingress controller will be required on both clusters. In our example, we will have a workload deployed which exposes a prometheus scrape target we want Prometheus to monitor.

Figure 1 - Before OpenZiti

Before OpenZiti

Taking a Closer Look

Looking at the diagram above with a discerning eye towards security, there are some immediate observations one can make.

Listening Ports

One observation we have already accepted from the overview is that these clusters must be exposed via the internet. At first that doesn't seem like a big deal; we expose workloads like this to the internet all the time. It's a perfectly normal action, likely done every day somewhere in the world. It's so common we almost don't think about it until the time comes when we need to. The result is an exposed port, listening somewhere in the world. There might be a firewall with complex rules to protect this port, but it's just as likely that this isn't the case. People might need to access the resources inside these clusters from anywhere.

Kubernetes API Exposed

Another observation is that the Kubernetes API is fully exposed to the internet. This API is a very high-value target and should be secured as strongly as possible. That probably means yet another complex firewall rule to maintain.

"Trusted" Intra-cluster Traffic

The final point to note is that the traffic within the cluster is considered safe. As mentioned above, the Prometheus server needs to be able to scrape the target workloads, so that traffic necessarily has to be treated as safe. Also notice that the pod for Prometheus contains a container named "configmap-reload" which is used to trigger a webhook on the Prometheus server when the Kubernetes config map changes. This is necessary when changing the Prometheus config, adding new scrape configs, etc.
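For reference, in a stock Prometheus deployment that webhook is typically the lifecycle reload endpoint. A sketch of triggering it by hand, assuming the server was started with --web.enable-lifecycle and is reachable at localhost:9090:

# ask a running Prometheus server to re-read its configuration
curl -X POST http://localhost:9090/-/reload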




Applying Zero Trust Networking Principles Using OpenZiti

Now that we understand the basic setup and understand some possible problems, let's see if OpenZiti can address one or more of these issues. When applying OpenZiti, the goal will be to strengthen our security posture for each of the above items.

Figure 2 - After OpenZiti

after

Taking a Closer Look After OpenZiti

No External Listening Ports

With a classic deployment as shown in the initial design, we know there will be ports exposed to the open internet. In an ideal scenario, there would be absolutely no ports exposed on the open internet nor in the "trusted networking zone". It's immediately obvious after applying a solution using OpenZiti that those listening ports exposed by the Kubernetes ingress controller are no longer deployed and thus are no longer exposed to the internet. That's one attack vector eliminated. OpenZiti will initiate outbound mTLS connections among all the constituent pieces of the overlay network. This means connections will begin inside the trusted network zone and only create outbound links. Once established, those connections can be used to safely transfer data between any participating edge node.

This capability really can't be emphasized enough. With OpenZiti and with applications that use an OpenZiti SDK, such as the ones shown, there are no open ports to attack. This network is nearly impervious to the classical "land and expand" technique so many bad actors look to exploit.

Kubernetes API no Longer Exposed

Another significant benefit provided by OpenZiti starts to come into focus. By having access to our clusters provided through OpenZiti, we can stop exposing the Kubernetes APIs for both clusters to the open internet. Prometheus will still be able to monitor each Kubernetes cluster through the private Kubernetes network. Access to Prometheus will be provided via OpenZiti instead of a Kubernetes ingress controller. Later, we can use the built-in capability Prometheus already provides to federate information from the clusters to a centralized, zitified Prometheus server.

Once the API is no longer exposed to the open internet, we can turn to [zitified](/zitification/index.md) tools to maintain our Kubernetes cluster. The OpenZiti project provides zitified versions of kubectl (kubeztl) and helm (helmz). Each of these tools has an OpenZiti SDK embedded inside it, allowing both to connect to the private Kubernetes API over the OpenZiti overlay network. To use them, you will need a strong OpenZiti identity and authorization to access the service. Also note that we're not replacing the existing security constraints the Kubernetes ecosystem already provides; you can (and should) still secure your Kubernetes clusters using namespaces, roles, etc.

We'll explore kubeztl and helmz in future articles.

"Trusted" Intra-cluster Traffic

Lastly, let's turn our eyes toward the traffic running inside the Kubernetes cluster. Pay attention to the lines in orange and the lines in dark blue. Orange lines represent "private" traffic, traffic that needs to traverse the private network space.

At this point we cannot send traffic to the Kubernetes API via the overlay network; the Kubernetes API doesn't have an OpenZiti SDK embedded within it. That means when we deploy Prometheus into ClusterA and ClusterB to monitor the cluster, Prometheus will be forced to connect to a port exposed on the cluster's underlay network. Still, while not ideal, we have greatly improved the overall security posture of the cluster. We're no longer able to access the Kubernetes API without first gaining access to the zero trust overlay network, and accessing the Kubernetes API also requires an identity properly authorized for the service attached to it.

Let's now focus on ClusterA. It contains a Prometheus server that we decided would not listen on the OpenZiti overlay, which means it will need to expose ports to the Kubernetes underlay network. The container inside the Prometheus pod watches for configmap changes; to trigger the config reload, it will be forced to send unauthenticated webhook traffic to the Prometheus server over the underlay network.

Still, accessing this cluster and the listening Prometheus server requires being on the OpenZiti overlay. Also, this Prometheus server does have an OpenZiti SDK built into it, and we deployed the "reflectz" workload with an OpenZiti SDK built in as well. That means the Prometheus server scrapes the "reflectz" workload exclusively over the OpenZiti overlay, and only authorized identities can access that scrape data.

Contrast ClusterA with ClusterB. ClusterB deployed a Prometheus server with an embedded OpenZiti SDK and chose to provide its services exclusively on the OpenZiti overlay. We've also deployed a zitified "reflectz" workload here. Notice how little traffic traverses the Kubernetes cluster underlay network. The only traffic which needs to traverse the cluster's underlay network in ClusterB is the traffic which monitors the Kubernetes API. All other traffic in the cluster is now secured by the OpenZiti overlay network. You will need a strong identity, and you will need to be authorized on the overlay before even being allowed to connect to the target service.

OpenZiti-Enabled Prometheus

We are now coming to the final piece of the puzzle. We have protected both Kubernetes clusters using OpenZiti. Now we want to bring all this data back to a centralized Prometheus server to make it easier on our user base. To do this, we'll again deploy an OpenZiti-enabled Prometheus server. This time we don't care where it is deployed, except that we know we are not deploying it into either of the Kubernetes clusters we are already using. Since the Prometheus servers are all now accessible via the overlay network, we can literally deploy our server anywhere in the world. It could be on a development server, in some other cloud, or in our private data center. Because it's part of the overlay network, it no longer matters where we deploy the server. Wherever it's deployed, all it will need is outbound internet access, a strong identity, and authorization for the services defined in the OpenZiti overlay network. Once that's done, OpenZiti will take care of the rest.

If you have made it this far, you might want to try all this for yourself. The next article goes into the details necessary to implement this solution. When complete, you'll be able to deploy a zitified version of Prometheus and give Prometheus the power to scrape anything from anywhere using OpenZiti.

· 20 min read
Clint Dovholuk

This is part two of a three-part article. This article provides the technical deep dive into the steps necessary to implement the vision outlined in part one. It will be heavy on OpenZiti CLI commands, explaining what we are doing to configure the overlay network and why. In the final article, we will explore what we have built and understand what makes it interesting.


Goals

  • Incredibly easy to deploy Prometheus servers
  • No ports exposed to the internet
  • Prometheus servers can be deployed listening on the overlay, not on the underlay
  • Private Kubernetes API

Zitified Prometheus

As described in the previous article, Prometheus really prefers to reach out and gather metrics from the targets it is monitoring. When the target is behind a firewall, you are left with two choices. You can open a hole in the firewall granting access (a generally bad idea), or you can use a PushGateway. Even if you choose to use the PushGateway, Prometheus will still need to be able to access and pull from the PushGateway, so you'll still need some port open and listening for Prometheus to collect data.

What we really want is to enable Prometheus to scrape data from targets without needing to expose any ports to the internet. It would be even better if we didn't have to expose any ports at all, even to the local "trusted" network. This capability is unique to an OpenZiti-enabled application. You can take an OpenZiti SDK, embed it into your application, and give your app zero trust superpowers! If we embed an OpenZiti SDK into Prometheus, we give Prometheus the superpowers of invisibility and addressability. Embedding an OpenZiti SDK produces a zitified version of Prometheus, and with an OpenZiti-powered Prometheus, no ports need to be open.

The OpenZiti project has done the work to produce an OpenZiti-enabled version of Prometheus. It's also entirely open source. Check it out from the OpenZiti Test Kitchen hosted on GitHub https://github.com/openziti-test-kitchen/prometheus.

Solution Overview

As you'll recall from part 1, we are trying to use Prometheus to monitor workloads in two different Kubernetes clusters. We are going to deploy one cluster which represents a first step toward an OpenZiti solution. It will use a Prometheus server which is OpenZiti-enabled, but which still listens on the underlay network and is available to local devices on an ip:port. This Prometheus server will use OpenZiti to scrape targets which are available anywhere on the OpenZiti overlay network. We'll refer to this as "ClusterA".

We'll also deploy a second OpenZiti-enabled Prometheus server in a totally separate Kubernetes cluster. This Prometheus server will not listen on an ip:port; instead, it will listen exclusively on the OpenZiti overlay. It will have no ports available to attack and will only be accessible via a properly authorized and authenticated OpenZiti client. This will be our "ClusterB".

Finally, we'll stand up a third Prometheus server and use it to federate metrics back to a "central" Prometheus server. This emulates what one might do to provide a central location for humans to go to in order to visualize data or otherwise use the Prometheus server. We won't care where this is deployed; we'll actually deploy it locally and then move it to a private server in AWS just to show how easy that is.

This is what the solution we'll build looks like:


Digging In

Let's get to work and build this solution. We'll need some legwork done first.

[!NOTE] It's going to get deep in this article with CLI commands. You'll see what the OpenZiti objects are that get created and learn why. You might not want to replicate the solution on your own and are instead looking for "the big reveal". If that describes you, just skim this article lightly and get on to part 3. In part 3 we'll explore the deployed solution and see what makes it so interesting and cool.

Prerequisites


  • You have an OpenZiti overlay network available. If not, for this scenario you will want to use "host your own". You'll also want to have the ziti cli tool on your path
  • Two Kubernetes clusters provisioned
  • Necessary tooling installed and available on the path
    • kubectl
    • helm
  • bash/zsh shell - tested in bash and some commands will use variables. If you use another shell, change accordingly
  • a machine with docker installed to run the final Prometheus server on (your local machine is fine)
  • Ziti Desktop Edge installed on the development machine. I use Ziti Desktop Edge for Windows.
  • A temporary folder to house files as we need them: /tmp/prometheus (created in the quick check below)
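A quick sanity check before digging in: create the working folder and confirm the tooling is on your path. A sketch; your version output will differ.

# prepare the temp folder and verify the required tools are available
mkdir -p /tmp/prometheus
ziti version
kubectl version --client
helm version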

ClusterA - Using ziti-host

clusterA

We start with an empty OpenZiti network, and two empty Kubernetes clusters. Let's start by populating ClusterA. We will deploy three pods into this Kubernetes cluster. When done, the Kubernetes cluster will look similar to the image to the right.

  • Pod 1. ziti-host. This pod will provide what is effectively the equivalent of a Kubernetes ingress controller. We'll install this using helm from a NetFoundry provided chart
  • Pod 2. prometheuz. This pod will be our Prometheus server with OpenZiti embedded in it. We won't use OpenZiti to listen on the overlay network. Instead, we will follow a more traditional model of listening on the underlay at a known ip:port combination. We'll install this pod using a chart from the OpenZiti charts repository.
  • Pod 3. reflectz. This pod represents the workload which we want to monitor. This is another chart provided by the OpenZiti chart repository and will also be installed with helm. If you are interested in viewing the source code for this project you can find it on GitHub here

[!NOTE] Running the ziti CLI commands shown below as-is expects you to have the ziti binary on your path. It is also expected that all the commands will run from the same "development" machine with the expected tools available. Reach out on Discourse if you get stuck.

Pod 1 - ziti-host

We will start off deploying Pod 1, ziti-host, to provide access to Kubernetes ClusterA. The ziti-host pod requires a single identity to be provisioned. We will use a shortened name for the cluster and embed that name into the identity, making it easier to understand what identity we provisioned and why, should we ever need to reference these identities later. We'll refer to ClusterA as simply "kubeA". Let's make the identity now. Notice we are also passing the "-a" flag during creation to add a role attribute, kubeA.services, to the identity. This will be used later when setting up policies.

Create the Identity

ziti edge create identity device kubeA.ziti.id -o /tmp/prometheus/kubeA.ziti.id.jwt -a "kubeA.services"

You should see confirmation output such as:

New identity kubeA.ziti.id created with id: BeyyFUZFDR
Enrollment expires at 2022-04-22T01:18:53.402Z

Deploy ziti-host into ClusterA

Once created, we can use helm to install the ziti-host pod. The jwt is a one-time token and will be useless after being consumed by ziti-host. As this is probably your first time running this helm chart, you will need to add the repo first. The commands are idempotent, so running them over and over is of no concern. Run the following:

helm repo add netfoundry https://netfoundry.github.io/charts/
helm repo update
helm install ziti-host netfoundry/ziti-host --set-file enrollmentToken="/tmp/prometheus/kubeA.ziti.id.jwt"

You will see the confirmation output from helm. Now when you look at your Kubernetes cluster with kubectl, you will see a pod deployed:

kubectl get pods
NAME READY STATUS RESTARTS AGE
ziti-host-db55b5c4b-rpc7f 1/1 Running 0 2m40s

Awesome, we have our first deployed pod. It's useless at the moment as we have defined no services, nor authorized any services. Right now there's nothing to connect to, so we can simply move on and install the next pod, reflectz.

Pod 2 - reflectz

The first pod we want to have access to is the reflectz pod. It is a workload we will deploy that does two things. First, it listens on the OpenZiti overlay network for connections; when a connection is made and bytes are sent, the workload will simply return to the caller whatever was sent to it, adding "you sent me: " to the payload. It's not much, but it's a demo after all. The second service provided is a scrape target for Prometheus. There is one metric exposed by reflectz we care about: the total number of connections established to this workload. This pod also needs an identity provisioned, and this time around we will also provision some services. We will also use the ziti CLI to enroll this identity, since this helm chart wants you to provide an enrolled identity as part of the helm command. Let's do all this now.

Create and Enroll the Identity

ziti edge create identity user kubeA.reflect.id -o /tmp/prometheus/kubeA.reflect.id.jwt
ziti edge enroll /tmp/prometheus/kubeA.reflect.id.jwt -o /tmp/prometheus/kubeA.reflect.id.json

Create Configs and Services (including Tunneling-based Access)

The reflectz chart also needs two services to be declared and specified at the time of the helm chart installation. We will want to be able to test the services to ensure they work. To enable testing, we will create two configs of type intercept.v1. This allows identities using tunneling apps to access the services, which is how we'll verify they work. Make the configs and services now.

# create intercept configs for the two services
ziti edge create config kubeA.reflect.svc-intercept.v1 intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeA.reflect.svc.ziti"],"portRanges":[{"low":80, "high":80}]}'
ziti edge create config "kubeA.reflect.svc-intercept.v1.scrape" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeA.reflect.scrape.svc.ziti"], "portRanges":[{"low":80, "high":80}], "dialOptions":{"identity":"kubeA.reflect.id"}}'

# create the two services
ziti edge create service "kubeA.reflect.svc" --configs "kubeA.reflect.svc-intercept.v1" -a "kubeA.reflect.svc.services"
ziti edge create service "kubeA.reflect.scrape.svc" --configs "kubeA.reflect.svc-intercept.v1.scrape"

Authorize the Workload and Clients

Services are not valuable if no identities can use them. The identity used in the helm installation will need to be authorized to bind these services. Tunneling apps will need to be authorized to dial these services, but remember that Prometheus servers will need to be able to dial them too. We will now create service-policies to authorize the tunneling clients and Prometheus scrapes to dial, and the reflectz server to bind, these services.

# create the bind service policies and authorize the reflect id to bind these services
ziti edge create service-policy "kubeA.reflect.svc.bind" Bind \
--service-roles "@kubeA.reflect.svc" --identity-roles "@kubeA.reflect.id"
ziti edge create service-policy "kubeA.reflect.scrape.svc.bind" Bind \
--service-roles "@kubeA.reflect.scrape.svc" --identity-roles "@kubeA.reflect.id"

# create the dial service policies and authorize the reflectz clients to dial these services
ziti edge create service-policy "kubeA.reflect.svc.dial" Dial \
--service-roles "@kubeA.reflect.svc" --identity-roles "#reflectz-clients"
ziti edge create service-policy "kubeA.reflect.svc.dial.scrape" Dial \
--service-roles "@kubeA.reflect.scrape.svc" --identity-roles "#reflectz-clients"

Deploy reflectz

With the identity enrolled, we can now install the helm chart from openziti and install our demonstration workload: reflectz. Notice that to deploy reflectz we need to supply an identity to the workload using --set-file reflectIdentity. This identity will be used to 'Bind' the services the workload exposes. We also need to define the service names we want to allow that identity to bind. We do this using the --set serviceName and --set prometheusServiceName flags.

helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install reflectz openziti-test-kitchen/reflect \
--set-file reflectIdentity="/tmp/prometheus/kubeA.reflect.id.json" \
--set serviceName="kubeA.reflect.svc" \
--set prometheusServiceName="kubeA.reflect.scrape.svc"

After running helm, pod 2 should be up and running. Let's take a look using kubectl

kubectl get pods
NAME READY STATUS RESTARTS AGE
reflectz-775bd45d86-4sjwh 1/1 Running 0 7s
ziti-host-db55b5c4b-rpc7f 1/1 Running 0 4m

Pod 3 - Prometheuz

Overlay Work - Setting Up OpenZiti

Now we have access to the cluster and a workload to monitor; next we want to deploy Prometheus and monitor that workload. Remember that the workload only exposes a scrape target over the OpenZiti overlay. For Prometheus to be able to scrape the workload, even when resident inside the Kubernetes cluster (!), Prometheus will need to be OpenZiti-enabled. That requires a few things: a new identity for Prometheus, authorization for Prometheus to access the workload's scrape target, and configuration telling Prometheus to scrape that workload. When we create this identity we'll assign two attributes. The reflectz-clients attribute gives this identity the ability to dial the two services defined above. The prometheus-clients attribute is currently unused; we'll put it to use later, but we can define it now.

Create and Enroll the Identity

# create and enroll the identity.
ziti edge create identity user kubeA.prometheus.id -o /tmp/prometheus/kubeA.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
ziti edge enroll /tmp/prometheus/kubeA.prometheus.id.jwt -o /tmp/prometheus/kubeA.prometheus.id.json

Create Configs and Services (including Tunneling-based Access)

# create the config and service for the kubeA prometheus server
ziti edge create config "kubeA.prometheus.svc-intercept.v1" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeA.prometheus.svc"],"portRanges":[{"low":80, "high":80}]}'
ziti edge create config "kubeA.prometheus.svc-host.v1" host.v1 \
'{"protocol":"tcp", "address":"prometheuz-prometheus-server","port":80}'
ziti edge create service "kubeA.prometheus.svc" \
--configs "kubeA.prometheus.svc-intercept.v1","kubeA.prometheus.svc-host.v1"

Authorize the Workload and Clients

# grant the prometheus clients the ability to dial the service and the kubeA.prometheus.id the ability to bind
ziti edge create service-policy "kubeA.prometheus.svc.dial" Dial \
--service-roles "@kubeA.prometheus.svc" \
--identity-roles "#prometheus-clients"
ziti edge create service-policy "kubeA.prometheus.svc.bind" Bind \
--service-roles "@kubeA.prometheus.svc" \
--identity-roles "@kubeA.ziti.id"

Deploying Prometheuz

With our services, configs and service-policies in place we are now ready to start our Prometheus server. Remember, this server will not listen on the OpenZiti overlay; it's going to listen exclusively on the underlay. We are still exploring OpenZiti and are not yet comfortable deploying our Prometheus server dark. We'll change this soon, don't worry. For now, we'll imagine that we're still evaluating the tech and chose to deploy it on the underlay, not on the overlay.

Although Prometheus is listening on the underlay, we have deployed our workload listening on the overlay network. It won't be available on the underlay at all. The workload has no listening ports. This means that we'll still need an OpenZiti-enabled Prometheus to access and scrape that workload. To do this we'll use helm, and use a chart provided by the OpenZiti charts repo.

Some interesting things to notice in the helm install command below. We are passing helm two --set parameters which inform the chart that the Prometheus server is not "zitified", meaning it will be accessible via the underlay network. We're also passing one --set-file parameter to tell Prometheus what identity we want stored in the pod (as a secret). This secret will be used when we configure Prometheus to scrape the workload. Go ahead and run this command now, then run kubectl get pods until all the containers are running.

helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install prometheuz openziti-test-kitchen/prometheus \
--set server.ziti.enabled="false" \
--set-file server.scrape.id.contents="/tmp/prometheus/kubeA.prometheus.id.json"

ClusterB - Fully Dark

clusterB

Now that we have deployed our first Kubernetes cluster, it's time to deploy the second one. This time, we are going to keep our entire deployment fully dark! There will be no listening ports, not even local to the Kubernetes cluster itself. To get any traffic to this Prometheus server, you will need a strong identity and be authorized on the OpenZiti overlay. When complete, ClusterB will look like the image to the right.

This time, "Pod1" will be the reflectz workload. Since this is a fully dark deployment, listening entirely on the OpenZiti overlay, we won't need a ziti-host pod. Remember, in ClusterA ziti-host is used to provide internal access to the Kubernetes cluster via the OpenZiti overlay. It's similar in role to an ingress controller, but doesn't require you to expose your workloads to the internet. While that's pretty good we want to go fully dark this time. We'll have no ziti-host. We'll only need to deploy two pods: reflectz and prometheuz.

The good news is that the same commands you ran for ClusterA will mostly be reused for ClusterB. Just beware that wherever you used "kubeA" before, you need to change it to "kubeB". There will be other small changes along the way too; we'll see and explain them below.

Pod1 - reflectz

The reflectz workload we'll deploy for ClusterB will be nearly identical to the ClusterA workload. We will create a service for the actual 'reflect' service and a service for Prometheus to scrape the workload. We'll also need another identity, so we'll create that identity, authorize it to bind the services, and authorize clients to access the workload. Since this process is very similar to what we did for ClusterA, there's not much to explain. Set up ClusterB's reflectz now.

Create the Identity

ziti edge create identity user kubeB.reflect.id -o /tmp/prometheus/kubeB.reflect.id.jwt
ziti edge enroll /tmp/prometheus/kubeB.reflect.id.jwt -o /tmp/prometheus/kubeB.reflect.id.json

Create Configs and Services (including Tunneling-based Access)

# create intercept configs for the two services
ziti edge create config kubeB.reflect.svc-intercept.v1 intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeB.reflect.svc.ziti"],"portRanges":[{"low":80, "high":80}]}'
ziti edge create config "kubeB.reflect.svc-intercept.v1.scrape" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeB.reflect.scrape.svc.ziti"], "portRanges":[{"low":80, "high":80}], "dialOptions":{"identity":"kubeB.reflect.id"}}'

# create the two services
ziti edge create service "kubeB.reflect.svc" --configs "kubeB.reflect.svc-intercept.v1" -a "kubeB.reflect.svc.services"
ziti edge create service "kubeB.reflect.scrape.svc" --configs "kubeB.reflect.svc-intercept.v1.scrape"

Authorize the Workload to Bind the Services

# create the bind service policies and authorize the reflect id to bind these services
ziti edge create service-policy "kubeB.reflect.svc.bind" Bind \
--service-roles "@kubeB.reflect.svc" --identity-roles "@kubeB.reflect.id"
ziti edge create service-policy "kubeB.reflect.scrape.svc.bind" Bind \
--service-roles "@kubeB.reflect.scrape.svc" --identity-roles "@kubeB.reflect.id"

Authorize Clients to Access the Services

# create the dial service policies and authorize the reflectz clients to dial these services
ziti edge create service-policy "kubeB.reflect.svc.dial" Dial \
--service-roles "@kubeB.reflect.svc" --identity-roles "#reflectz-clients"
ziti edge create service-policy "kubeB.reflect.svc.dial.scrape" Dial \
--service-roles "@kubeB.reflect.scrape.svc" --identity-roles "#reflectz-clients"

Deploy reflectz

helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install reflectz openziti-test-kitchen/reflect \
--set-file reflectIdentity="/tmp/prometheus/kubeB.reflect.id.json" \
--set serviceName="kubeB.reflect.svc" \
--set prometheusServiceName="kubeB.reflect.scrape.svc"

Pod 2 - Prometheuz

For ClusterB we want Prometheuz to be totally dark. It will listen exclusively on the OpenZiti overlay and there will be no listening ports on the underlay. We will need another identity, of course, and most of the configuration and commands appear the same on the surface, with very subtle differences. We'll explore those differences as we go. In this section we'll make an identity, one config (a difference from the ClusterA install), a service, and two service-policies. Let's get to it.

Create the Identity

ziti edge create identity user kubeB.prometheus.id -o /tmp/prometheus/kubeB.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
ziti edge enroll /tmp/prometheus/kubeB.prometheus.id.jwt -o /tmp/prometheus/kubeB.prometheus.id.json

Create One Config and Service

Here's a difference from ClusterA. Since Prometheus is going to listen on the OpenZiti overlay, we are not installing ziti-host. That means we don't need to create a host.v1 config. A host.v1 config is necessary for services which have a 'Bind' configuration and are being bound by a tunneling application. We're not doing that here; Prometheus itself will 'Bind' this service, so we don't need a host.v1 config.

# create the config and service for the kubeB prometheus server
ziti edge create config "kubeB.prometheus.svc-intercept.v1" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeB.prometheus.svc"],"portRanges":[{"low":80, "high":80}], "dialOptions": {"identity":"kubeB.prometheus.id"}}'
# no need for the host.v1 config
ziti edge create service "kubeB.prometheus.svc" \
--configs "kubeB.prometheus.svc-intercept.v1"

Authorize Clients and Prometheus to Bind the Service

At first, these commands appear identical to the ones we ran for ClusterA, other than the obvious changes from "kubeA" to "kubeB". You need to look closely to notice the difference: pay attention to the --identity-roles supplied for the bind policy below. With ClusterA, we did not have Prometheus listen on the overlay; we allowed it to listen on the underlay. That meant we needed to deploy ziti-host into that cluster to provide access to the service, and that meant the service had to be bound by the ziti-host identity.

Here we are flipping that script. We are allowing Prometheus to bind this service! That means we'll need to authorize the kubeB.prometheus.id to be able to bind the service.

# grant the prometheus clients the ability to dial the service and the kubeB.prometheus.id the ability to bind
ziti edge create service-policy "kubeB.prometheus.svc.dial" Dial \
--service-roles "@kubeB.prometheus.svc" \
--identity-roles "#prometheus-clients"
ziti edge create service-policy "kubeB.prometheus.svc.bind" Bind \
--service-roles "@kubeB.prometheus.svc" \
--identity-roles "@kubeB.prometheus.id"

Deploying Prometheuz

At this point we have the OpenZiti overlay all configured. What's left is to deploy Prometheus into ClusterB. This command is substantially different from the one we ran while deploying Prometheus into ClusterA. You'll see that we need to supply identities for a few different purposes in this installation. Remember, Prometheus will be entirely dark once deployed into ClusterB, listening only on the OpenZiti overlay. The container in the pod which monitors configmap changes won't be able to trigger a webhook using the underlay! This configmap-reloadz is a second "zitification" we didn't even notice in ClusterA, because we did not need it there. We need it for ClusterB.

You'll see that for configmapReload we need to supply the identity which the container will use to hit the Prometheus webhook. We do that by passing --set-file configmapReload.ziti.id.contents="/tmp/prometheus/kubeB.prometheus.id.json". Then we supply the service which configmap-reloadz will dial, and we also specify what identity we expect to be hosting the service.

Next you'll see we need to supply the identity the Prometheus server will use to listen on the OpenZiti overlay (--set-file server.ziti.id.contents). Similar to configmap-reloadz, we also specify the service and identity name to bind.

Finally, to allow the server to scrape targets we need to supply a final identity which will be used when scraping targets with --set-file server.scrape.id.contents.

You'll notice that, for simplicity's sake, we are using the same identity for all three needs, which is perfectly fine. If you wanted to use a different identity for each, you could; that choice is up to you. To keep it simple we just authorized this one identity for all these purposes.

# install prometheus
helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install prometheuz openziti-test-kitchen/prometheus \
--set-file configmapReload.ziti.id.contents="/tmp/prometheus/kubeB.prometheus.id.json" \
--set configmapReload.ziti.targetService="kubeB.prometheus.svc" \
--set configmapReload.ziti.targetIdentity="kubeB.prometheus.id" \
--set-file server.ziti.id.contents="/tmp/prometheus/kubeB.prometheus.id.json" \
--set server.ziti.service="kubeB.prometheus.svc" \
--set server.ziti.identity="kubeB.prometheus.id" \
--set-file server.scrape.id.contents="/tmp/prometheus/kubeB.prometheus.id.json"

What's Next

In this article we've done a lot of OpenZiti CLI work and run some kubectl and helm commands, but we still haven't explored what it is we are building and why it's so cool. We'll do that in the final article. Hopefully, the payoff for you will be as rewarding as it was for me while building this article series.


Addendum - a Quicker Start

All the commands above are also available on GitHub as .sh scripts. If you prefer, you can clone the ziti-doc repository and access the scripts from the path mentioned below. "Cleanup" scripts are provided as well.

${checkout_root}/docusaurus/blog/zitification/prometheus/scripts

· 12 min read
Clint Dovholuk

This is part three of a three-part article, building on the previous two. Here we will take a look at what we built and use it to explore the power of a zitified Prometheus. See part one for the necessary background for the series. See part two for detailed instructions covering how to set up the environment you're about to explore.

The Payoff

Ok. Here it is. We are at the end of the series and here is where we'll put it all together and really start to understand the sort of innovations you can create when you zitify an application. As a reminder, we are working with Prometheus, a CNCF project which we will use to monitor a workload deployed in two separate Kubernetes clusters. To save you from flipping back to a previous article, here is what that solution looks like.

overview

Now we are ready to start using our Prometheus servers. We'll use our OpenZiti overlay network to connect to a workload which will generate a metric we want to display in Prometheus. We'll then configure Prometheus to scrape the workload and put it on a graph to prove it works. Once that's complete, we'll play around with the setup and see if we really can scrape anything, anywhere. Let's begin.

Developer Access

In the previous article, we established our entire solution using the OpenZiti overlay, kubectl and helm. We saw everything get installed and it all "seems to work". But how do we know it works? Let's provision an identity for yourself now, enroll it in a local tunneling app, and find out. Go get a tunneling client running locally, then provision the identity below and enroll it with that client.

ziti edge create identity user dev.client -a "prometheus-clients","reflectz-clients"
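
How you enroll depends on the tunneling client you picked. As one hedged example for Linux, assuming you saved the enrollment JWT when creating the identity (for instance by adding -o /tmp/prometheus/dev.client.jwt to the command above), enrollment and startup might look like this:

# enroll the JWT into a usable identity file
ziti edge enroll /tmp/prometheus/dev.client.jwt -o /tmp/prometheus/dev.client.json

# run a local tunneler with that identity (desktop apps instead enroll the JWT through their UI)
sudo ziti-edge-tunnel run -i /tmp/prometheus/dev.client.json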

You should have access to six total services when this identity is enrolled:

Service Name: kubeA.prometheus.svc
Intercept: kubeA.prometheus.svc:80
Service Name: kubeA.reflect.svc
Intercept: kubeA.reflect.svc.ziti:80
Service Name: kubeA.reflect.scrape.svc
Intercept: kubeA.reflect.scrape.svc.ziti:80

Service Name: kubeB.prometheus.svc
Intercept: kubeB.prometheus.svc:80
Service Name: kubeB.reflect.svc
Intercept: kubeB.reflect.svc.ziti:80
Service Name: kubeB.reflect.scrape.svc
Intercept: kubeB.reflect.scrape.svc.ziti:80

ClusterA

With your developer access you should be able to navigate your browser to http://kubea.prometheus.svc/targets.

[!NOTE] We won't dwell on this for long in this article, but notice that this is showing off another superpower of OpenZiti: private DNS. Notice that you were able to browse to a totally fictitious domain name: kubea.prometheus.svc. ".svc" is not a legitimate top-level domain. Look at the full list of top-level domains starting with "s"; you won't find ".svc" on that list at this time.

kubea.prom.init

You should see something like the screenshot above. You might have noticed that the chart deployed a few other containers we have not discussed yet; we won't go into those in this article. What's important is that this Prometheus server already has a few targets for us to look at. Neat, but this isn't what we actually want to monitor.

What we really want to monitor is the workload we deployed: reflectz. We can do this by editing the Prometheus configmap using kubectl. Let's go ahead and do this now:

kubectl edit cm prometheuz-prometheus-server

This will open an editor in your terminal and allow you to update the configmap for the pod. Once the editor is open, find the section labeled scrape_configs and add the following entry:

    - job_name: 'kubeA.reflectz'
      scrape_interval: 5s
      honor_labels: true
      scheme: 'ziti'
      params:
        'match[]':
          - '{job!=""}'
        'ziti-config':
          - '/etc/prometheus/scrape.json'
      static_configs:
        - targets:
          - 'kubeA.reflect.scrape.svc-kubeA.reflect.id'

This is YAML, and YAML is sensitive to spaces. The block above is properly indented for the config that the helm chart installs, so you should be able to simply copy it and add it under scrape_configs. Remember, there is a configmap-reload container in the pod which monitors the configmap. On a successful edit, this container will notice the change and issue a webhook to the prometheus-server container. The trigger is not immediate, so don't worry if it takes a while; it can take around a minute for the trigger to fire.

While we wait for the trigger, let's explain what this is doing. This is informing the Prometheus server to monitor a workload which can be found at the provided target of kubeA.reflect.scrape.svc-kubeA.reflect.id. Notice that no port is included in this target, and also notice that this is a very strange looking FQDN. That's because this is a zitified version of Prometheus. We have extended Prometheus to understand a "scheme" of ziti. When we configure this job with a scheme of ziti, we can then supply targets to the job which represent an OpenZiti service. We need to supply the ziti-config node with the path to the identity we want Prometheus to use to issue the scrape. This will always be /etc/prometheus/scrape.json at this time. Should the community desire it, we can look into changing the location of the identity.

If you would like to tail the configmap-reloadz container, you can issue this one-liner, which instructs kubectl to follow its logs:

pod=$(kubectl get pods | grep server | cut -d " " -f1); echo POD: $pod; kubectl logs -f "$pod" prometheus-server-configmap-reload

When the trigger happens for ClusterA you will see a message like the one below. Notice that configmap-reloadz is using the underlay network: http://127.0.0.1:9090/-/reload

2022/04/23 20:01:23 config map updated
2022/04/23 20:01:23 performing webhook request (1/1/http://127.0.0.1:9090/-/reload)
2022/04/23 20:01:23 successfully triggered reload

Config Reloaded

Once you've correctly updated the configmap, configmap-reloadz will detect the change and tell Prometheus to reload. You'll then see a new target reported by Prometheus at http://kubea.prometheus.svc/targets. You should now see "kubeA.reflectz (1/1 up)" showing. Congratulations! You have just successfully scraped a target from zitified Prometheus! Remember, this workload does not listen on the Kubernetes underlay network; it's only accessible from the OpenZiti overlay.

kubea.target1.png
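
If you prefer the command line to the UI, the same information is available from the Prometheus HTTP API, reached over the very same OpenZiti intercept (jq is optional and only used here for readability):

# list active scrape targets via the Prometheus API, resolved and routed by the local tunneler
curl -s http://kubea.prometheus.svc/api/v1/targets | jq '.data.activeTargets[].labels.job'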

Let's Graph It!

Cool, we have a target. The target can be scraped by Prometheus over the OpenZiti overlay. We're also able to securely access the Prometheus UI over the same OpenZiti overlay. Let's use the Prometheus UI to graph the data point we want to see, the reflect_total_connections metric.

  1. Navigate to http://kubea.prometheus.svc/graph
  2. enter reflect_total_connections
  3. click Graph (notice I changed my time to '10s', located just under Graph)
  4. click Execute
  5. Notice there are no connections (0)

graph it
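
The same data is available from the Prometheus query API over the intercept, which is handy if you'd rather script this check than click through the UI:

# instant query for the metric we are graphing
curl -s 'http://kubea.prometheus.svc/api/v1/query?query=reflect_total_connections'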

Generate Some Data

Now let's change that graph of reflect_total_connections from 0 to 1 (or more). One of the services you will have access to will intercept kubeA.reflect.svc.ziti:80.

[!NOTE] If you are using Windows and Windows Subsystem for Linux (WSL) as I am, you might need to understand how to get WSL to use your Ziti Desktop Edge for Windows as its DNS resolver. Generally speaking this is as easy as editing /etc/resolv.conf and adding the tunneler's IP as the first nameserver: nameserver 100.64.0.1 (or whatever the DNS IP is). Try it first; depending on how you set up WSL it might 'just work' for you. You can also use cygwin or any other netcat tool from Windows (not WSL) instead.
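
In case you do need to make that change, here's a minimal sketch, assuming 100.64.0.1 really is your tunneler's DNS IP (confirm it in the Ziti Desktop Edge UI first, and note that WSL may regenerate this file):

# inside WSL: put the Ziti tunneler's DNS IP first in /etc/resolv.conf
sudo sed -i '1i nameserver 100.64.0.1' /etc/resolv.conf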

Now we can use netcat to open a connection through this intercept a few times. The metric tracks the total number of connections to the reflect service. Connect, send some text, then use ctrl-c to disconnect. Do that a few times, then click 'Execute' again on the graph page. You can see I did this over a minute and moved my total count on kubeA to 8, shown below.

/tmp/prometheus$ nc kubeA.reflect.svc.ziti 80
kubeA reflect test
you sent me: kubeA reflect test
^C
/tmp/prometheus$ nc kubeA.reflect.svc.ziti 80
another reflect test
you sent me: another reflect test
^C
/tmp/prometheus$ nc kubeA.reflect.svc.ziti 80
another reflect test
you sent me: another reflect test
^C

kubea.more.total.conn.png

Scrape Something Else

Hopefully you agree with me that this is pretty neat. But what if we take it to the next level? What if we tried to scrape the workload we deployed to ClusterB? Could we get that to work? Recall from above how we enabled the job named 'kubeA.reflectz'. What if we simply copied/pasted that into the configmap, changing kubeA --> kubeB? Would it work? Let's see.

# edit the configmap on ClusterA:
kubectl edit cm prometheuz-prometheus-server

#add the job - and wait for the configmap to reload

    - job_name: 'kubeB.reflectz'
      scrape_interval: 5s
      honor_labels: true
      scheme: 'ziti'
      params:
        'match[]':
          - '{job!=""}'
        'ziti-config':
          - '/etc/prometheus/scrape.json'
      static_configs:
        - targets:
          - 'kubeB.reflect.scrape.svc-kubeB.reflect.id'

After watching the logs from configmap-reloadz on ClusterA and seeing the webhook trigger, go back to the Prometheus server in the browser. You should still be at the 'graph' URL; if not, navigate back and execute another graph for reflect_total_connections. When we do that it probably doesn't look much different but... Wait a second? In the legend? Can it be? That's right. From Kubernetes ClusterA, we have just scraped a workload in Kubernetes ClusterB, entirely over the OpenZiti overlay.

kubeA-and-kubeB.png

Generate some data like you did before by running a few netcat connection/disconnects and click 'Execute' again. Don't forget to send the connection request to kubeB though!

nc kubeB.reflect.svc.ziti 80
this is kubeb
you sent me: this is kubeb
^C
nc kubeB.reflect.svc.ziti 80
another to kube b
you sent me: another to kube b
^C
nc kubeB.reflect.svc.ziti 80
one more for fun and profit
you sent me: one more for fun and profit
^C

kubeB from kubeA

Scraping All the Things!

By now, you are probably starting to get an idea of just how powerful this is for Prometheus. A zitified Prometheus can scrape things easily and natively; just deploy a Prometheuz instance into the location you want to scrape. Or you can enable a scrape target using a tunneling app, or in Kubernetes using the ziti-host helm chart. Let's complete our vision now and stand up a Prometheus server on our local workstation using Docker.

When we run Prometheuz locally using docker, we'll need to hand docker a config file using a volume mount. We provide the identity used to connect to the OpenZiti overlay in the same fashion. Let's start up a docker container locally and see if we can grab data from our two Prometheus instances using a locally deployed Prometheuz.

GitHub has a sample Prometheus config file you can download. Below, I used curl to download it and put it into the expected location.

curl -s https://raw.githubusercontent.com/openziti/ziti-doc/main/docusaurus/blog/zitification/prometheus/scripts/local.prometheus.yml > /tmp/prometheus/prometheus.config.yml

ziti edge create identity user local.prometheus.id -o /tmp/prometheus/local.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
ziti edge enroll /tmp/prometheus/local.prometheus.id.jwt -o /tmp/prometheus/local.prometheus.id.json

docker run \
-v /tmp/prometheus/local.prometheus.id.json:/etc/prometheus/ziti.id.json \
-v /tmp/prometheus/prometheus.config.yml:/etc/prometheus/prometheus.yml \
-p 9090:9090 \
openziti/prometheuz
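
Before loading the targets page, you can confirm the local container is serving on the published port by hitting Prometheus' readiness endpoint:

# should return a short "ready" message once Prometheus is up
curl -s http://localhost:9090/-/ready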

local-docker-targets.png

Look at what we've just done. We have started a Prometheus instance locally, and used it to connect to four Prometheus targets via scrape configurations when all four targets are hidden entirely from my local computer (and any computer) unless the computer has an OpenZiti identity. I personally think that is incredibly cool!

Taking it to 11

But wait, I'm not done. That docker instance is listening on an underlay network. It's exposed to attack by anything on my local network. I want to fix that too. Let's start this docker container up listening only on the OpenZiti overlay. Just like in part 2 we will make a config, a service and two policies to enable identities on the OpenZiti overlay.

curl -s https://raw.githubusercontent.com/openziti/ziti-doc/main/docusaurus/blog/zitification/prometheus/scripts/local.prometheus.yml > /tmp/prometheus/prometheus.config.yml

# create the config and service for the local prometheus server
ziti edge create config "local.prometheus.svc-intercept.v1" intercept.v1 \
'{"protocols":["tcp"],"addresses":["local.prometheus.svc"],"portRanges":[{"low":80, "high":80}], "dialOptions": {"identity":"local.prometheus.id"}}'

ziti edge create service "local.prometheus.svc" \
--configs "local.prometheus.svc-intercept.v1"

# grant the prometheus clients the ability to dial the service and the local.prometheus.id the ability to bind
ziti edge create service-policy "local.prometheus.svc.dial" Dial \
--service-roles "@local.prometheus.svc" \
--identity-roles "#prometheus-clients"
ziti edge create service-policy "local.prometheus.svc.bind" Bind \
--service-roles "@local.prometheus.svc" \
--identity-roles "@local.prometheus.id"

Once that's done, let's see if we can start the docker container. The helm charts are configured to translate the --set flags provided into "container friendly" settings like environment variables, volumes, mounts, etc. In docker we need to provide those ourselves. If you're familiar with docker these will probably all make sense. The most important part of the command below is the lack of a -p flag. The -p flag is used to publish a port from inside the container to the host. Look at the previous docker sample and you'll find we were mapping local underlay port 9090 to port 9090 in the docker container. In this example, we will do no such thing! :)

docker run \
-e ZITI_LISTENER_SERVICE_NAME=local.prometheus.svc \
-e ZITI_LISTENER_IDENTITY_FILE=/etc/prometheus/ziti.server.json \
-e ZITI_LISTENER_IDENTITY_NAME=local.prometheus.id \
-v /tmp/prometheus/prometheus.config.yml:/etc/prometheus/prometheus.yml \
-v /tmp/prometheus/local.prometheus.id.json:/etc/prometheus/ziti.id.json \
-v /tmp/prometheus/local.prometheus.id.json:/etc/prometheus/ziti.server.json \
openziti/prometheuz
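
Before opening a browser, you can check that the container actually bound the service on the overlay; a successful bind shows up as a terminator on the controller:

# expect to see a terminator for local.prometheus.svc once the container is up
ziti edge list terminators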

But - Does It Work?

After configuring the OpenZiti overlay, we just need to open a browser and navigate to http://local.prometheus.svc/targets. SUCCESS!

local-docker-targets-no-listener.png

SUCCESS!

local-docker-graph-no-listener.png

Wrap Up

This was quite the journey and a lot of fun. We have taken a wildly popular open source project and brought OpenZiti to it with really not much code at all. Then using OpenZiti we were able to give Prometheus superpowers and enable it to scrape any target regardless of where that target is or what network it is on.

Think of the possibilities here. Are you a cloud provider looking to monitor your clients' services which are deployed on-prem? That's easy with OpenZiti, and without sacrificing security at all. In fact, using OpenZiti like this provides amazing reach while strengthening the security posture of the solution, because you're now applying zero trust networking principles to your alerting and monitoring solution.

What do you think? Was this series interesting? Do you think OpenZiti is cool and you are looking to try it out? What are you going to zitify? Tell us on twitter or on discourse! Both links are included in this page. Let us know what you think! Go star the openziti/ziti repo and help us spread the word of OpenZiti to the world!

· 5 min read
Clint Dovholuk

In the previous post we talked about how we could take a well-known application and improve its security by zitifying it, producing zssh. The logical next step after zitifying ssh would be to extend the functionality of zssh to cover moving files securely as well; enter zscp. A zitified scp effectively creates a more secure command line tool for sending and receiving files between ziti-empowered devices. Once zitified, we can use zscp with ziti identity names just like we did when zitifying ssh. I recommend reading the previous article if you haven't, to learn more about the benefits of zitifying tools like ssh and scp.


First Things First

zscp functions with the same prerequisites as zssh:

  • Establish a Ziti Network
  • Create and enroll two Ziti Endpoints (one for our ssh server, one for the client)
    • the sshd server will run ziti-tunnel for this demonstration. Conveniently, it will run on the same machine I used to set up the Ziti Network.
    • the client, in this case, is my local machine, and I'll zscp files both to and from the remote machine.
  • Create the Ziti Service we'll use and authorize the two endpoints to use this service
  • Use the zscp binary from the client side and the ziti-tunnel binary from the serving side to connect
  • Harden sshd further by removing port 22 from any internet-based firewall configuration (for example, from within the security-groups wizard in AWS) or by forcing sshd to only listen on localhost/127.0.0.1

After ensuring these steps are complete, you will have the ability to copy files across your Ziti Network. The traffic will be even more secure since a Ziti Network is now required for the connection, demanding that strong identity before the sshd server can even be reached. And of course sshd is now 'dark': it no longer needs the typical port 22 to be exposed to any network.

Given all the prerequisites are satisfied, we can put zscp to use. Simply download the binary for your platform:


Sending and Receiving Files with Zscp

Once you have the executable downloaded, make sure it is named zscp and for simplicity's sake we'll assume it's on the path. Just like zssh to ssh, zscp provides the same basic functionality as scp. As with most tooling, executing the binary with no arguments will display the expected usage.

There are two main functions of zscp. Just like scp you can send and receive from the remote host.

To send files we use this basic syntax:

./zscp LOCAL_FILEPATHS... <REMOTE_USERNAME>@TARGET_IDENTITY:REMOTE_FILEPATH

Then, to retrieve remote files we use a similar syntax:

./zscp <REMOTE_USERNAME>@TARGET_IDENTITY:REMOTE_FILEPATH LOCAL_FILEPATH

Below is a working example of using zscp to send a file to a remote machine. In this case the remote username is not the same as my local username. Just like with scp, I'll need to supply the username in my command and it will use the same syntax that regular scp uses. Here I am zscp'ing as username ubuntu to the remote computer that is joined to the Ziti Network using the identity named ziti-tunnel-aws.

./zscp local/1.txt ubuntu@ziti-tunnel-aws:remote
INFO connection to edge router using token 6c2e8b79-ce8e-483e-a9f8-a930530e706a
INFO sent file: /Users/name/local/1.txt ==> /home/ubuntu/remote/1.txt

This is only a basic example of how we can use zscp to send a single file to a remote computer. In the next section, we will go over how to use zscp flags for extended functionality.


Zscp Flags

Just like zssh, zscp has the same flags to pass in: ssh key, ziti configuration file, service name, and one to toggle debug logging. All the defaults are the same as with zssh, so both zscp and zssh will work without the -i and -c flags provided the files exist at the default locations. Refer to the zitifying ssh post for instructions on how to use the flags below.

    -i, --SshKeyPath string   Path to ssh key. default: $HOME/.ssh/id_rsa
    -c, --ZConfig string      Path to ziti config file. default: $HOME/.ziti/zssh.json
    -d, --debug               pass to enable additional debug information
    -s, --service string      service name. (default "zssh")

In addition to the flags above, zscp has a flag to enable recursive copying:

    -r, --recursive           pass to enable recursive file transfer

To use the recursive flag, you must pass a directory as the LOCAL_FILEPATHS argument. Just like scp, zscp will copy all file contents under the provided directory. You can see below how we can use the -r flag to send all the contents of big_directory.

Contents of big_directory on local computer:

tree local
local
└── big_directory
    ├── 1.txt
    ├── 2.txt
    ├── 3.txt
    ├── small_directory1
    │   └── 4.txt
    ├── small_directory2
    │   └── 5.txt
    └── small_directory3
        └── 6.txt

Here is the command and output:

$ zscp -r big_directory ubuntu@ziti-tunnel-aws:remote
INFO connection to edge router using token d6c268ee-e4f5-4836-bd38-2fc1558257aa
INFO sent file: /Users/name/local/big_directory/1.txt ==> /home/ubuntu/remote/big_directory/1.txt
INFO sent file: /Users/name/local/big_directory/2.txt ==> /home/ubuntu/remote/big_directory/2.txt
INFO sent file: /Users/name/local/big_directory/3.txt ==> /home/ubuntu/remote/big_directory/3.txt
INFO sent file: /Users/name/local/big_directory/small_directory1/4.txt ==> /home/ubuntu/remote/big_directory/small_directory1/4.txt
INFO sent file: /Users/name/local/big_directory/small_directory2/5.txt ==> /home/ubuntu/remote/big_directory/small_directory2/5.txt
INFO sent file: /Users/name/local/big_directory/small_directory3/6.txt ==> /home/ubuntu/remote/big_directory/small_directory3/6.txt

After zssh'ing to the remote machine, we can prove that all files have been transferred to the remote device:

ubuntu@IP:~$ tree remote/
remote/
└── big_directory
    ├── 1.txt
    ├── 2.txt
    ├── 3.txt
    ├── small_directory1
    │   └── 4.txt
    ├── small_directory2
    │   └── 5.txt
    └── small_directory3
        └── 6.txt

Recursive copying also works to retrieve all contents of a directory on the remote machine.
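
For example, pulling that same directory back down might look like this; the destination folder name here is just an illustration:

# retrieve the directory we sent earlier, recursively, into a local folder
./zscp -r ubuntu@ziti-tunnel-aws:remote/big_directory local_copy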


I hope this post has helped you get familiar with another ziti-empowered developer tool, and hopefully it's becoming clearer why zitifying your application will make it more resilient to attack and make the act of connecting to remote services trivial.

Have a look at the code over at GitHub or continue reading on to the next zitification - kubectl!

· One min read
Clint Dovholuk
# establish some variables which are used below
service_name=zscpSvc
client_identity="${service_name}"Client
server_identity="${service_name}"Server
the_port=22

# create two identities. one host - one client. Only necessary if you want/need them. Skippable if you
# already have an identity. provided here to just 'make it easy' to test/try
ziti edge create identity device "${server_identity}" -a "${service_name}"ServerEndpoints -o "${server_identity}".jwt
ziti edge create identity device "${client_identity}" -a "${service_name}"ClientEndpoints -o "${client_identity}".jwt

# if you want to modify anything, often deleting the configs/services is easier than updating them
# it's easier to delete all the items too - so until you understand exactly how ziti works,
# make sure you clean them all up before making a change
ziti edge delete config "${service_name}"-host.v1
ziti edge delete config "${service_name}"-client-config
ziti edge delete service "${service_name}"
ziti edge delete service-policy "${service_name}"-binding
ziti edge delete service-policy "${service_name}"-dialing

ziti edge create config "${service_name}"-host.v1 host.v1 '{"protocol":"tcp", "address":"localhost","port":'"${the_port}"', "listenOptions": {"bindUsingEdgeIdentity":true}}'
# intercept is not needed for zscp/zssh but make it for testing if you like
ziti edge create config "${service_name}"-client-config intercept.v1 '{"protocols":["tcp"],"addresses":["'"${service_name}.ziti"'"], "portRanges":[{"low":'"${the_port}"', "high":'"${the_port}"'}]}'
ziti edge create service "${service_name}" --configs "${service_name}"-client-config,"${service_name}"-host.v1
ziti edge create service-policy "${service_name}"-binding Bind --service-roles '@'"${service_name}" --identity-roles '#'"${service_name}"'ServerEndpoints'
ziti edge create service-policy "${service_name}"-dialing Dial --service-roles '@'"${service_name}" --identity-roles '#'"${service_name}"'ClientEndpoints'

· 8 min read
Clint Dovholuk

As we learned in the opening post, "zitifying" an application means to embed a Ziti SDK into an application and leverage the power of a Ziti Network to provide secure, truly zero-trust access to your application no matter where in the world that application goes. In this post we are going to see how we have zitified ssh and why. Future posts will expand on this even further by showing how NetFoundry uses zssh to support our customers.


Why SSH?

As I sit here typing these words, I can tell you're skeptical. I can tell you're wondering why in the world we would even attempt to mess with ssh at all. After all, ssh has been a foundation of the administration of not only home networks but also corporate networks and the internet itself. Surely if millions (billions?) of computers can interact every day safely and securely using ssh there is "no need" for us to be spending time zitifying ssh right? (Spoiler alert: wrong)

I'm sure you've guessed that this is not the case whatsoever. After all, attackers don't leave ssh alone just because it's not worth it to try! Put a machine on the open internet, expose ssh on port 22, and watch for yourself as the attempts to access ssh using known default/weak/bad passwords flood in. Attacks don't only come from the internet either! A single compromised machine on your network could very well behave the same way as an outside attacker. This is particularly true for ransomware-style attacks as the compromised machine attempts to expand/multiply. The problems don't stop there either: DoS attacks, zero-day bugs and more are all waiting for any service sitting on the open internet.

A zitified ssh client is superior since the port used by ssh can be eliminated from the internet-based firewall, preventing any connections whatsoever from any network client. In this configuration the ssh process is effectively "dark". The only way to ssh to a machine configured in this way is to have an identity authorized for that Ziti Network.

It doesn't stop there though. A Ziti Network mandates the use of a strong identity. You cannot access any services defined in a Ziti Network without having gone through the enrollment process to create a strong identity used for bidirectional authentication and authorization. With Ziti, you can't even connect to SSH without first being authorized to connect to the remote SSH server.

Contrast that with SSH. With SSH you need access to the sshd port before starting the authentication process. This requires the port to be exposed to the network, exposing it to attack. With SSH you are also usually allowed to authenticate without providing a strong identity, using just a username and password. Even if you choose the more secure public/private key authentication for SSH, the remote machine still needs the public key added to the authorized_keys file before allowing connections via SSH. This is all-too-often a step which a human performs, making the process of authorizing a user or revoking access relatively cumbersome. Ziti provides a secure, centralized location to manage authorization of users to services. Ziti makes it trivial to grant or revoke access to a given set of services immediately.

Lastly, Ziti provides support for continual authorization through the use of policy checks. These policy checks run continuously. If a user suddenly fails to meet a particular policy, access to the services provided via the Ziti Network are revoked immediately.

Cool right? Let's see how we did it and how you can do the same thing using a Ziti Network.

Overview of SSH - notice how port 22 is open to inbound connections:

ssh-overview.svg


How It's Done

There are a few steps necessary before being able to use zssh:

  • Establish a Ziti Network
  • Create and enroll two Ziti Endpoints (one for our ssh server, one for the client)
    • the sshd server will run ziti-tunnel for this demonstration. Conveniently, it will run on the same machine I used to set up the Ziti Network.
    • the client will run zssh from my local machine, and I'll zssh to the other endpoint
  • Create the Ziti Service we'll use and authorize the two endpoints to use this service
  • Use the zssh binary from the client side and the ziti-tunnel binary from the serving side to connect
  • Harden sshd further by removing port 22 from any internet-based firewall configuration (for example, from within the security-groups wizard in AWS) or by forcing sshd to only listen on localhost/127.0.0.1

Overview of ZSSH - notice port 22 is no longer open to inbound connections:

zssh-overview.svg

After performing these steps you'll have an sshd server that is dark to the internet. Accessing the server via ssh must now occur using the Ziti Network. Since the service is no longer accessible directly through a network, it is no longer susceptible to the types of attacks mentioned previously!


Zssh in Action

Once the prerequisites are satisfied, we can see zssh in action. Simply download the binary for your platform:

Once you have the executable downloaded, make sure it is named zssh and for simplicity's sake we'll assume it's on the path. A goal for zssh is to make the usage of the command very similar to the usage of ssh. Anyone familiar with ssh should be able to pick up zssh easily. As with most tooling, executing the binary with no arguments will display the expected usage. The general format when using zssh is similar to that of ssh: zssh <remoteUsername>@<targetIdentity>

Below you can see me zssh from my local machine to the AWS machine secured by ziti-tunnel:

./zssh ubuntu@ziti-tunnel-aws
INFO[0000] connection to edge router using token 95c45123-9415-49d6-930a-275ada9ae06f
connected.
ubuntu@ip-172-31-27-154:~$

It really was that simple! Now let's break down the current flags for zssh and exactly how this worked.


Zssh Flags

We know that zssh requires access to a Ziti Network, but it is not clear from the example above where zssh found the credentials required to access the network. zssh supports a few basic flags:

  -i, --SshKeyPath string   Path to ssh key. default: $HOME/.ssh/id_rsa
  -c, --ZConfig string      Path to ziti config file. default: $HOME/.ziti/zssh.json
  -d, --debug               pass to enable additional debug information
  -h, --help                help for this command
  -s, --service string      service name. default: zssh (default "zssh")

What you see above is exactly the output zssh provides should you pass the -h/--help flag or execute zssh without any parameters. The -i/--SshKeyPath flag is congruent to the -i flag for ssh. You would use it to supply your key to the ssh client. Under the hood, zssh is a full-fledged ssh client that works similarly to how ssh does. If your ~/.ssh/id_rsa key is in the authorized_keys of the remote machine, then you won't need to specify the -i/--SshKeyPath flag (as I didn't in my example). Using zssh requires a public/private key pair in order for the zssh client to connect to the remote machine.

The -c/--ZConfig flag controls access to the network. A configuration file must be supplied to use zssh but does not need to be supplied as part of the command. By default, zssh will look in your home directory, in a folder named .ziti, for a file named zssh.json. In bash this would be the equivalent of $HOME; in Windows it is the equivalent of the environment variable named USERPROFILE. You do not need to supply this flag if a file exists at the default location. You can specify this flag to use zssh with other networks.

The -s/--service flag is for passing in a different service name other than "zssh". By default, the service name will be "zssh", but if you would like to access a different service use the -s flag followed by the service name.
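
For example, the -c and -s flags might be combined like this; the config path and service name below are purely illustrative:

# use an alternate ziti config file for a different network
./zssh -c /path/to/other-network.json ubuntu@ziti-tunnel-aws

# dial a service that was created with a name other than "zssh"
./zssh -s my-ssh-svc ubuntu@ziti-tunnel-aws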

The -d/--debug flag outputs additional information to assist you with debugging. For example:

$ ./zssh ubuntu@ziti-tunnel-aws -d
INFO[0000] sshKeyPath set to: /home/myUser/.ssh/id_rsa
INFO[0000] ZConfig set to: /home/myUser/.ziti/zssh.json
INFO[0000] username set to: ubuntu
INFO[0000] targetIdentity set to: ziti-tunnel-aws
INFO[0000] connection to edge router using token 95c45123-a234-412e-8997-96139fbd1938
connected.
ubuntu@ip-172-31-27-154:~$

Shown above is also one additional piece of information: the remote username. In the example above I zsshed to an ubuntu image in AWS. When it was provisioned, AWS used the username ubuntu. In order to zssh to this machine I need to tell the remote sshd server that I wish to attach as the ubuntu user. If your username is the same on your local environment as on the remote machine, you do not need to specify the username. For example, my local username is cd (my initials). When I zssh to my dev machine I can simply use zssh ClintLinux:

$ ./zssh ClintLinux
INFO[0000] connection to edge router using token 909dfb4f-fa83-4f73-af8e-ed251bcd30be
connected.
cd@clint-linux-vm ~

Hopefully this post has been helpful and insightful. Zitifying an application is POWERFUL!!!!

The next post in this series will cover how we extended the same code we used for zssh and zitified scp.

Have a look at the code over at GitHub

· One min read
Clint Dovholuk
# establish some variables which are used below
service_name=zsshSvc
client_identity="${service_name}"Client
server_identity="${service_name}"Server
the_port=22

# create two identities. one host - one client. Only necessary if you want/need them. Skippable if you
# already have an identity. provided here to just 'make it easy' to test/try
ziti edge create identity device "${server_identity}" -a "${service_name}"ServerEndpoints -o "${server_identity}".jwt
ziti edge create identity device "${client_identity}" -a "${service_name}"ClientEndpoints -o "${client_identity}".jwt

# if you want to modify anything, often deleting the configs/services is easier than updating them
# it's easier to delete all the items too - so until you understand exactly how ziti works,
# make sure you clean them all up before making a change
ziti edge delete config "${service_name}"-host.v1
ziti edge delete config "${service_name}"-client-config
ziti edge delete service "${service_name}"
ziti edge delete service-policy "${service_name}"-binding
ziti edge delete service-policy "${service_name}"-dialing

ziti edge create config "${service_name}"-host.v1 host.v1 '{"protocol":"tcp", "address":"localhost","port":'"${the_port}"', "listenOptions": {"bindUsingEdgeIdentity":true}}'
# intercept is not needed for zscp/zssh but make it for testing if you like
ziti edge create config "${service_name}"-client-config intercept.v1 '{"protocols":["tcp"],"addresses":["'"${service_name}.ziti"'"], "portRanges":[{"low":'"${the_port}"', "high":'"${the_port}"'}]}'
ziti edge create service "${service_name}" --configs "${service_name}"-client-config,"${service_name}"-host.v1
ziti edge create service-policy "${service_name}"-binding Bind --service-roles '@'"${service_name}" --identity-roles '#'"${service_name}"'ServerEndpoints'
ziti edge create service-policy "${service_name}"-dialing Dial --service-roles '@'"${service_name}" --identity-roles '#'"${service_name}"'ClientEndpoints'