
· 6 min read

This article walks you through building the Ziti C SDK for Linux-arm and running the wttr sample application on a BeagleBone SanCloud.

Configure the Host System

This article uses an Ubuntu 19.10 virtual machine as a development host because it's relatively easy to install a functional toolchain that targets arm-linux.

devbox$ sudo apt-get install gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf \
binutils-arm-linux-gnueabihf gdb-multiarch cmake git

Build the SDK and Sample Applications

devbox$ git clone --recurse-submodules https://github.com/netfoundry/ziti-sdk-c.git
Cloning into 'ziti-sdk-c'...
remote: Enumerating objects: 77, done.
remote: Counting objects: 100% (77/77), done.
remote: Compressing objects: 100% (50/50), done.
remote: Total 1287 (delta 35), reused 51 (delta 24), pack-reused 1210
Receiving objects: 100% (1287/1287), 475.44 KiB | 4.85 MiB/s, done.
...

devbox$ cd ziti-sdk-c
devbox$ mkdir build-Linux-arm
devbox$ cd build-Linux-arm
devbox$ cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/Linux-arm.cmake ..
project version: 0.9.2.1 (derived from git)
-- The C compiler identification is GNU 9.2.1
-- The CXX compiler identification is GNU 9.2.1
-- Check for working C compiler: /usr/bin/arm-linux-gnueabihf-gcc
...

devbox$ make
[ 1%] Building C object deps/uv-mbed/deps/libuv/CMakeFiles/uv_a.dir/src/fs-poll.c.o
[ 1%] Building C object deps/uv-mbed/deps/libuv/CMakeFiles/uv_a.dir/src/idna.c.o
[ 2%] Building C object deps/uv-mbed/deps/libuv/CMakeFiles/uv_a.dir/src/inet.c.o
[ 2%] Building C object deps/uv-mbed/deps/libuv/CMakeFiles/uv_a.dir/src/random.c.o
...
[ 99%] Building C object programs/sample_wttr/CMakeFiles/sample_wttr.dir/sample_wttr.c.o
[ 99%] Linking C executable sample_wttr
[ 99%] Built target sample_wttr
[100%] Built target sample-host

When make completes, you'll have statically linked binaries for the SDK's sample applications.

Set up a Ziti Network

For this article we'll use a Ziti Edge Developer Edition to run our network. Follow the Ziti Network Quickstart.

Create the "demo-weather" Service

The sample_wttr application accesses a service named "demo-weather", so we'll create that service now. Log in to your Ziti Edge Developer Edition web UI and follow the steps:

  1. On the left side nav bar, click "Edge Services"
  2. In the top right corner of the screen click the "plus" image to add a new service
  3. Choose a name for the service. Enter "demo-weather"
  4. Choose Router "ziti-gw01"
  5. For Endpoint Service choose:
    • protocol = tcp
    • host = wttr.in
    • port = 80
  6. Click save

Upload the Artifacts to Your BeagleBone

At this point we have created all of the artifacts that are needed for running the sample application:

  • The "sample_wttr" executable
  • The Ziti identity json file (e.g. "NewUser.json")

Now we need to upload these artifacts to the BeagleBone. The scp command shown here assumes that:

  • You are in the build-Linux-arm subdirectory where the make command was executed above.
  • Your BeagleBone is running sshd and has an IP address of 192.168.2.2 that can be reached from your development host.
  • The Ziti identity json file that was created when you followed the Ziti Network Quickstart was downloaded to your ~/Downloads directory.
devbox$ scp ~/Downloads/NewUser.json ./programs/sample_wttr/sample_wttr debian@192.168.2.2:.
NewUser.json 100% 6204 2.5MB/s 00:00
sample_wttr 100% 2434KB 5.4MB/s 00:00

Run the Application

Now we're ready to log into the BeagleBone and run the sample application. Let's go!

ubuntu@beaglebone:~$ ./sample_wttr ./NewUser.json
[ 0.000] INFO library/ziti.c:173 NF_init(): ZitiSDK version 0.9.2.1-local @de37e6f(wttr-sample-shutdown-cleanup) starting at (2019-09-05T22:35:12.259)
[ 0.000] INFO library/ziti.c:195 NF_init_with_tls(): ZitiSDK version 0.9.2.1-local @de37e6f(wttr-sample-shutdown-cleanup)
/home/scarey/repos/github.com/netfoundry/ziti-sdk-c/deps/uv-mbed/src/http.c:315 ERR TLS handshake error
/home/scarey/repos/github.com/netfoundry/ziti-sdk-c/deps/uv-mbed/src/http.c:153 WARN received -103 (software caused connection abort)
[ 0.210] ERROR library/ziti.c:433 version_cb(): failed to get controller version from ec2-54-164-120-24.compute-1.amazonaws.com:1280 CONTROLLER_UNAVAILABLE(software caused connection abort)
[ 0.210] WARN library/ziti_ctrl.c:49 code_to_error(): unmapped error code: CONTROLLER_UNAVAILABLE
[ 0.210] ERROR library/ziti.c:419 session_cb(): failed to login: CONTROLLER_UNAVAILABLE[-11](software caused connection abort)
ERROR: status => WTF: programming error
ubuntu@beaglebone:~$

Oops. The Ziti SDK verifies the certificate presented by the Ziti Edge Controller, so we need to set the clock on the BeagleBone to a time/date that is within the valid range of the certificate. We might as well set the clock to the current time:

ubuntu@beaglebone:~$ sudo rdate time.nist.gov
Wed Mar 18 15:46:56 2020

And now we are ready to run the application:

ubuntu@beaglebone:~$ ./sample_wttr ./NewUser.json
[ 0.000] INFO library/ziti.c:173 NF_init(): ZitiSDK version 0.9.2.1-local @de37e6f(wttr-sample-shutdown-cleanup) starting at (2020-03-18T15:46:57.536)
[ 0.000] INFO library/ziti.c:195 NF_init_with_tls(): ZitiSDK version 0.9.2.1-local @de37e6f(wttr-sample-shutdown-cleanup)
[ 0.554] INFO library/ziti.c:438 version_cb(): connected to controller ec2-54-164-120-24.compute-1.amazonaws.com:1280 version v0.9.0(ea556fc18740 2020-02-11 16:09:08)
[ 0.696] INFO library/connect.c:180 connect_get_service_cb(): got service[demo-weather] id[cc90410f-1017-4d23-977a-3695cb58f4e8]
[ 0.810] INFO library/connect.c:209 connect_get_net_session_cb(): got session[d89bfdd8-c7e5-42ff-a39f-63056eeb3a82] for service[demo-weather]
[ 0.810] INFO library/channel.c:148 ziti_channel_connect(): opening new channel for ingress[tls://ec2-54-164-120-24.compute-1.amazonaws.com:3022] ch[0]
sending HTTP request
request success: 99 bytes sent
HTTP/1.1 200 OK
Server: nginx/1.10.3
Date: Wed, 18 Mar 2020 15:47:00 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 8662
Connection: close

Weather report: Rochester

\ / Sunny
.-. 39 °F
― ( ) ― ↖ 0 mph
`-’ 9 mi
/ \ 0.0 in
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Wed 18 Mar ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ Overcast │ Overcast │ Cloudy │ Overcast │
│ .--. 32..35 °F │ .--. 35..41 °F │ .--. 39..44 °F │ .--. 37..42 °F │
│ .-( ). ↖ 3-4 mph │ .-( ). ← 6-8 mph │ .-( ). ← 9-16 mph │ .-( ). ↖ 9-17 mph │
│ (___.__)__) 6 mi │ (___.__)__) 6 mi │ (___.__)__) 6 mi │ (___.__)__) 6 mi │
│ 0.0 in | 0% │ 0.0 in | 0% │ 0.0 in | 0% │ 0.0 in | 0% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Thu 19 Mar ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ \ / Partly cloudy │ Cloudy │ \ / Partly cloudy │ _`/"".-. Patchy light d…│
│ _ /"".-. 41..44 °F │ .--. 50 °F │ _ /"".-. 53..55 °F │ ,\_( ). 50..53 °F │
│ \_( ). ← 4-7 mph │ .-( ). ← 4-6 mph │ \_( ). ↖ 6-11 mph │ /(___(__) ↖ 10-19 mph │
│ /(___(__) 3 mi │ (___.__)__) 6 mi │ /(___(__) 6 mi │ ‘ ‘ ‘ ‘ 4 mi │
│ 0.0 in | 0% │ 0.0 in | 0% │ 0.0 in | 0% │ ‘ ‘ ‘ ‘ 0.0 in | 20% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
┌─────────────┐
┌──────────────────────────────┬───────────────────────┤ Fri 20 Mar ├───────────────────────┬──────────────────────────────┐
│ Morning │ Noon └──────┬──────┘ Evening │ Night │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│ _`/"".-. Light rain sho…│ \ / Partly cloudy │ \ / Partly cloudy │ Cloudy │
│ ,\_( ). 62 °F │ _ /"".-. 66 °F │ _ /"".-. 48..51 °F │ .--. 46 °F │
│ /(___(__) ↑ 14-27 mph │ \_( ). ↗ 26-41 mph │ \_( ). → 24-36 mph │ .-( ). → 22-30 mph │
│ ‘ ‘ ‘ ‘ 6 mi │ /(___(__) 6 mi │ /(___(__) 6 mi │ (___.__)__) 6 mi │
│ ‘ ‘ ‘ ‘ 0.0 in | 29% │ 0.0 in | 59% │ 0.0 in | 41% │ 0.0 in | 0% │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
Location: Rochester, Monroe County, New York, United States of America [43.157285,-77.6152139]

Follow @igor_chubin for wttr.in updates
request completed: Connection closed
[ 3.714] INFO library/ziti.c:238 NF_shutdown(): Ziti is shutting down
========================

· 15 min read
Andrew Martinez

Part 1: Encryption Everywhere

Whether you are an encryption expert or a newcomer, welcome! This series is for you! It assumes you know nothing and takes you from soup to nuts on how to bootstrap trust with the intent to power a Zero Trust security model. The process and thinking described in this series are the direct output of developing the same system for the Ziti open source project. Ziti can be found on the GitHub project page for OpenZiti. The series starts with the basics and dovetails into Ziti's Enrollment system.


Zero Trust

This entire series assumes some familiarity with Zero Trust. If you do not have a strong background in it, that is fine. This section should give the reader enough context to make use of the entire series. If a more in-depth understanding is desired, please consider reading Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth.

Zero Trust is a security model that requires strict identity authentication and access verification on every connection at all times. It sets the tone for a system's security to say, "this system shall never assume the identity or access of any connection." Before Zero Trust security models, IT infrastructures were set up as a series of security perimeters. Think of it as a castle with walls and moats. The castle would have a set number of entry points with guards. Once past the guards and inside the castle, any visitors were trusted and had access to the castle. In the real world, passing the guards is analogous to authenticating with a machine or, at worst, connecting to the office network via WiFi or an ethernet cable.

Zero Trust does away with the concept of having a central castle that assumes anyone inside is trusted. It assumes that the castle has already been breached. That is to say, we expect attackers to already be inside the network and for it to be a hostile environment. Any resources inside the network should be treated as being publicly available on the internet and must be defended. To accomplish this defense, a series of Zero Trust pillars are defined:

  • Never Trust, Verify - the mere existence of a connection should not grant access
  • Authenticate Before Connect - authentication should happen before resources are connected to
  • Least Privileged Access - access should only grant connectivity to the minimum number of resources

Implementing those pillars is not a simple tweak to existing infrastructure. The first point alone will have much of this series dedicated to it.

Ziti & Zero Trust

In a Zero Trust model, there needs to exist mechanisms to verify identities such that trust can be granted. Zero Trust does not mean there is no trust. Zero Trust means that trust is given only after verification. Even then, that trust is limited to accessing the minimum network resources necessary. To accomplish this, we need a network that can force all connections through the following process.

  1. Authenticate
  2. Request Access To A Resource
  3. Connect To The Requested Resource

This process is not the typical connection order on a network. Most connections on a network are made in the reverse order. At first, this may seem counter-intuitive. To help make Zero Trust and bootstrapping trust a bit clearer, it helps to have a concrete system to use as an example. It just so happens that the Ziti software system makes a great example!

Ziti System

In Ziti, all of the above steps require interacting with a Ziti Controller. The Ziti Controller manages the Ziti overlay network by maintaining a list of known network services, SDK clients, routers, enrollments, policies, and much more! All of these pieces work together to create a Ziti Network. A Ziti Network is an overlay network - meaning it creates a virtual network on top of a concrete network. The concrete network may be the internet, a university network, or your own home network. Whatever it is, it is referred to as the underlay network.

In the Ziti Network, all network resources are modeled as services in the Ziti Controller. All services on a Ziti Network should only be accessible via the Ziti Network for maximum effect. Network services can be made available via a Ziti Network in a variety of manners. The preferred method is embedding the Ziti SDK inside of applications and servers as it provides the highest degree of Zero Trust security. However, it is also possible to configure various overlay-to-underlay connections to existing network services via "router termination" or a particular type of application with the Ziti SDK embedded in it that specifically deals with underlay-to-overlay translations (i.e. Ziti Desktop Edge/Mobile Edge).

The Ziti Controller also knows about one or more Ziti Routers that form a mesh network that can create dynamic circuits amongst themselves. Routers use circuits to move data across the Ziti Network. Routers can be configured to allow data to enter and exit the mesh. The most common entry/exit points are Ziti SDKs acting as clients or servers.

Network clients wishing to attach to the network use the Ziti SDK to first authenticate with the Ziti Controller. During authentication, the Ziti SDK client and Ziti Controller will verify each other. Upon successful authentication, the Ziti Controller can provide a list of available services to dial (connect) or to bind (host) for the authenticated SDK Client. The client can then request to dial or bind a service. If fulfilled, a session is associated with the client and service. This new session is propagated to the necessary Ziti Routers, and the required circuits are created. The client is returned a list of Ziti Routers which can be connected to in order to complete the last mile of communication between the Ziti overlay network and the SDK client.

This set of steps covers the pillars of the Zero Trust model! The Ziti Controller and SDK Clients verify each other. The client cannot connect to network resources or services until it authenticates. After authentication, a client is given the least privilege access allowed by only being told about and only being able to dial/bind the authenticated identity's assigned services. It is a Zero Trust overlay network!

How did this system come into existence? How do the Ziti SDK client and Ziti Controller verify each other? How do the routers and controller know to validate each other? How is this managed at scale with hundreds of Ziti Routers and thousands of Ziti SDK clients? It seems that this is a recursive problem. To terminate the recursion, we have to start our system with a well-defined and carefully controlled seed of trust.

Trust

In software systems that require network connectivity, there are at least two parties in the system. Generally, there are more, and in the case of a Ziti network, there could be thousands. Between two parties, each time a connection is made, a trust decision is made. Should this connection be allowed? Mechanisms must be put into place to verify the identity of the connecting party if that question is to be answered.

One mechanism that might jump out at the reader is a password or secret. In Ziti, it would be possible to configure the Controller, Routers, and SDK Clients with a secret. Software is easy to deploy with a secret. Throw it into a configuration file, point the software at it, and off you go!

It is also fundamentally weak: that one secret is all an attacker needs to compromise the entire system. In Ziti, this would mean giving the secret to network clients that may or may not be owned by the network operator. Also, shared secrets do not individually identify each component, nor do they define how secrets will power other security concerns, like encryption.

The solution can be improved. Secrets could be generated per software component. The controller, each router, and each SDK client could have a unique secret. This secret would then individually identify each component! It is a significant improvement, but how does each component verify connections? Do they challenge for the incoming connection's secret and compare it to a list? That means that a pair of systems that need to connect must have each other's secrets. Secret sharing will not do! We cannot be copying secrets between every machine. One compromised machine would mean that many secrets are revealed!

This solution can be evolved and improved, but we do not have to do that hard work! If we did, we would end up recreating an existing technology. That technology is [public-key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography), and it provides everything we need.

Public-key cryptography allows each device to have a unique, secret, private key that proves its unique identity. That private key is mathematically tied to a public key. The public key can be used to encrypt messages that only the private key holder can decrypt. Also, the public key cannot be used to derive the original private key. This functionality fits perfectly with what our distributed system needs! Alas, public-key cryptography introduces complex behaviors, setup, and management. In the next article, we will dive a little deeper into this topic. For now, let us take it on faith that it will serve our needs well.

Setting It Up

So we have decided that public-key cryptography is the answer. What does that mean? What do I have to do? Let us explore what would need to be done by a human or a piece of software automating this process. Don't worry if you don't get all of this; the gist is all you need for now. Later articles will expand upon this terminology. In fact, after reading the later articles, consider revisiting this part.

Consider the following diagram of a "mesh" distributed system. This mesh could be any type of system such as a mesh of Ziti Routers, or maybe it is a system of sensors on an airplane. What they do does not matter. What matters is that this system has multiple pieces of software connecting amongst themselves. Consider what it means to accomplish this using public-key cryptography.

Mesh

In the diagram above, each system needs:

  • a key pair for client and server connections
  • to have the public keys of each system it is connecting to

So what do we need to do? Drop into a CLI and start generating keys on each machine. Do that by using these commands:

openssl ecparam -name secp256k1 -genkey -param_enc explicit -out private-key.pem
openssl req -new -x509 -key private-key.pem -out server.pem -days 360

Voila - you now have a self-signed certificate! What is a self-signed certificate? For now, let us understand it means that no other system has expressed trust in your public certificate. In Part 4: Certificate Authorities & Chains Of Trust we will cover them in more detail.
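The same two steps can be sketched in Go using only the standard library, for readers who prefer code to CLI flags. This is a minimal sketch, not how any particular product does it; the subject name "mesh-node-1" is an invented placeholder.

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
)

func main() {
    // Generate a private key (the equivalent of the openssl ecparam step).
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }

    // Describe the certificate: subject, validity period, and so on.
    template := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "mesh-node-1"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(0, 0, 360), // mirrors "-days 360"
    }

    // Self-signed: the template is its own parent and is signed by its own key.
    der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}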

You can repeat the above process for every piece of software in your mesh network. Preferably, you log into each machine and generate the private key there. Moving private keys on and off devices is a security risk and frowned upon. For maximum security, hardware, such as Hardware Security Modules (HSMs) and Trusted Platform Modules (TPMs), can be used to store the private keys in a manner that does not make them directly accessible.

Then you will need to copy each public certificate to every other machine and configure your software so that it trusts that certificate. This process must be repeated any time a piece of software is added to the system. If a machine is compromised, the corresponding public certificate will need to be untrusted on every node in the mesh. Adding or removing trust in a public certificate involves configuring software or operating systems. There are many ways it can be implemented, including configuration files, files stored in specific directories, and even configuration tools such as the Windows Certificate Manager snap-in.

This is a lot of careful work to get a simple system running. Consider what this means when adding or removing many nodes: visiting each machine and reconfiguring it each time is quite a bit of overhead. There is a solution to these woes. While it is elegant on its own, it does add complexity. Let us see how Certificate Authorities (CAs) can help! In the next section, we will hit the highlights of CAs. For more detail, look forward to Part 4: Certificate Authorities & Chains Of Trust.

CAs & Adding Complexity

A CA enables trust deferral from multiple individual certificates to a single certificate. Instead of trusting each certificate, each piece of software will trust the CA. The CA will be used to sign every public certificate our software pieces need to use. How does "signing" work? We will cover that in part three, and why it matters in part four. For now, the basics will be provided.

Here are the high-level steps of using a CA:

  1. create a CA configuration via OpenSSL CNF files
  2. create the CA
  3. use the CA's private key to sign all of the public certificates
  4. distribute the CA's certificate to every machine
  5. configure the machine's certificate store or configure the software

For items one and two, the process can be a bit mystical. There is a multitude of options involved in managing a CA. To perform number three, you will need to go through the process of creating certificate signing requests (CSRs, see part three for more detail) on behalf of the piece of software, and someone or something will have to play the role of the CA and resolve the CSRs. The last two steps will depend on the operating system and software being used.
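To make step three a little less mystical, here is a rough Go sketch of what a CA does when it fulfills a CSR. It is illustrative only; a real CA also enforces policy, tracks serial numbers, and records everything it issues.

package ca

import (
    "crypto"
    "crypto/rand"
    "crypto/x509"
    "encoding/pem"
    "errors"
    "math/big"
    "time"
)

// signCSR plays the role of the CA: caCert and caKey are the CA's certificate
// and private key, and csrPEM is the requester's certificate signing request.
func signCSR(caCert *x509.Certificate, caKey crypto.Signer, csrPEM []byte) ([]byte, error) {
    block, _ := pem.Decode(csrPEM)
    if block == nil {
        return nil, errors.New("no PEM block found")
    }
    csr, err := x509.ParseCertificateRequest(block.Bytes)
    if err != nil {
        return nil, err
    }
    // Verify the CSR really came from the holder of its private key.
    if err := csr.CheckSignature(); err != nil {
        return nil, err
    }
    template := x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()), // toy serial; real CAs track these carefully
        Subject:      csr.Subject,
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(1, 0, 0),
    }
    // The new certificate is signed by the CA: parent is the CA's certificate,
    // and the CA's private key produces the signature.
    return x509.CreateCertificate(rand.Reader, &template, caCert, csr.PublicKey, caKey)
}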

All of these actions can be done via a CLI or programmatically. You will have to spend time and energy making sure the options are correctly set and learning about all the different capabilities and extensions. Mistakes will inevitably occur. It is time-consuming to debug why a specific public certificate is not working as intended. The tools and systems that use the certificates are purposely vague in error messages so as not to reveal too much information to attackers.

The payoff for using CAs is having the ability to create chains of trust. Chains of trust allow distributed systems to scale without having to reconfigure each node every time the system grows or shrinks. With a little more upfront cost and bookkeeping to run a CA, the system will greatly decrease the amount of configuration required on each device.

Further Concerns

Once configured, there are still other concerns that need to be taken into account. Consider the following list of events that may happen to a CA and its certificates:

  • What happens when a certificate expires?
  • How does a system know not to trust a certificate anymore?
  • What happens when private keys need to be regenerated?

CAs do not automatically handle the propagation of these types of events. CAs are files on a storage device or HSM. Issuing or revoking certificates does not generate any kind of event without additional software. There is also the issue of certificates expiring. That "-days 360", used in the example above, puts a lifetime on each certificate. The lifetime can be extended far into the future, but this is a bad practice. Limiting the life span of a certificate reduces attack windows and can be used as a trigger to adopt stronger encryption.

Even if we ignore all of those concerns, who did we trust to get this system set up? What was the seed of trust used to bootstrap trust? So far, you could have imagined that a human was doing all of this work. In that case, a human operator is trusted to properly configure all of the systems - trusting them with access to all of the private keys. The seed of trust is in that human. If this is a software system performing these actions, that means that the system has to be trusted and most likely has access to every other system coming online. That is workable, but what happens when your system can have external systems request to be added to the network? How can that be handled? How do you trust that system in the first place? Using a secret password creates a single, exploitable, weak point. Public-key cryptography could be put in place, but then we are in a chicken-and-egg scenario. We are putting public-key cryptography in place to automate public-key cryptography.

There are many caveats to bootstrapping trust. In a dynamic distributed system where pieces of software can come and go at the whim of network operators, the issues become a mountain of concerns. Thankfully in Ziti, a mechanism is provided that abstracts all of these issues. To understand how Ziti accomplishes this, we have a few more topics to discuss. In part two, we will chip away at those topics by covering public-key cryptography in more detail to understand its powers and applications.


Written By: Andrew Martinez
June 2020

· 8 min read
Andrew Martinez

Part 2: A Primer On Public-Key Cryptography

If you have read through the entire series up to here, welcome! If you have not, please consider reading the whole series.

It isn't easy to talk about bootstrapping trust without covering the basics of public-key cryptography. The reader may skip this article if the concepts of encryption, signing, and public/private keys are familiar. However, if not, I implore that you bear the brunt of this article as later parts will heavily rely on it.

If you wish, you can dive into the mathematics behind it to prove it to yourself, but I promise, no math here. When necessary, I will wave my hands at it, point into the distance, and let the reader journey out.

Keys

Keys are blobs of data containing rather large numbers. They can be stored anywhere data can be stored, but are commonly stored as files. A set of public and private keys is referred to as a "key set" or "key pair."

Within a key pair, there is only one private key and one public key. The two keys are mathematically entangled, given a particular function and its parameters. Today, those functions and parameters generally involve elliptic curves and form the basis of a "trapdoor function." Trapdoor functions are attractive to the cryptographically inclined for two main reasons:

  1. they make it easy to encrypt with one key of a key pair and decrypt with the other.
  2. one key cannot be derived from the other

Of the two keys, the private key is the most important. It must be kept tucked away from prying eyes and attackers. Some secure environments store the private key in hardware such as Hardware Security Modules (HSMs) or Trusted Platform Modules (TPMs). Mobile devices, such as laptops and smartphones, use hardware technology similar to TPMs. Apple has its Secure Enclave, and Android has its Keymaster Hardware Abstraction Layer. The goal of all of these pieces of hardware is to keep sensitive secrets (e.g., private keys) safe. The fact that an entire industry of embedded hardware has been developed to keep private keys safe should tip the reader off to how important they are.

As stated above, these two keys have some impressive capabilities. It is not possible to derive one from the other. This allows the public key to be handed out freely without compromising the private key. Also, both keys can generate encrypted data that only the other key can decrypt. More clearly:

  1. Anyone with the public key can encrypt data only the private key holder can decrypt
  2. Anyone with the public key can decrypt data from the private key holder

Number one can succinctly be called "Public Key Encryption" and number two "Private Key Encryption." This article explores the merits of both.

Public Key Encryption

From the list above, number one is what most people think of as "encryption." It is "secure" as it allows anyone with the widely available public key to send messages only the private key holder can read. This property ensures that communication from the public key holder to the private key holder is being read exclusively by the intended target.

There is quite a bit of pressure to keep the private key extremely safe. Whoever holds the private key has a guaranteed identity that is tied to and verifiable by the public key. It is verifiable because if one can use the public key to encrypt data, only the private key holder can decrypt it. This fact means that encrypted data can be sent to coordinate an additional secret. Since only the private key holder can decrypt the data to see this second-level secret, future communication can use the new secret to encrypt and verify traffic in both directions. This additional exchange is roughly how part of the TLS negotiation works for HTTPS. TLS, and by proxy HTTPS, uses other technologies and strategies to provide an incredible security proposition.
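As a concrete, if simplified, illustration, this Go sketch uses RSA-OAEP from the standard library. Real TLS key exchanges are more sophisticated than this, but the shape is the same: anyone can encrypt with the public key, and only the private key holder can read the result.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/sha256"
    "fmt"
)

func main() {
    // The private key stays with its holder; the public key is handed out freely.
    priv, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    // Anyone with the public key can encrypt...
    secret := []byte("a second-level secret for future traffic")
    ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, &priv.PublicKey, secret, nil)
    if err != nil {
        panic(err)
    }

    // ...but only the private key holder can decrypt.
    plaintext, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, ciphertext, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(plaintext))
}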

Private Key Encryption

For private key encryption, the same principles apply as with public key encryption with the roles reversed. The private key encrypts data only the public key can decrypt. On the surface, this seems absurd. When the server encrypts data with its private key, the public key can decrypt it. The public key is not protected and expected to be widely available. It seems as if private key encryption is nearly useless as everyone can read it!

Except it isn't. Private key encryption verifies the identity of the private key holder: the data could not have come from anyone else. This property allows us to generate encrypted data that could only have come from the private key holder. If that data happens to be small and describe another document, we call that a "digital signature" or "signature" for short.

Digital Signatures

Digital signatures are similar to handwritten ones used to sign legal documents and checkbooks, but with a significant advantage: they validate that a document has not been altered since it was signed. With today's computers' graphical abilities, the nefarious can forge images and handwritten signatures. That puts handwritten signatures at a significant disadvantage. So how does this work?

The data that will be signed can be anything. What it represents is not important. It can be text, JSON, an image, a PDF, or anything at all! That data is processed by a one-way cryptographic hashing algorithm, such as SHA-256. This process is deterministic, meaning running it repeatedly on the same data, parameters, and hashing algorithm gives the same result. The output of this process is a hash: a string of characters that uniquely identifies the input data. With sufficiently large input data, the hash is much shorter than the input, as the hash is usually fixed length.

For example, here is the Ziti logo:

Ziti

This logo's file can be hashed using SHA-256 via the sha256sum command commonly found on Linux.

$> sha256sum ziti.png
c3a6681cc81f9c0fa44b3e2921495882c55f0a86c54cd60ee0fdc7d200ad26db ziti.png

That long string "c3a...6db" is the hash of that file! The string is 64 characters long and is comprised of hex characters (a base-16 numbering system of 0-9 and a-f). Each character takes four bits to represent (2^4 = 16). Since there are 64 characters at 4 bits each, we have 64 x 4 = 256. This is where SHA-256 gets its name: SHA-256 is a fixed-length cryptographic hashing algorithm whose output is 256 bits in length.
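The same hash can be computed programmatically. A minimal Go sketch, assuming a local copy of ziti.png:

package main

import (
    "crypto/sha256"
    "fmt"
    "os"
)

func main() {
    // Read the file and hash it, just as sha256sum does.
    data, err := os.ReadFile("ziti.png")
    if err != nil {
        panic(err)
    }
    hash := sha256.Sum256(data) // a fixed-length [32]byte, i.e. 256 bits
    fmt.Printf("%x  ziti.png\n", hash)
}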

The hash itself is not encryption. It is "hashing." Hashing of this nature is not reversible, while encryption is. For cryptographic hashing, it is impracticable to find two different sets of data for which the same function produces the same hash. In essence, the hash uniquely represents the data: all of it! Changing even a single character would generate a different hash.

After hashing data or a document, the private key holder can encrypt the hash to generate a signature. This process provides the following truths when working with the signature:

  • the private key is the only key capable of producing its signature of the data's hash
  • the public key can validate the signature given the data and hashing algorithm used

Verifying a signature is a straightforward process (sketched in Go after this list):

  • Use the public key to decrypt the signature to reveal the original hash
  • Using the hashing algorithm that was initially applied to the data, recreate the hash independently
  • Compare the two hashes; if they are the same, the signature is valid
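Here is that round trip as a small Go sketch using ECDSA from the standard library. In practice the signer and verifier are different machines; they share a process here only to keep the example short.

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
)

func main() {
    priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }

    document := []byte("any data at all: text, JSON, an image...")

    // Signing: hash the data, then sign the hash with the private key.
    hash := sha256.Sum256(document)
    sig, err := ecdsa.SignASN1(rand.Reader, priv, hash[:])
    if err != nil {
        panic(err)
    }

    // Verifying: recreate the hash independently and check the signature
    // against it using the public key.
    recreated := sha256.Sum256(document)
    fmt.Println("valid:", ecdsa.VerifyASN1(&priv.PublicKey, recreated[:], sig))

    // Changing even a single byte produces a different hash, so the
    // signature no longer verifies.
    document[0] ^= 1
    tampered := sha256.Sum256(document)
    fmt.Println("tampered:", ecdsa.VerifyASN1(&priv.PublicKey, tampered[:], sig))
}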

Signing data is incredibly powerful. It allows a private key holder to state that data was approved by them and not altered. It is also publicly verifiable by anyone with the document, signature, and public key. This enables many decentralized approaches to sharing data whose source and content can be verified.

Bearer tokens are an example of the power of signatures. A bearer token is a document signed by a trusted authentication system that contains information about the client presenting it. Signing the token ensures that its content has not been changed and has been endorsed by a trusted system. An example of a bearer token is a JSON Web Token (JWT).

A JWT specifies the format of the bearer token as a header, payload, and signature using JSON. A client can then present a JWT to any system which can then verify that the contents are valid and from a trusted identity. As long as the signature is valid, the JWT can grant access to the client presenting it based on whatever information is inside the JWT.
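To make the shape of a JWT concrete, the following Go sketch splits a made-up token into its three segments and decodes the two JSON ones. It deliberately skips signature verification; a real consumer verifies the signature against the issuer's public key before trusting any claims.

package main

import (
    "encoding/base64"
    "fmt"
    "strings"
)

func main() {
    // A JWT is three base64url-encoded segments: header.payload.signature.
    // This token is fabricated for illustration, not issued by any real system.
    token := "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9." +
        "eyJzdWIiOiJuZXctaWRlbnRpdHkifQ." +
        "c2lnbmF0dXJlLWJ5dGVz"

    parts := strings.Split(token, ".")
    header, _ := base64.RawURLEncoding.DecodeString(parts[0])
    payload, _ := base64.RawURLEncoding.DecodeString(parts[1])

    fmt.Println("header: ", string(header))  // {"alg":"RS256","typ":"JWT"}
    fmt.Println("payload:", string(payload)) // {"sub":"new-identity"}
    // The third segment is the signature over `header.payload`; verifying it
    // proves the token came from the trusted issuer and was not altered.
}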

Closing

This article should have shed light on public-key cryptography by explaining the roles of the public and private keys. It should have also provided a glimpse at the power of encryption and digital signatures. In part three we will see how key pairs can be combined with certificates!


Written By: Andrew Martinez
June 2020

· 7 min read
Andrew Martinez

Part 3: Certificates

If you have read through the entire series up to here, welcome! If you have not, please consider reading the whole series.

In the series, we have covered public-key cryptography, where we learned about public keys, private keys, and their uses for encryption and signing. Using keys to sign data will play an essential role in this article. It is vital that the reader understand that signatures verify both the content of the data and its source. For a refresher, see part two of this series.

This article covers how certificates and certificate authorities (CAs) work as "trust anchors." When a CA is a trust anchor, it means that a system can trust the CA to sign certificates that it can, in turn, trust. Throughout this entire article, "trusting certificates" is mentioned. Trusting a certificate (CA or otherwise) is a software or operating system configuration process. This configuration tells the system that the specified certificates are trustworthy in the eyes of the operator.

Certificates

Part two of this series covered keys, both public and private, but did not mention certificates. It is common to hear "certificate" used interchangeably with "public key" and, sometimes, "private key." A certificate must have the public key inside of it. Some storage formats allow certificates to be stored along with the matching private key. One example of this is PFX files. PFX files, which are PKCS#12 archives, are also sometimes generically referred to as "certificates." In this article, "certificate" will always mean an x509 certificate that contains only the public key.

Certificates are a simple concept, but years of expansions and extensions have added to them, and the nitty-gritty details can be daunting to the uninitiated. This article will strive to sit above that detail. If you venture into the realm of generating certificates using OpenSSL and its configuration files, it can be a cumbersome experience to wade through. There are many great articles and tutorials available to get you started.

For this article, the word "certificate" will mean an "x509 Certificate." x509 is a public standard and is the de-facto standard for software systems dealing with public-key cryptography. There are other formats, but they are usually environment-specific, such as Card Verifiable Certificates. x509 is good enough for general-purpose use on most systems.

So, what is a certificate? It is yet another blob of data that is specially formatted. It can be stored anywhere data can be stored but is usually a file. For this conversation, we will focus on the following subset of information that a certificate contains:

  • Subject information
    • A public key
    • Distinguished Name
  • Issuer Information
  • Validity Period
  • Usage/Extensions
  • Signatures

Subject Information

Certificates contain more than keys. The Distinguished Name (DN) is a set of text fields, useful mainly to humans to know what/who owns a certificate. It is sometimes used by software as display information or for comparison checks. Since humans provide the DN values or configure software with them, it is not always distinguishing. DN values have an alternate name: relative distinguished names.

Related to the Subject DN is the Issuer Information. The Issuer Information is the subject of the certificate that issued this certificate. Because of this, the issuer information has the same kinds of values as the subject DN. Both can include the following information:

  • CN - common name - a name
  • SN - surname
  • SERIALNUMBER - a number that is usually unique per certificate issuer, but not always
  • C - country
  • L - locality name
  • ST or S - state or province
  • STREET - street address
  • O - organization name
  • OU - organizational unit
  • T - title
  • G or GN - given name
  • E - email address
  • UID - user id
  • DC - domain component

Do not worry about memorizing that list. Simply knowing they exist and that they may or may not matter is good enough for now. If the reader is wondering when they might matter, well, that is generally when the system you are using complains about them.

Validity Period

The Validity Period specifies the two points in time between which the certificate is valid. Before and after this window of time, the certificate is invalid and should not be trusted. Validity periods should be as small as possible while still fitting their use case. Shorter periods reduce the window of time in which a compromised private key remains useful for an attack. The cost is the overhead of reissuing certificates as they reach the end of their validity period.
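Programmatically, the validity period is just two timestamps on the certificate. A minimal Go sketch, assuming a PEM-encoded certificate in a file named server.pem:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    pemBytes, err := os.ReadFile("server.pem")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        panic("no PEM block found in server.pem")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }

    // The validity period is the window between these two points in time.
    fmt.Println("not before:", cert.NotBefore)
    fmt.Println("not after: ", cert.NotAfter)

    now := time.Now()
    if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
        fmt.Println("certificate is outside its validity period")
    }
}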

Usage/Extensions

Usage/Extension Data is interesting because it can limit what roles a certificate fulfills. Depending on the system, this may be adhered to or not. Some examples of usage that are common to see:

  • key usage: client authentication, server authentication, signatures, etc.
  • Subject Alternate Names (SANs)
    • Limits what IP address, email address, domain name, etc. the certificate can be associated with
  • Certificate Authority (CA) flag
  • and more

This series will not dive into the details of these usages. However, it is essential to be aware of them and that they can affect the roles a certificate can fulfill.

Signatures

The signature section of a certificate is a list of signatures from other entities that trust this certificate. A certificate that signs itself is a "self-signed certificate." Self-signed certificates must be individually trusted, as no other certificate has expressed trust in them by signing them. Self-signed certificates are sometimes used for testing purposes as they are easy to create. They are also used as Root Certificate Authorities (root CAs).

Each signature on a certificate is the result of taking the contents of the certificate (without signatures), one-way hashing it, and then encrypting the hash with the signator's private key. The result is appended to the end of the signature list. During this process, the public certificate moves between systems to be signed.

The movement of the public certificate between systems is facilitated by Certificate Signing Requests (CSRs). CSRs can be transmitted electronically as files or as a data stream to the signer. CSRs contain only the public information of a certificate and a signature from the certificate's private key. Since CSRs only contain public information, they are not considered sensitive. The signature in a CSR allows the signer to verify that the CSR is from the subject specified in the CSR. If the signature is valid, the signator processes the CSR, and the result is a newly minted certificate with an additional signature.
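Here is what generating a CSR looks like in Go, as a rough sketch. Note what travels and what does not: the CSR is written out for the signer, while the private key never leaves the machine. The subject name is a placeholder.

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "os"
)

func main() {
    // The requester's private key is generated locally and stays here.
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }

    // The CSR carries only public information about the subject, and it is
    // signed with the requester's private key so the signer can verify it.
    template := x509.CertificateRequest{
        Subject: pkix.Name{CommonName: "mesh-node-1"}, // hypothetical subject
    }
    der, err := x509.CreateCertificateRequest(rand.Reader, &template, key)
    if err != nil {
        panic(err)
    }

    // This PEM blob is what moves between systems; the key stays behind.
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
}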

Conclusion

Certificates are keys, usually public ones, with additional metadata that adds conventions and restrictions around certificate usage. They provide a place for signatures to reside and, through CSRs, provide a vehicle to request additional signatures. Certificates are useful because they package all of these concerns into a neat single file. In part four, we will explore how to create a formidable chain of trust by linking multiple certificates together.


Written By: Andrew Martinez
June 2020

· 8 min read
Andrew Martinez

Part 4: Certificate Authorities & Chains of Trust

If you have read through the entire series up to here, welcome! If you have not, please consider reading the whole series.

This article makes implicit heavy use of part 2 and part 3 of this series.

Root & Intermediate Certificate Authorities (CAs)

Not all certificates are the same! Certificates have different capabilities depending on their usage attributes and extensions. The previous article in this series mentioned a few of those attributes and extensions. Two of those were clientAuth, for client certificates, and serverAuth, for server certificates, which play an essential role in how a certificate is used during network authentication negotiations. These roles are crucial, as they are a contract for what attributes and extensions should be included in the certificate to make it useful. For example, a server certificate usually finds it useful to include Subject Alternate Names (SANs). A SAN can be used to tie a certificate to a specific domain name (like ziti.dev). However, a client certificate will not have use for those same fields.

The roles of certificates and the attributes/extensions they have are not always strictly followed. Some systems, such as web browsers, require SANs on a server certificate. That wasn't always the case. Before that, the Common Name field in the subject information contained the domain name. Some systems still rely on that convention.

Another type of certificate is a Certificate Authority (CA) certificate. A CA is a key pair with a certificate that has a unique purpose: to sign other certificates. CA certificates have a special CA flag set to "true." This flag alone does not grant the CA certificate any power, but if a system trusts that CA, it then allows that system to trust any certificate that CA has signed. As mentioned in previous parts of this series, trusting a CA is a software or operating system configuration process. This process can be done in multiple ways depending on the system: adding it to a store, a specific folder, or adding lines to a configuration file.

Your operating system, right now, has its own set of trusted CAs. Most operating systems come with a default list installed and maintained by your OS developer. Over time this list is added to and removed from as trust is granted or withdrawn. Some pieces of software come with a list of CAs that are used instead of or in addition to the OS's CAs. The power of a CA comes not by its creation but by it being trusted.

CAs come in two flavors: Root CAs and Intermediate CAs. Root CAs are the egg or the chicken (depending on your viewpoint) of the CA trust chicken-and-egg problem. Trust for CAs has to start somewhere, and with CAs, it is the Root CA. A Root CA can sign certificates that are themselves CAs as well. Those certificates represent Intermediate CAs. Layering CAs, starting with a root and adding intermediates along the way, allows the private key for the Root CA to be kept in a highly secure environment that is not convenient to use for signing. This security means that the Root CA has a far less likely chance of having its private key compromised. Intermediate CAs are put into less secure environments and, if compromised, can be revoked. Trust is usually put into the Root CA, which, since it was not compromised, can remain trusted. Compromised intermediate CAs can be blacklisted.

Running a public CA is serious business if you wish to be publicly trusted. The organizations running a CA have to have strict protocols that verify the security and safe handling of the CAs private keys. If the private key is compromised, it can be used to sign other certificates for malicious intents. Any system that trusted the compromised CA will now trust any maliciously signed certificates. This will compromise all certificates signed by that CA.

Public CAs are maintained by organizations such as DigiCert, Let's Encrypt, and others. Anyone can create private CAs. The only difference is that the number of systems that trust a private CA is much smaller than that of a public one. CAs are a cornerstone of bootstrapping trust. Trusting the proper CAs can grant trust to a large number of systems.

Chains of Trust & PKIs

Part three of this series introduced the idea that a certificate either signs itself or is signed by another certificate. Certificates are usually signed via Certificate Signing Requests (CSRs). A certificate that signs itself is called a "self-signed certificate" and, if the CA flag is also set to true, is a root CA. A root CA can sign other certificates that also have the CA flag set to true. Those certificates are intermediate CAs. Any CA, root or intermediate, that fulfills a CSR and signs the enclosed certificate generates a non-CA certificate as long as the CA flag is false. These certificates are "leaf certificates."

The term Public Key Infrastructure (PKI) is used to describe all of the outputs that are generated when a CA is created. That includes the root, intermediates, and leaf certificates. It also optionally includes all of the systems, processes, procedures, and data used to manage them. For the purpose of this article, and simplicity, let us stick to the certificates only.

Consider the following PKI setup:

  • Two root CAs:
    • Root A
    • Root B
  • The root CAs each sign an intermediate CA via CSR:
    • Intermediate A
    • Intermediate B
  • A server wishes to have Root A's trust extended to it:
    • A key pair is generated
    • A CSR is created and submitted to Intermediate A to sign
    • The CSR is fulfilled: Server Cert A is created and signed by Intermediate A

Visually this would appear as follows:

Chains

This PKI has two chains of trust: Chain A and Chain B. They are called chains because the signatures link the certificates together. Root A has signed Intermediate A's certificate and Intermediate A has signed Server A's certificate. Programmatically we can traverse these signatures and verify them using the public certificates of each signatory. Trusting Root A will trust Server A.
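This traversal is exactly what certificate libraries automate. A Go sketch of verifying Chain A, given PEM-encoded copies of the certificates:

package pki

import (
    "crypto/x509"
    "fmt"
)

// verifyChainA walks the signatures described above: serverA is trusted if a
// signature path leads from it, through intermediateA, back to rootA.
func verifyChainA(serverA *x509.Certificate, intermediateA, rootA []byte) error {
    roots := x509.NewCertPool()
    if !roots.AppendCertsFromPEM(rootA) {
        return fmt.Errorf("could not parse Root A")
    }
    intermediates := x509.NewCertPool()
    if !intermediates.AppendCertsFromPEM(intermediateA) {
        return fmt.Errorf("could not parse Intermediate A")
    }

    // Verify traverses: Server Cert A -> Intermediate A -> Root A. Swapping in
    // Root B as the trust anchor would fail, since no signature links the chains.
    _, err := serverA.Verify(x509.VerifyOptions{
        Roots:         roots,
        Intermediates: intermediates,
    })
    return err
}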

The second chain, Chain B, does not sign any of the certificates on Chain A. As expected, trusting either of the CAs from Chain B does not grant any trust to the certificates on Chain A. Chain B highlights the fact that any system may have multiple chains of trust that do not interact in any fashion.

Returning to Chain A, trusting Intermediate A designates it as a "trust anchor." Any certificate can be a trust anchor. The certificate used as a trust anchor determines which certificates will additionally be trusted. A leaf certificate as a trust anchor trusts only that one certificate. Trusting a CA trusts all certificates that it has signed itself or through any of its intermediates. In the diagram above, trust only flows downward.

  • Trusting Server Cert A will only trust that one server certificate
  • Trusting Intermediate A will trust Server Cert A and any other certificate it signs
  • Trusting Root A will trust Intermediate A and Server Cert A, any other certificate Root A signs (intermediate CA or not) and, in turn, any of the certificates they sign

Trusting a CA that has signed many certificates allows public certificate trust to scale. This is how trust scales for web traffic. Companies like DigiCert, IdenTrust, GoDaddy, etc. have their root CA or one of their large intermediate CAs trusted. Those CAs sign certificates for websites. All of our devices trust those website certificates because the CA has signed them.

Distributed Systems & CAs

The goal for any private distributed system should be to have certificates verified on both sides: clients verify servers and vice versa. This behavior is a tenet of Zero Trust - do not trust, verify. Verification should be done on every connection before any data exchange. Over TLS, which secures HTTPS, this would be "mutual TLS" or "mTLS." Most public websites do not require mTLS. Instead, they use TLS with the client validating the server. For public web traffic, the server wishes to be trusted widely. The reverse is not necessary. If it is, websites use an additional form of authentication, like usernames and passwords, to verify the client's identity. Public-key cryptography is a stronger authentication mechanism, but it is also difficult for the general public to set up, manage, and maintain.
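In Go, the difference between plain TLS and mTLS on the server side comes down to a couple of fields on the TLS configuration. A hedged sketch, with the file names invented for illustration:

package mtls

import (
    "crypto/tls"
    "crypto/x509"
)

// serverConfig sketches the server half of mutual TLS: the server presents its
// own certificate and requires every client to present one signed by a trusted CA.
func serverConfig(caPEM []byte) (*tls.Config, error) {
    cert, err := tls.LoadX509KeyPair("server.pem", "server-key.pem")
    if err != nil {
        return nil, err
    }
    clientCAs := x509.NewCertPool()
    clientCAs.AppendCertsFromPEM(caPEM)

    return &tls.Config{
        Certificates: []tls.Certificate{cert},
        // Reject any client that cannot present a verifiable certificate:
        // verify on every connection, before any data is exchanged.
        ClientAuth: tls.RequireAndVerifyClientCert,
        ClientCAs:  clientCAs,
    }, nil
}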

The same is true for distributed systems. Most don't secure anything at all or only verify servers. That is inherently insecure and can cause issues depending on the setup of the system. Ziti is a distributed system that abstracts away this security setup for both its internal routers and client SDKs. This setup allows application-specific networking with strong identity verification, powerful policy management, flexible mesh routing, and more. The goal of this series is to focus on bootstrapping trust, so in the last article we will come full circle and see how all of this relates to bootstrapping trust for Zero Trust networks.


Written By: Andrew Martinez
June 2020

· 6 min read
Andrew Martinez

Part 5: Bootstrapping Trust

If you have read through the entire series up to here, welcome! If you have not, please consider reading the whole series.

Ziti

In this series of articles, we are exploring bootstrapping trust, what that means, and how it enables Zero Trust security methodologies. Ziti provides a method to bootstrap trust via its enrollment process. For Ziti, the enrollment process is bootstrapping trust. This trust must be in place as all connections in Ziti require verification. All identities in Ziti have a key pair that identifies that individual. The enrollment process abstracts the steps of setting up keys, certificates, CSRs, CAs, and deploying them to the proper locations. In addition, the Ziti SDKs can be embedded within any application and enroll with a Ziti network in the exact same fashion to bootstrap trust as part of Ziti's Zero Trust model.

Ziti has a concept called the "Edge." The Edge is a set of software features that sit on top of the "Fabric." The Fabric is the core of each Ziti component, and it provides long haul mesh routing while the Edge focuses on enrolling Ziti components, managing access via policies, and maintaining the trust necessary to provide the foundation of a Zero Trust network without the hassle of setting it up yourself. Together they are a powerful combination of optimized long haul routing and trust management.

Fabric Edge

A small scale example Ziti system appears as follows:

Ziti System

Ziti Edge has the concepts of identities for endpoint SDKs and routers. Both require certificates signed by a trusted CA. Ziti can generate the PKI necessary to manage that trust. The PKI and its CAs will form the backbone of the trust system that Ziti will deploy for you. In the system diagram above, the Ziti Controller will manage an intermediate CA and a secure enrollment process that will bootstrap trust for each router and SDK. After bootstrapping trust, the controller will maintain data to manage the entire life cycle of the certificates it generates. This life cycle encompasses all the concerns from part one of this series, including bootstrapping, revoking, renewing, and rotating keys.

So let us review the components a Ziti Controller must have to function:

  1. A CA (intermediate preferred)
  2. A server certificate generated for the Controller's IP/hostname/etc., signed by the CA or a public CA
  3. A Ziti Controller configured and ready to run

This article series has touched on items one and two, but not three. For information on how to configure a Ziti Controller, refer to the Ziti documentation repository on GitHub. You will also find details on how to use the Ziti CLI to generate the PKI necessary to start a Ziti network. However, here are a couple of simple commands that will help get the controller started.

ziti pki create ca test1
ziti pki create server --dns myserver.com

Enrollment

Once a Ziti Controller is up and running, it is possible to create a new identity and enroll it. Behind the scenes, many things happen, but for now, let us focus on what an administrator would have to perform.

  1. Authenticate via the Ziti CLI, Ziti Admin Console (ZAC), or Edge REST API
  2. Issue a request to create a new identity for an SDK or router
  3. Receive an enrollment JWT
  4. Use the JWT on the enrolling device/server to enroll

In those steps, we have performed many complex interactions.

  • The enrolling identity:
    • validated the enrollment JWT cryptographically
    • validated the Ziti Controller as a suitable trust anchor cryptographically
    • bootstrapped its trust pool of CAs as additional trust anchors over a secure connection
    • generated a key pair
    • generated a CSR
  • The controller has:
    • asserted its identity cryptographically
    • asserted the validity of the enrolling identity
    • provided a CA store of trust anchors
    • fulfilled the CSR request for the identity

All of these items are performed making no assumptions and securely verifying each step. This process does not suffer from man-in-the-middle attacks. It provides many benefits! Below is a detailed image of each step of the enrollment process.

Enrollment Full

Let's break those steps down:

  1. Via the Ziti CLI, ZAC, or Edge REST API the admin authenticates and requests to create an identity
  2. The admin receives a JWT that is signed by the controller and is cryptographically verifiable. The JWT contains all the information for the enrolling device/server to contact the controller and verify its identity. It also includes a secret enrollment token.
  3. The JWT is given to the enrolling device
  4. The device parses the JWT, verifies all the information is present to enroll
  5. The device retrieves the public certificate from the controller at the address specified in the JWT
  6. The device confirms that the server is, in fact, the owner of the private key for that certificate
  7. The device uses the retrieved certificate to verify the signature on the JWT
  8. Verifies content has not changed
  9. Verifies the issuing server is the server it is communicating with
  10. Makes a secure connection to the server and requests the CAs to trust
  11. The enrolling identity generates a key pair, if necessary, and a CSR. The CSR is submitted in a request with the JWT's enrollment token.
  12. The controller verifies the CSR, verifies the enrollment token, verifies the client connection, and then returns the necessary signed certificates.

At the end of the process, which took four simple human steps but numerous cryptographically secure software steps, the controller now has a record of the certificates issued to a specific identity. That identity now has certificates that can be used to make connections to other enrolled Ziti components. All components in the system can verify the identity of any other Ziti component. At every step, every link is verified. No individual piece of software blindly trusts any other for inbound or outbound connections. Trust has been successfully bootstrapped! Now we enter a maintenance window where trust has to be verified continuously and maintained. The enrolled identity can now interact with the Ziti Controller to function either as a Ziti Router or as a Zero Trust network client.

Conclusion

Thank you for reading this far! If you completed the entire series, I hope it has been helpful. Zero Trust is a complicated topic, and it requires a serious foundation in bootstrapping trust to get right. Hopefully, this series starts you on your way. If you have time, please check out [Ziti](https://github.com/openziti)! It is the Zero Trust network overlay solution that I have personally worked on and the inspiration for this series.


Written By: Andrew Martinez
June 2020

· 10 min read
Paul Lorenz

Introduction

As we (the OpenZiti team) have progressed on our Go journey, we've stumbled on various obstacles, settled on some best practices, and hopefully gotten better at writing Go code. This document is meant to share some of the 'Aha!' moments where we overcame stumbling blocks and found solutions that sparked joy. It is intended both for new team members and for anyone in the Go community who might be interested. We'd be very happy to hear from others about their own 'aha' moments and about how the solutions presented here strike your sensibilities.

Channels

Channels are a core feature of Go. As is typical of Go, the channel API is small and simple, but provides a lot of power.

If you haven't read it yet, Dave Cheney's Channel Axioms (https://dave.cheney.net/2014/03/19/channel-axioms) is worth a look.

Closing Channels

Closing channels can be complicated. On the reader side, things are generally uncomplicated: a read from a closed channel returns immediately with the zero value and a flag indicating that the channel is closed.

package main

import "fmt"

func main() {
    ch := make(chan interface{}, 1)
    ch <- "hello"
    val, ok := <-ch
    fmt.Printf("%v, %v\n", val, ok) // prints hello, true
    close(ch)
    val, ok = <-ch
    fmt.Printf("%v, %v\n", val, ok) // prints <nil>, false
}

On the writer side, things can be more complicated. If you have only a single writer, it can be responsible for closing the channel, which notifies any blocked readers that the channel is closed. However, if there are multiple writers, this won't work: writing to a closed channel causes a panic, and so does closing an already closed channel. So, what do we do?

The main thing is to realize that we don't have to close the channel. We only have to make sure the readers and writers are safely notified that they should stop trying to use the channel. For this, we can use a second channel.

package main

import (
    "github.com/openziti/foundation/util/concurrenz"
    "github.com/pkg/errors"
)

type Queue struct {
    ch          chan int
    closeNotify chan struct{}
    closed      concurrenz.AtomicBoolean
}

// NewQueue initializes both channels so Push and Pop don't block forever on nil channels
func NewQueue(size int) *Queue {
    return &Queue{
        ch:          make(chan int, size),
        closeNotify: make(chan struct{}),
    }
}

func (self *Queue) Push(val int) error {
    select {
    case self.ch <- val:
        return nil
    case <-self.closeNotify:
        return errors.New("queue closed")
    }
}

func (self *Queue) Pop() (int, error) {
    select {
    case val := <-self.ch:
        return val, nil
    case <-self.closeNotify:
        return 0, errors.New("queue closed")
    }
}

func (self *Queue) Close() {
    if self.closed.CompareAndSwap(false, true) {
        close(self.closeNotify)
    }
}

If there are several entities which all need to shutdown together, they can even share a closeNotify channel.
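For example, a minimal sketch (the struct literals here are illustrative, not from the original post) of two queues wired to a single closeNotify channel:

closeNotify := make(chan struct{})
q1 := &Queue{ch: make(chan int, 16), closeNotify: closeNotify}
q2 := &Queue{ch: make(chan int, 16), closeNotify: closeNotify}

// One owner closes the shared channel exactly once; Push and Pop
// on both queues then return "queue closed".
close(closeNotify)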

A variation on this lets readers drain the channel once it's closed. Because select case evaluation is random, we may fail to read an available value from the channel once the close notify channel is closed. We can ensure that we return a value if one is available by modifying Pop() as follows:

func (self *Queue) Pop() (int, error) {
    select {
    case val := <-self.ch:
        return val, nil
    case <-self.closeNotify:
        select {
        case val := <-self.ch:
            return val, nil
        default:
            return 0, errors.New("queue closed")
        }
    }
}


Other Channel Uses

Let's look at how we can use channels in a few other ways.

Semaphores and Pools

Because channels have a sized buffer and well-defined blocking behavior, creating a semaphore implementation is very straightforward. We can create a channel with a buffer of the size we want our semaphore to have, then write to the channel to release the semaphore and read from it to acquire.

package concurrenz

import "time"

type Semaphore interface {
    Acquire()
    AcquireWithTimeout(t time.Duration) bool
    TryAcquire() bool
    Release() bool
}

func NewSemaphore(size int) Semaphore {
    result := &semaphoreImpl{
        c: make(chan struct{}, size),
    }
    // fill the channel so the semaphore starts with all permits available
    for result.Release() {
    }
    return result
}

type semaphoreImpl struct {
    c chan struct{}
}

func (self *semaphoreImpl) Acquire() {
    <-self.c
}

func (self *semaphoreImpl) AcquireWithTimeout(t time.Duration) bool {
    select {
    case <-self.c:
        return true
    case <-time.After(t):
        return false
    }
}

func (self *semaphoreImpl) TryAcquire() bool {
    select {
    case <-self.c:
        return true
    default:
        return false
    }
}

func (self *semaphoreImpl) Release() bool {
    select {
    case self.c <- struct{}{}:
        return true
    default:
        return false
    }
}

We could use mostly the same implementation for a resource pool. Instead of a channel of struct{}, we could have a channel of connections or buffers that are acquired and released.
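For instance, here is a minimal sketch of a buffer pool along those lines (the Pool type and its methods are illustrative, not from the original post):

package concurrenz

// Pool hands out pre-allocated buffers; the channel's buffer holds the idle ones.
type Pool struct {
    c chan []byte
}

func NewPool(size, bufLen int) *Pool {
    p := &Pool{c: make(chan []byte, size)}
    for i := 0; i < size; i++ {
        p.c <- make([]byte, bufLen)
    }
    return p
}

// Acquire blocks until a buffer is available.
func (self *Pool) Acquire() []byte {
    return <-self.c
}

// Release returns a buffer to the pool, reporting false if the pool is already full.
func (self *Pool) Release(buf []byte) bool {
    select {
    case self.c <- buf:
        return true
    default:
        return false
    }
}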

Signal

We can also use channels as signals. In this example we have something running periodically, but we want to be able to trigger it to run sooner. With a single-element channel, we can notify a goroutine. By using select with a default case, we ensure that the signalling code doesn't block and that the receiving side gets at most one signal per loop iteration.

package main

import (
    "fmt"
    "github.com/openziti/foundation/util/concurrenz"
    "time"
)

func NewWorker() *Worker {
    w := &Worker{
        signal: make(chan struct{}, 1),
    }
    go w.run()
    return w
}

type Worker struct {
    signal  chan struct{}
    stopped concurrenz.AtomicBoolean
}

func (self *Worker) run() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()

    for !self.stopped.Get() {
        select {
        case <-ticker.C:
            self.work()
        case <-self.signal:
            self.work()
        }
    }
}

func (self *Worker) work() {
    if !self.stopped.Get() {
        fmt.Println("working hard")
    }
}

func (self *Worker) RunNow() {
    select {
    case self.signal <- struct{}{}:
    default:
    }
}

Channel Loops and Event Handlers

We often have a loop processing inputs from one or more channels. Often we have a set of data we want to keep local to a single goroutine, so we don't have to use any synchronization or worry about CPU cache effects. We use channels to feed data to the goroutine and/or to trigger different kinds of processing. A for loop containing a select can service channels of different types. You can have a channel per type of work, or per type of data. Sometimes it can be convenient to consolidate things on a single channel, using an event API.

Here's a simple example where the processor is maintaining some cached data which can be updated externally. Presumably the processor would be doing something with the cached data, but we've left that out to focus on the pattern itself.

type Event interface {
    // events are passed the processor so they don't each have to include it
    Handle(*Processor)
}

type Processor struct {
    ch          chan Event
    closeNotify chan struct{}
    cache       map[string]string
}

// NewProcessor initializes the channels and cache and starts the event loop
// (constructor added for completeness; the buffer size is arbitrary)
func NewProcessor() *Processor {
    p := &Processor{
        ch:          make(chan Event, 16),
        closeNotify: make(chan struct{}),
        cache:       map[string]string{},
    }
    go p.run()
    return p
}

func (self *Processor) run() {
    for {
        select {
        case event := <-self.ch:
            event.Handle(self)
        case <-self.closeNotify:
            return
        }
    }
}

func (self *Processor) queueEvent(evt Event) {
    select {
    case self.ch <- evt:
    case <-self.closeNotify:
        return
    }
}

func (self *Processor) UpdateCache(k, v string) {
    self.queueEvent(&updateCache{key: k, value: v})
}

func (self *Processor) Invalidate(k string) {
    self.queueEvent(invalidate(k))
}

type updateCache struct {
    key   string
    value string
}

func (self *updateCache) Handle(p *Processor) {
    p.cache[self.key] = self.value
}

type invalidate string

func (self invalidate) Handle(p *Processor) {
    delete(p.cache, string(self))
}

Type Aliases

As we demonstrated in the previous example, we can define a new type based on an existing one and add methods to it, usually to satisfy some interface.

type invalidate string

func (self invalidate) Handle(p *Processor) {
    delete(p.cache, string(self))
}

This can be useful if we only have a single piece of data. Rather than wrapping it in a struct, we can just base a new type on it and add our own funcs.

The main downside to this approach is that you have to convert the data back to its underlying type inside your methods, which can lead to code that is less clear. See, for example, this method from an AtomicBoolean implementation:

type AtomicBoolean int32

func (ab *AtomicBoolean) Set(val bool) {
    // boolToInt (not shown here) maps true to 1 and false to 0
    atomic.StoreInt32((*int32)(ab), boolToInt(val))
}

Function Type Aliases

A Go feature which can surprise developers is the ability to define methods on function types. The Event API in the Processor example above could be extended as follows:

type Event interface {
    Handle(*Processor)
}

type EventF func(*Processor)

func (self EventF) Handle(p *Processor) {
    self(p)
}

The Invalidate code could now be written as:

func (self *Processor) Invalidate(k string) {
    self.queueEvent(EventF(func(processor *Processor) {
        delete(processor.cache, k)
    }))
}

The need for an EventF cast could be removed by adding a helper function.

func (self *Processor) queueEventF(evt EventF) {
    self.queueEvent(evt)
}

func (self *Processor) UpdateCache(k, v string) {
    self.queueEventF(func(processor *Processor) {
        processor.cache[k] = v
    })
}

I first encountered this style in Go's net/http library, where handlers can be defined as structs implementing Handler or as functions matching HandlerFunc. This is most useful when you may have both heavy implementations which carry a lot of state and very simple implementations which make more sense as a function.

The processor event channel could also be implemented in terms of pure functions, if all event implementations are lightweight.
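As a minimal sketch of that variant (this reworks the Processor above; it is not the original post's code), the channel itself can carry plain functions:

type Processor struct {
    ch          chan func(*Processor)
    closeNotify chan struct{}
    cache       map[string]string
}

func (self *Processor) run() {
    for {
        select {
        case f := <-self.ch:
            f(self)
        case <-self.closeNotify:
            return
        }
    }
}

func (self *Processor) Invalidate(k string) {
    select {
    case self.ch <- func(p *Processor) { delete(p.cache, k) }:
    case <-self.closeNotify:
    }
}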

Interfaces

A Go limitation that often trips people up is that packages cannot have circular dependencies. There are a few ways to work around this, but the most common is to introduce interfaces in the more independent of the two packages, as shown in the sketch below.
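As a minimal sketch (the package and type names are hypothetical, not from the original post): suppose package store needs to notify package server, while server already imports store. Instead of importing server, store declares an interface that server satisfies:

// file: store/store.go
package store

// Notifier lets store call back into its consumer without importing it.
type Notifier interface {
    Notify(event string)
}

type Store struct {
    notifier Notifier
}

func New(n Notifier) *Store {
    return &Store{notifier: n}
}

func (s *Store) Save(key string) {
    // ... persist key ...
    s.notifier.Notify("saved: " + key)
}

// file: server/server.go
package server

import "example.com/demo/store"

// Server imports store; store never imports server, breaking the cycle.
type Server struct {
    store *store.Store
}

func New() *Server {
    s := &Server{}
    s.store = store.New(s) // Server satisfies store.Notifier
    return s
}

func (s *Server) Notify(event string) {
    // react to store events
}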

Errors

In some situations, Go's error handling can be excessively verbose. Especially in cases where you're doing a series of I/O operations, your code can look something like this:

func WriteExample(w io.Writer) error {
    if _, err := w.Write([]byte("one")); err != nil {
        return err
    }
    if _, err := w.Write([]byte("two")); err != nil {
        return err
    }
    if _, err := w.Write([]byte("three")); err != nil {
        return err
    }
    if _, err := w.Write([]byte("four")); err != nil {
        return err
    }
    return nil
}

One way to clean this up is to wrap the error in the operation and only check it at the end.

type WriterErr struct {
    err error
    w   io.Writer
}

func (self *WriterErr) Write(s string) {
    if self.err == nil {
        _, self.err = self.w.Write([]byte(s))
    }
}

func (self *WriterErr) Error() error {
    return self.err
}

func WriteExample2(w io.Writer) error {
    writer := &WriterErr{w: w}
    writer.Write("one")
    writer.Write("two")
    writer.Write("three")
    writer.Write("four")
    return writer.Error()
}

See also Rob Pike's "Errors are values" post on the Go blog (https://go.dev/blog/errors-are-values), which explores this same pattern.

Note: this pattern could be viewed as an error monad implementation.

Gotchas

Loop Variables

As in many other languages, it's possible to get into trouble when capturing loop variables, both via pointer references and via closures.

The following snippet will print out world world, since the loop variable is a single variable reused across iterations, so both stored pointers end up pointing at it. (This was the behavior through Go 1.21; Go 1.22 changed loop variables to be per-iteration, which makes this snippet print hello world.)

package main

import "fmt"

func main() {
    var list []*string
    for _, v := range []string{"hello", "world"} {
        list = append(list, &v)
    }
    for _, v := range list {
        fmt.Printf("%v ", *v)
    }
    fmt.Println()
}

Similarly, the following will output second second (again, under pre-1.22 loop semantics):

package main

import (
    "fmt"
    "time"
)

func main() {
    for _, v := range []string{"first", "second"} {
        go func() {
            time.Sleep(100 * time.Millisecond)
            fmt.Printf("%v ", v)
        }()
    }
    time.Sleep(200 * time.Millisecond)
    fmt.Println()
}
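On versions where this bites, the usual fixes are to shadow the loop variable or to pass it as a parameter. Here is a minimal sketch of both:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Shadowing: the inner v is a fresh variable on each iteration.
    var list []*string
    for _, v := range []string{"hello", "world"} {
        v := v
        list = append(list, &v)
    }
    for _, v := range list {
        fmt.Printf("%v ", *v) // prints hello world
    }
    fmt.Println()

    // Passing as a parameter: each goroutine captures its own copy.
    for _, v := range []string{"first", "second"} {
        go func(v string) {
            time.Sleep(100 * time.Millisecond)
            fmt.Printf("%v ", v)
        }(v)
    }
    time.Sleep(200 * time.Millisecond)
    fmt.Println() // prints first second (ordering may vary)
}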

Common Deadlock Causes

Non-reentrant Mutexes

Unlike in some other languages, the mutexes provided in the sync package are non-reentrant. So if your code grabs a lock and ends up calling back into something which acquires the same lock, the goroutine will deadlock. Typically, if you have to call back in, you'd either need an indicator that the lock is already held, or you'd do the work in a new goroutine, depending on how independent the second access is.
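For example, a minimal sketch (the Registry type is hypothetical, not from the original post) of a goroutine deadlocking by re-acquiring a lock it already holds:

package main

import "sync"

type Registry struct {
    mu    sync.Mutex
    items map[string]string
}

func (r *Registry) Add(k, v string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.items[k] = v
}

func (r *Registry) AddDefaults() {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.Add("greeting", "hello") // blocks forever: mu is already held
}

func main() {
    r := &Registry{items: map[string]string{}}
    r.AddDefaults() // fatal error: all goroutines are asleep - deadlock!
}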

Channel Deadlocks

If you have a goroutine processing events from a channel, and handling an event submits a new event back onto the same channel, that submission can deadlock if the channel is unbuffered or its buffer is full.

Fixes include:

  • Running the next event in-line, if you can detect that you're already in the event processing context
  • Ensuring the channel's buffer is big enough that it will never block
  • Handing the new event submission off to a new goroutine (see the sketch below)
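
A minimal sketch of that last fix, assuming the Processor type from earlier (this helper is illustrative, not from the original post):

// queueEventAsync hands submission off to a new goroutine so the
// submitting goroutine never blocks on a full channel.
func (self *Processor) queueEventAsync(evt Event) {
    go func() {
        select {
        case self.ch <- evt:
        case <-self.closeNotify:
        }
    }()
}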

One benefit of keeping your channel buffers at zero is that you will detect these deadlocks very quickly. If you have a small buffer, the deadlock may not be caught until the system is under load.


  1. Suggested by Cameron Otts

· 2 min read
Clint Dovholuk
# ------------- start docker 
docker-compose up

# access the docker controller to create the necessary overlay
docker exec -it docker_ziti-controller_1 bash

# ------------- log into the ziti cli
zitiLogin

# ------------- make at least one edge router public
ziti edge update edge-router ziti-edge-router -a "public"

# ------------- allow all identities to use any edge router with the attribute "public"
ziti edge delete edge-router-policy all-endpoints-public-routers
ziti edge create edge-router-policy all-endpoints-public-routers --edge-router-roles "#public" --identity-roles "#all"

# ------------- allow all edge-routers to access all services
ziti edge delete service-edge-router-policy all-routers-all-services
ziti edge create service-edge-router-policy all-routers-all-services --edge-router-roles "#all" --service-roles "#all"

ziti edge delete identity zititunneller-blue
ziti edge create identity device zititunneller-blue -o blue.jwt
ziti edge enroll blue.jwt

# ------------- create a client - probably won't commit
ziti edge create identity device zdewclint -o zdewclint.jwt

# from outside docker:
docker cp docker_ziti-controller_1:/openziti/zdewclint.jwt /mnt/v/temp/


# attach a wholly different docker container with NET_ADMIN priv
# so we can make a tun and provide access to the __blue__ network
docker run --cap-add=NET_ADMIN --device /dev/net/tun --name ziti-tunneler-blue --user root --network docker_zitiblue -v docker_ziti-fs:/openziti --rm -it openziti/quickstart /bin/bash

# ------------- zititunneller-blue
apt install wget unzip
wget https://github.com/openziti/ziti-tunnel-sdk-c/releases/latest/download/ziti-edge-tunnel-Linux_x86_64.zip
unzip ziti-edge-tunnel-Linux_x86_64.zip
clear
./ziti-edge-tunnel run -i blue.json


ziti edge delete config "basic.dial"
ziti edge create config "basic.dial" intercept.v1 '{"protocols":["tcp"],"addresses":["simple.web.test"], "portRanges":[{"low":80, "high":80}]}'

ziti edge delete config "basic.bind"
ziti edge create config "basic.bind" host.v1 '{"protocol":"tcp", "address":"web-test-blue","port":8000}'

ziti edge delete service "basic.web.test.service"
ziti edge create service "basic.web.test.service" --configs "basic.bind,basic.dial"

ziti edge delete service-policy basic.web.test.service.bind.blue
ziti edge create service-policy basic.web.test.service.bind.blue Bind --service-roles "@basic.web.test.service" --identity-roles "@zititunneller-blue"

ziti edge delete service-policy basic.web.test.service.dial.zdew
ziti edge create service-policy basic.web.test.service.dial.zdew Dial --service-roles "@basic.web.test.service" --identity-roles "@zdewclint"




ziti edge delete config "wildcard.dial"
ziti edge create config "wildcard.dial" intercept.v1 '{"protocols":["tcp"],"addresses":["*.blue"], "portRanges":[{"low":8000, "high":8000}]}'

ziti edge delete config "wildcard.bind"
ziti edge create config "wildcard.bind" host.v1 '{"forwardProtocol":true, "allowedProtocols":["tcp","udp"], "forwardAddress":true, "allowedAddresses":["*.blue"], "forwardPort":true, "allowedPortRanges":[ {"low":1,"high":32768}] }'

ziti edge delete service "wildcard.web.test.service"
ziti edge create service "wildcard.web.test.service" --configs "wildcard.bind,wildcard.dial"

ziti edge delete service-policy wildcard.web.test.service.bind.blue
ziti edge create service-policy wildcard.web.test.service.bind.blue Bind --service-roles "@wildcard.web.test.service" --identity-roles "@zititunneller-blue"

ziti edge delete service-policy wildcard.web.test.service.dial.zdew
ziti edge create service-policy wildcard.web.test.service.dial.zdew Dial --service-roles "@wildcard.web.test.service" --identity-roles "@zdewclint"

· 2 min read
Clint Dovholuk

"Zitification" or "zitifying" is the act of taking an application and incorporating a Ziti SDK into that application. Once an application has a Ziti SDK incorporated into it, that application can now access network resources securely from anywhere in the world provided that the computer has internet access: NO VPN NEEDED, NO ADDITIONAL SOFTWARE NEEDED.

Integrating a Ziti SDK into your application and enrolling the application itself into a Ziti Network provides you with tremendous additional security. An application using a Ziti Network configured with a truly zero-trust mindset will be IMMUNE to the "expand/multiply" phases of classic ransomware attacks. As recent events have shown, it's probably not a case of if your application will be attacked, but when.

In these posts we're going to explore how common applications can be "zitified". The first application we are going to focus on will be ssh and its corollary scp. At first, you might think "why even bother" zitifying (of all things) ssh and scp? These applications are vital to system administration, and we have been using ssh and scp "safely" on the internet for years. Hopefully you're now interested enough to find out in the first post: zitifying ssh

If you'd prefer to read about other zitifications, a running list of zitified apps will be updated below: