Christmas from Kong? Container Control Planes!

Christmas came a few days early from the Kong family. The first release candidate of the Kong 2.0 release was pushed out yesterday. The biggest highlight of this release is the shiny new “hybrid mode” deployment, which provides a more formal separation of control plane behavior (i.e., management of the Kong cluster) from data plane behavior (i.e., proxying user-facing traffic). I had some time to play with this and wanted to share a few extra thoughts beyond what’s in the release notes.

Disclaimer: I am a Kong contributor and employee. This blog post and my opinions are my own.

Getting up and running quickly with a Kong control plane and data plane is straightforward via Docker, though a bit of extra legwork is needed to get mTLS communication configured. First, we need a backing data store for the control plane to hold the state of the world:
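Something along these lines works; the network name, container name, and credentials here are my own choices, not anything Kong mandates:

```shell
# a shared Docker network so the containers can reach each other by name
docker network create kong-net

# a throwaway Postgres instance to back the control plane
docker run -d --name kong-database --network kong-net \
  -e POSTGRES_USER=kong \
  -e POSTGRES_DB=kong \
  -e POSTGRES_PASSWORD=kong \
  postgres:9.6
```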

And as with all Kong installations, the database needs to be bootstrapped:
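A one-shot container can run the migrations against the database we just started (the image tag is whichever 2.0 RC you pulled; credentials match the Postgres container above):

```shell
# bootstrap the Kong schema in the backing database
docker run --rm --network kong-net \
  -e KONG_DATABASE=postgres \
  -e KONG_PG_HOST=kong-database \
  -e KONG_PG_PASSWORD=kong \
  kong:2.0.0rc1 kong migrations bootstrap
```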

Next, we need to set up the control plane. Before we can start the control plane process, we need to generate the keypair used for mTLS between control plane and data plane nodes. I say ‘keypair’ (singular), because the current RC release of Kong does not provide a formal PKI mechanism. The certificate and key are shared between control plane and data plane nodes. This provides for a fully functional mutual TLS handshake, but every control plane and data plane node needs identical keypair material to communicate. There is a Kong CLI subcommand to generate an EC cert and key, but for setting up a Docker playground I found it was easier to just create my own:
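A plain OpenSSL sketch along these lines does the job; the curve, lifetime, and CN are my own picks:

```shell
# generate an EC private key and a self-signed cert for cluster mTLS
openssl ecparam -name secp384r1 -genkey -noout -out cluster.key
openssl req -new -x509 -key cluster.key -out cluster.crt \
  -days 1095 -subj "/CN=kong_clustering"
```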

The OpenSSL CLI command to generate the self-signed certificate will write the key file with restrictive permissions; the key needs to be made readable by the user running Kong inside the Docker container, otherwise the control plane will fail to start with a permissions error.
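For a throwaway playground, the blunt fix is simply to loosen the permissions on the key (for anything real, chown it to the container user instead):

```shell
# openssl writes the key 0600; let the kong user in the container read it
chmod 644 cluster.key
```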

Once this is done, we can mount the key material into the container and start the control plane process:
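A sketch of the control plane container, assuming the cert and key from above sit in the current directory; the `KONG_ROLE` and `KONG_CLUSTER_*` settings are the hybrid mode configuration keys, while names and ports are my own choices:

```shell
docker run -d --name kong-cp --network kong-net \
  -e KONG_ROLE=control_plane \
  -e KONG_CLUSTER_CERT=/certs/cluster.crt \
  -e KONG_CLUSTER_CERT_KEY=/certs/cluster.key \
  -e KONG_DATABASE=postgres \
  -e KONG_PG_HOST=kong-database \
  -e KONG_PG_PASSWORD=kong \
  -e KONG_ADMIN_LISTEN=0.0.0.0:8001 \
  -v "$(pwd):/certs" \
  -p 8001:8001 -p 8005:8005 \
  kong:2.0.0rc1
```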

Note that we expose port 8001 for Admin API traffic, and 8005 for control plane traffic. We can now start a data plane node and configure it to communicate with the control plane:
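Roughly like so, pointing `cluster_control_plane` at the control plane container's DNS name and mounting the same keypair:

```shell
docker run -d --name kong-dp --network kong-net \
  -e KONG_ROLE=data_plane \
  -e KONG_DATABASE=off \
  -e KONG_CLUSTER_CONTROL_PLANE=kong-cp:8005 \
  -e KONG_CLUSTER_CERT=/certs/cluster.crt \
  -e KONG_CLUSTER_CERT_KEY=/certs/cluster.key \
  -v "$(pwd):/certs" \
  -p 8000:8000 \
  kong:2.0.0rc1
```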

We rely on Docker’s network namespace DNS to forward control plane traffic from the data plane node to the appropriate container. We’ve also disabled the database, as the data plane node will communicate only with the control plane for runtime configuration. When the data plane node comes up, it contacts the control plane and fetches the cluster configuration, noting as much in its logs.

We can now add a playground Service and Route to the cluster, and see the configuration changes reflected on the data plane in real time:
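A sketch with curl against the Admin API exposed above; the Service name and httpbin upstream are arbitrary choices of mine:

```shell
# create a Service and a Route on the control plane
curl -s -X POST http://localhost:8001/services \
  -d name=playground -d url=http://httpbin.org

curl -s -X POST http://localhost:8001/services/playground/routes \
  -d 'paths[]=/playground'

# moments later, the data plane proxies the new Route
curl -s http://localhost:8000/playground/get
```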

Each update to the cluster runtime config is noted in the data plane’s logs.

Control plane nodes proactively push configuration down to data planes. This disseminates changes more responsively than the polling-based communication mechanism Kong uses in a traditional deployment.

Control plane nodes keep track of each connected data plane. A heartbeat is established between control plane and data plane nodes, allowing control planes to send smart diffs of the cluster configuration as necessary. We can see the update times tracked in the /clustering/status endpoint of the control plane Admin API:
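Querying it is a one-liner against the Admin API port exposed earlier:

```shell
# per-node last-seen timestamps for connected data planes
curl -s http://localhost:8001/clustering/status
```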

So it looks like there’s a 30 second heartbeat ticker (heh).

Data planes were designed to be resilient against failure of the control plane. Once a data plane has received its cluster config, it can continue to proxy traffic even if the connection to the control plane is broken:
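This is easy to demonstrate in the playground by simply stopping the control plane container and hitting the data plane again:

```shell
docker stop kong-cp

# the playground Route still proxies from the cached cluster config
curl -s http://localhost:8000/playground/get
```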

And of course, the data plane will whine at you in its logs until the connection is re-established.

Data plane nodes store their copy of the cluster config on the file system, in Kong’s runtime prefix (by default /usr/local/kong). This means we could theoretically mount the prefix location to the host, making data plane nodes robust against their own failure even during a simultaneous failure of the control plane (e.g., new data plane containers can be launched and use a previously-pushed copy of the cluster config, even if no control plane is available during the initialization phase). This is a huge improvement over the traditional deployment model for Kong, where the backing data store (Postgres or Cassandra) must be available when a Kong process starts.
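Sketched out, that just means adding a host mount for the prefix when launching a data plane node (all other settings as before; the host path is my own choice):

```shell
docker run -d --name kong-dp2 --network kong-net \
  -e KONG_ROLE=data_plane \
  -e KONG_DATABASE=off \
  -e KONG_CLUSTER_CONTROL_PLANE=kong-cp:8005 \
  -e KONG_CLUSTER_CERT=/certs/cluster.crt \
  -e KONG_CLUSTER_CERT_KEY=/certs/cluster.key \
  -v "$(pwd):/certs" \
  -v "$(pwd)/kong-prefix:/usr/local/kong" \
  kong:2.0.0rc1
```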

There’s a lot to look forward to in the GA release of this series. I’m particularly excited about support for a formal PKI mechanism in hybrid mode, as well as a lot of the other Christmas goodies announced by the development team. Check out both the release announcement and the preview docs for hybrid mode.
