As your Kubernetes cluster grows, maintaining and enforcing configuration policies becomes a daunting task for platform engineers and Kubernetes administrators. Ensuring practices such as avoiding the latest tag for containers, requiring specific labels, and permitting only containers from authorized registries can be challenging. Traditional approaches, like incorporating tests in your CI/CD pipeline, are valuable, but we all know the rush of development sometimes leads to overlooking these policies. That said, do not remove the tests from your CI/CD pipeline; you want to detect misconfigurations as early as possible. Gatekeeper detects misconfigurations inside your cluster and does so with an admission controller. Admission controllers are deployed alongside the API server and can reject, or warn about, objects being created or updated before they are stored in etcd.
As always, you can follow along and check out the source code or clone it here.
Deploy Gatekeeper
Let’s create a local cluster using Kind.
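A minimal sketch, assuming Docker and the kind CLI are installed; the cluster name is an arbitrary choice:

```shell
# Create a local Kubernetes cluster for the demo (name is an assumption)
kind create cluster --name gatekeeper-demo
```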
Download the Gatekeeper bundle from GitHub (version v3.13.0 is used in this example) and deploy it.
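Gatekeeper publishes a single install manifest per release; applying it with kubectl should look roughly like this (the URL follows the upstream release layout for v3.13.0):

```shell
# Install Gatekeeper v3.13.0 straight from the release manifest
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.13.0/deploy/gatekeeper.yaml
```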
Check for the gatekeeper-system namespace and the four running pods inside it.
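For example:

```shell
# List the Gatekeeper pods; expect the audit pod plus the
# controller-manager replicas, all in the Running state
kubectl get pods -n gatekeeper-system
```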
If you are interested, you can take a look at the CRDs that were created during the install. We will soon use them to create our own policies.
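One way to list them:

```shell
# Show the Gatekeeper CRDs; among others you should see
# constrainttemplates.templates.gatekeeper.sh, which we use next
kubectl get crd | grep gatekeeper.sh
```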
Create Policies
Here comes the fun part. Gatekeeper allows you to add constraints to your cluster through a CRD called ConstraintTemplate. This is where you define policies using Rego, a language made specifically for expressing policies over configuration. Below is how you define a constraint that requires certain labels in Rego. It might look overwhelming at first, but as soon as you start playing around with it you will get the hang of it and add your own policies in no time. Many examples can be found on Gatekeeper's website.
package k8srequiredlabels

get_message(parameters, _default) = msg {
  not parameters.message
  msg := _default
}

get_message(parameters, _default) = msg {
  msg := parameters.message
}

violation[{"msg": msg, "details": {"missing_labels": missing}}] {
  provided := {label | input.review.object.metadata.labels[label]}
  required := {label | label := input.parameters.labels[_].key}
  missing := required - provided
  count(missing) > 0
  def_msg := sprintf("you must provide labels: %v", [missing])
  msg := get_message(input.parameters, def_msg)
}

violation[{"msg": msg}] {
  value := input.review.object.metadata.labels[key]
  expected := input.parameters.labels[_]
  expected.key == key
  # do not match if allowedRegex is not defined, or is an empty string
  expected.allowedRegex != ""
  not re_match(expected.allowedRegex, value)
  def_msg := sprintf("Label <%v: %v> does not satisfy allowed regex: %v", [key, value, expected.allowedRegex])
  msg := get_message(input.parameters, def_msg)
}
Let’s apply the constraint templates.
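The Rego above is embedded in a ConstraintTemplate manifest. A sketch of what that looks like (the metadata name and file name are assumptions; the schema mirrors the parameters the Rego reads):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels   # the Kind our constraints will use
      validation:
        openAPIV3Schema:
          type: object
          properties:
            message:
              type: string
            labels:
              type: array
              items:
                type: object
                properties:
                  key:
                    type: string
                  allowedRegex:
                    type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        # ... the k8srequiredlabels Rego shown above goes here ...
```

Apply it with `kubectl apply -f k8srequiredlabels-template.yaml` (the file name is an assumption).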
Now, deploy the actual constraints. In the following example, we enforce the label app.kubernetes.io/name for specific kinds in the dev namespace. You can see how easy it is to adjust the constraints once the template is deployed, and many constraints can be created from the same template.
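A sketch of such a constraint, assuming the K8sRequiredLabels kind from a template like the one above (the constraint name and matched kinds are assumptions):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-app-name-label
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
    namespaces: ["dev"]          # only enforce in the dev namespace
  parameters:
    labels:
      - key: app.kubernetes.io/name
```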
Feel free to adjust the parameters to match your workload and then apply them.
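For example (the directory name is an assumption):

```shell
# Apply all constraint manifests, then list the active constraints
kubectl apply -f constraints/
kubectl get constraints
```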
Also, take a look at the other constraint we just deployed to see a more complicated policy, e.g. the one for required image tags. Finally, let's see if the policies work. I have prepared three different deployments of nginx:
- A valid deployment with a correct image tag and labels.
- An invalid deployment with an image that uses the latest tag.
- An invalid deployment missing the required labels.
Let’s deploy and see what happens.
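Something along these lines (the manifest file names are assumptions, chosen to match the deployment names used below):

```shell
# Apply the three test deployments to the dev namespace
kubectl apply -f nginx-valid.yaml
kubectl apply -f nginx-not-valid-image-tag.yaml
kubectl apply -f nginx-not-valid-labels.yaml
```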
The above should produce the following errors for the deployments nginx-not-valid-image-tag and nginx-not-valid-labels:
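The exact wording depends on your constraint names (the bracketed names below are assumptions), but Gatekeeper denials follow this general shape:

```
Error from server (Forbidden): error when creating "nginx-not-valid-image-tag.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [container-image-must-not-use-latest] ...
Error from server (Forbidden): error when creating "nginx-not-valid-labels.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [require-app-name-label] you must provide labels: ...
```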
You should see the valid nginx deployment running in the dev namespace.
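For example:

```shell
# Only the valid nginx deployment should have been admitted
kubectl get deployments -n dev
```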
There you have it! You are now in control of which resources Kubernetes will accept, and your workloads all follow the same policies. By combining Gatekeeper's admission control capabilities with custom constraints, you can ensure your policies are consistently enforced, preventing misconfigurations in your development pipeline.
Experiment with Gatekeeper, refine your policies, and make your Kubernetes cluster compliant.
That’s it! Thank you for reading!