Gatekeeper
Introduction
Gatekeeper is a customizable, cloud-native policy controller based on Open Policy Agent (OPA). It strengthens policy enforcement and governance in clusters and provides security policy rules tailored to Kubernetes application scenarios.
Open-source community: https://github.com/open-policy-agent/gatekeeper
For how to use the add-on, see the Gatekeeper documentation.
Notes and Constraints
If you have deployed the community Gatekeeper in your cluster, uninstall it and then install the CCE Gatekeeper add-on. Otherwise, the add-on may fail to be installed.
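A minimal sketch of removing a community Gatekeeper installation, assuming it was installed with the upstream default manifest or the official Helm chart (adjust the release name and namespace to match your actual installation):

```bash
# If Gatekeeper was installed from the upstream manifest:
kubectl delete -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml

# If Gatekeeper was installed with the official Helm chart:
helm uninstall gatekeeper -n gatekeeper-system
```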
Precautions
Gatekeeper's webhooks intercept operations on fundamental Kubernetes resources and can therefore affect how those resources are used. Before applying the webhooks to your services, carefully assess the potential risks associated with the add-on.
Gatekeeper is an open-source add-on that CCE has selected, adapted, and integrated into its services. CCE offers comprehensive technical support, but is not responsible for any service disruptions caused by defects in the open-source software, nor does it provide compensation or additional services for such disruptions. It is highly recommended that users regularly upgrade their software to address any potential issues.
Installing the Add-on
- Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose Add-ons, locate Gatekeeper on the right, and click Install.
- On the Install Add-on page, configure the specifications.
- Change the values of the configurations that you want to modify. For details, see the parameters in GitHub.
- Configure deployment policies for the add-on pods.
Note:
- Scheduling policies do not take effect on add-on pods of the DaemonSet type.
- When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run.
Table 1 Configurations for add-on scheduling
Multi-AZ Deployment
- Preferred: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to different nodes in that AZ.
- Equivalent mode: Deployment pods of the add-on are evenly scheduled to the nodes in the cluster in each AZ. If a new AZ is added, you are advised to increase add-on pods for cross-AZ HA deployment. With the Equivalent multi-AZ deployment, the difference between the number of add-on pods in different AZs will be less than or equal to 1. If resources in one of the AZs are insufficient, pods cannot be scheduled to that AZ.
- Forcible: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. There can be at most one pod in each AZ. If nodes in a cluster are not in different AZs, some add-on pods cannot run properly. If a node is faulty, add-on pods on it may fail to be migrated.
Node Affinity
- Not configured: Node affinity is disabled for the add-on.
- Specify node: Specify the nodes where the add-on is deployed. If you do not specify the nodes, the add-on will be randomly scheduled based on the default cluster scheduling policy.
- Specify node pool: Specify the node pool where the add-on is deployed. If you do not specify the node pools, the add-on will be randomly scheduled based on the default cluster scheduling policy.
- Customize affinity: Enter the labels of the nodes where the add-on is to be deployed for more flexible scheduling policies. If you do not specify node labels, the add-on will be randomly scheduled based on the default cluster scheduling policy.
If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run.
Toleration
Tolerations allow (but do not force) the add-on Deployment pods to be scheduled to nodes with matching taints, and they control the eviction policy applied after the nodes running those pods are tainted.
By default, the add-on tolerates the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints, each with a toleration time window of 60s. A YAML equivalent of these defaults is sketched after the installation steps.
For details, see Configuring Tolerance Policies.
- Click Install.
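For reference, the add-on's default tolerations described in Table 1 correspond to the following pod-spec snippet. This is a minimal sketch using standard Kubernetes toleration syntax; the console configures this for you, so there is no need to apply it manually:

```yaml
tolerations:
# Tolerate not-ready and unreachable nodes for 60s before eviction
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60
```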
Components
Component | Description | Resource Type |
---|---|---|
gatekeeper-audit | Periodically audits existing cluster resources against the configured constraints and records violations. | Deployment |
gatekeeper-controller-manager | Provides the Gatekeeper admission webhooks that validate Kubernetes resources against custom policies. | Deployment |
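After the installation, you can check that both components are running. The command below assumes the add-on deploys into the upstream default gatekeeper-system namespace; adjust the namespace if your installation differs:

```bash
kubectl get deploy -n gatekeeper-system
```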
How to Use the Add-on
The following example shows how to use Gatekeeper to enforce a constraint that requires every pod created in a specific namespace to have a label named test-label. For details, see How to use Gatekeeper.
- Use kubectl to access the cluster. For details, see Accessing a Cluster Using kubectl.
- Create a test-gatekeeper namespace for testing:

```bash
kubectl create ns test-gatekeeper
```
- Create a policy template for checking labels:

```bash
kubectl apply -f - <<EOF
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
EOF
```
- Create a constraint for the preceding policy template. This constraint requires a pod created in the test-gatekeeper namespace to have the label test-label.

```bash
kubectl apply -f - <<EOF
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pod-must-have-test-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - test-gatekeeper
  parameters:
    labels: ["test-label"]
EOF
```
- Verify the constraint effect.
- Create a pod that does not have the label test-label in the test-gatekeeper namespace:

```bash
kubectl -n test-gatekeeper run test-deny --image=nginx --restart=Never
```
Expected output:
```
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-must-have-test-label] you must provide labels: {"test-label"}
```

The pod that does not have the label test-label cannot be created in the test-gatekeeper namespace.
- Create a pod that has the label test-label in the test-gatekeeper namespace:

```bash
kubectl -n test-gatekeeper run test -l test-label=test --image=nginx --restart=Never
```
Check that the pod has been created:

```bash
kubectl get pod test -n test-gatekeeper
```
The previous verification shows that Gatekeeper enforces the constraint: a pod created in the test-gatekeeper namespace must have the test-label label.
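Beyond denying requests at admission time, the gatekeeper-audit component listed in Components periodically evaluates existing resources against the constraint and records any violations in the constraint's status field (these field paths follow the upstream Gatekeeper CRDs):

```bash
kubectl get k8srequiredlabels pod-must-have-test-label -o yaml
```

When you finish testing, you can remove the example resources:

```bash
kubectl delete k8srequiredlabels pod-must-have-test-label
kubectl delete constrainttemplate k8srequiredlabels
kubectl delete ns test-gatekeeper
```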