By default, the kubelet uses CFS quotas to enforce pod CPU limits. When a node runs many CPU-bound pods, a workload can be moved to different CPU cores depending on whether the pod is throttled and which cores are available at scheduling time. Many workloads are not sensitive to this migration and work fine without any intervention. Some applications, however, are CPU-sensitive: they are affected by CPU throttling, context switches, processor cache misses, and cross-socket memory access.
If your workloads are sensitive to any of these factors, you can use Kubernetes CPU management policies to allocate dedicated CPU cores (through CPU pinning) to them. This reduces scheduling latency and improves application performance. The CPU manager preferentially allocates CPUs on a single socket and in full physical cores to avoid interference.
A CPU management policy is specified with the kubelet --cpu-manager-policy flag. By default, Kubernetes supports the following policies: none (the default, which enforces no CPU pinning and relies on ordinary CFS scheduling) and static (which grants exclusive CPUs to qualifying pods).
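For reference, the same setting can be expressed in the kubelet configuration file rather than as a command-line flag. The sketch below uses the upstream KubeletConfiguration field names; the file path and the reservation value are assumptions, and on CCE nodes this is managed for you through the node pool settings described below.

```yaml
# Sketch of a kubelet configuration enabling the static policy.
# The path /etc/kubernetes/kubelet-config.yaml is an assumption.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static   # equivalent to --cpu-manager-policy=static
# The static policy requires some CPU to be reserved for system daemons,
# for example through kubeReserved or systemReserved.
kubeReserved:
  cpu: "500m"              # example reservation; the actual value is an assumption
```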
The CPU management policy does not apply to ECS (PM) nodes in CCE Turbo clusters.
When creating a cluster, you can enable CPU management in the Advanced Settings area. This setting applies to the entire cluster, and the cluster-level CPU management policy cannot be modified after the cluster is created. If CPU management is not enabled during cluster creation, the nodes in DefaultPool will not support changes to CPU management policies. To apply a CPU management policy in that case, create a custom node pool and configure the desired settings for its nodes.
You can configure a CPU management policy for a custom node pool. After the configuration, the kubelet parameter --cpu-manager-policy will be automatically modified on the nodes in the node pool.
The default node pool (DefaultPool) adheres to the cluster-level CPU management policies, and its CPU management policies cannot be modified.
Prerequisites: the static policy must be enabled on the target nodes, and the pod must be in the Guaranteed QoS class, with CPU requests and limits set to the same integer value.
You can use node affinity scheduling to schedule the configured pods to the nodes where the static policy is enabled. In this way, the pods can exclusively use the CPU resources.
Example YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: container-1
          image: nginx:alpine
          resources:
            requests:
              cpu: 2          # The value must be an integer and must be the same as that in limits.
              memory: 2048Mi
            limits:
              cpu: 2          # The value must be an integer and must be the same as that in requests.
              memory: 2048Mi
      imagePullSecrets:
        - name: default-secret
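The example above does not itself restrict which nodes the pod can land on. A sketch of the node affinity mentioned earlier, which keeps the pod on nodes where the static policy is enabled, is shown below. The label key and value are assumptions; use whatever label you attach to the nodes in your custom node pool.

```yaml
# Add under spec.template.spec of the Deployment above.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cpu-policy   # assumed label applied to the static-policy node pool
              operator: In
              values:
                - static
```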
Take a node with 8 vCPUs and 16 GiB of memory as an example. Deploy a workload whose CPU request and limit are both 2 on the node beforehand.
Log in to the node where the workload is running and check the content of /var/lib/kubelet/cpu_manager_state.
cat /var/lib/kubelet/cpu_manager_state
Command output:
{"policyName":"static","defaultCpuSet":"0-1,4-7","entries":{"de14506d-0408-411f-bbb9-822866b58ae2":{"container-1":"2-3"}},"checksum":3744493798}
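In this output, defaultCpuSet lists the CPUs shared by all other pods, and each entry under entries maps a pod UID and container name to its exclusively assigned CPUs. A minimal sketch that parses the state shown above is given below; the JSON literal is copied from the example output, and file reading is omitted so the snippet runs anywhere.

```python
import json

# cpu_manager_state content copied from the example output above
state = json.loads(
    '{"policyName":"static","defaultCpuSet":"0-1,4-7",'
    '"entries":{"de14506d-0408-411f-bbb9-822866b58ae2":{"container-1":"2-3"}},'
    '"checksum":3744493798}'
)

print(state["policyName"])     # active CPU management policy: static
print(state["defaultCpuSet"])  # shared CPU pool: 0-1,4-7
for pod_uid, containers in state["entries"].items():
    for name, cpus in containers.items():
        # container-1 holds CPUs 2-3 exclusively; note they are absent
        # from defaultCpuSet above.
        print(f"{name} (pod {pod_uid}) is pinned to CPUs {cpus}")
```

On a real node you would read /var/lib/kubelet/cpu_manager_state instead of the inline literal.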