CCE allows you to set resource requests and limits, such as CPU and memory, for the containers of a workload during workload creation. Kubernetes also allows you to set requests and limits for other resource types in YAML.
For CPU and memory, the meanings of Request and Limit are as follows:
If a node has sufficient resources, a pod on this node can use more resources than it requests, but never more than its limit.
For example, if you set a pod's memory request to 1 GiB and its limit to 2 GiB, and the pod is scheduled onto a node with 8 GiB of memory (with no other pods running), the pod can use more than 1 GiB of memory under heavy load, but its memory usage is capped at 2 GiB. If a process in the container attempts to use more than 2 GiB of memory, the system kernel attempts to terminate the process, resulting in an out-of-memory (OOM) error.
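As a sketch, the 1 GiB/2 GiB configuration described above could be declared as follows (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo            # placeholder name
spec:
  containers:
  - name: app
    image: <example_app_image> # placeholder image
    resources:
      requests:
        memory: "1Gi"   # the scheduler reserves 1 GiB on the node
      limits:
        memory: "2Gi"   # usage beyond 2 GiB triggers an OOM kill
```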
When creating a workload, you are advised to set the upper and lower limits of CPU and memory resources. If they are not set, a resource leak in this workload can make resources unavailable for other workloads deployed on the same node. In addition, workloads without upper and lower resource limits cannot be accurately monitored.
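To keep containers from being created without any requests or limits, a namespace-level LimitRange can supply defaults. The following is a minimal sketch; the object name, namespace, and values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits   # illustrative name
  namespace: demo                  # illustrative namespace
spec:
  limits:
  - type: Container
    defaultRequest:   # applied when a container omits requests
      cpu: 250m
      memory: 256Mi
    default:          # applied when a container omits limits
      cpu: 500m
      memory: 512Mi
```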
In real-world scenarios, the recommended ratio of Request to Limit is about 1:1.5. For some sensitive services, the recommended ratio is 1:1. If the Request is too small and the Limit is too large, node resources are oversubscribed. During service peaks, the memory or CPU of a node may be exhausted, making the node unavailable.
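For instance, a container following the roughly 1:1.5 guideline might be configured as below (the values are illustrative):

```yaml
resources:
  requests:
    cpu: "1"          # Request: 1 core
    memory: "1Gi"
  limits:
    cpu: "1500m"      # Limit ≈ 1.5 × Request
    memory: "1536Mi"  # 1.5 GiB
```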
| Parameter | Description |
|---|---|
| CPU request | Minimum number of CPU cores required by a container. Pods are scheduled based on this value. A container can be scheduled onto a node only when the node's remaining allocatable CPU is greater than or equal to the container's CPU request. |
| CPU limit | Maximum number of CPU cores available for a container. |
Recommended configuration
Actual available CPU of a node ≥ Sum of the CPU limits of all containers on the node ≥ Sum of the CPU requests of all containers on the node. You can view the actual available CPU of a node on the CCE console (Resource Management > Nodes > Allocatable).
| Parameter | Description |
|---|---|
| Memory request | Minimum amount of memory required by a container. Pods are scheduled based on this value. A container can be scheduled onto a node only when the node's remaining allocatable memory is greater than or equal to the container's memory request. |
| Memory limit | Maximum amount of memory available for a container. When a container's memory usage exceeds its memory limit, the container may be restarted, which affects the normal use of the workload. |
Recommended configuration
Actual available memory of a node ≥ Sum of memory limits of all containers on the current node ≥ Sum of memory requests of all containers on the current node. You can view the actual available memory of a node on the CCE console (Resource Management > Nodes > Allocatable).
The allocatable resources are calculated based on the resource request value (Request). They indicate the upper limit of resources that new pods can request on this node, not the resources the node actually has available (for details, see Example of CPU and Memory Quota Usage). The calculation formula is as follows:

Allocatable resources of a node = Actual available resources of the node − Sum of the resource requests of all pods running on the node
Assume that a cluster contains a node with 4 CPU cores and 8 GiB of memory, and that two pods (pod 1 and pod 2) have been deployed on this node. Pod 1 oversubscribes resources (that is, Limit > Request). The specifications of the two pods are as follows.
| Pod | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| Pod 1 | 1 core | 2 cores | 1 GiB | 4 GiB |
| Pod 2 | 2 cores | 2 cores | 2 GiB | 2 GiB |
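Expressed as a container resource spec, pod 1's oversubscribed configuration from the table would look like this:

```yaml
# Pod 1: Limit > Request, so resources are oversubscribed
resources:
  requests:
    cpu: "1"        # 1 core
    memory: "1Gi"
  limits:
    cpu: "2"        # 2 cores
    memory: "4Gi"
```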
The CPU and memory usage of the node is as follows:
- Allocatable CPU = 4 cores − (1 core requested by pod 1 + 2 cores requested by pod 2) = 1 core
- Allocatable memory = 8 GiB − (1 GiB requested by pod 1 + 2 GiB requested by pod 2) = 5 GiB
In this case, the remaining 1 core and 5 GiB can be used by the next new pod.
If pod 1 is under heavy load during peak hours, it will use more CPU and memory, up to its limits. The actual available resources of the node may therefore be fewer than 1 core and 5 GiB.
Typically, nodes also provide local ephemeral storage, which is backed by locally attached writable devices or RAM. Local ephemeral storage does not ensure long-term data availability. Pods can use local ephemeral storage to buffer data and store logs, or mount emptyDir volumes into containers. For details, see Local ephemeral storage.
Kubernetes allows you to specify the requested value and limit value of ephemeral storage in container configurations to manage the local ephemeral storage. The following attributes can be configured for each container in a pod:
- spec.containers[].resources.requests.ephemeral-storage
- spec.containers[].resources.limits.ephemeral-storage
In the following example, a pod contains two containers. The requested value of each container for local ephemeral storage is 2 GiB, and the limit value is 4 GiB. Therefore, the requested value of the pod for local ephemeral storage is 4 GiB, the limit value is 8 GiB, and the emptyDir volume uses 500 MiB of the local ephemeral storage.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: container-1
    image: <example_app_image>
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  - name: container-2
    image: <example_log_aggregator_image>
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  volumes:
  - name: ephemeral
    emptyDir:
      sizeLimit: 500Mi
```