Auto Scaling
Auto Scale-out Configuration
CCE Cluster Autoscaler comprehensively checks the resource status of an entire cluster. When the load of a microservice is high (for example, its CPU or memory usage is too high), more pods are added to reduce the load. If those pods cannot be scheduled on the existing nodes, Cluster Autoscaler adds nodes to the cluster.
Node Capacity Expansion Conditions
- Auto scale-out when the workload cannot be scheduled: If a workload pod cannot be scheduled, the system automatically scales out a node pool that has auto scaling enabled. If the pod is configured with affinity for a specific node, the system will not automatically add nodes.
Such auto scaling works with an HPA policy; a minimal sketch of this trigger logic follows the list below. For details, see Using HPA and CA for Auto Scaling of Workloads and Nodes.
- User-defined policy switch: specifies whether to automatically scale out a node pool based on the node scaling policies. This function is enabled by default.
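The scale-out trigger described above can be summarized in a short sketch. This is a minimal illustration with hypothetical names (`Pod`, `NodePool`, and `should_scale_out` are not CCE APIs), assuming only the two rules in this list: pods pinned to a node by affinity never trigger scale-out, and only node pools with auto scaling enabled are scaled out.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pod:
    name: str
    affinity_node: Optional[str]  # set when the pod is pinned to one node

@dataclass
class NodePool:
    name: str
    auto_scaling_enabled: bool

def should_scale_out(pending_pod: Pod, pool: NodePool) -> bool:
    """Decide whether an unschedulable pod should trigger node scale-out."""
    if pending_pod.affinity_node is not None:
        # A pod pinned to a specific node never triggers scale-out.
        return False
    # Only node pools with auto scaling enabled can be scaled out.
    return pool.auto_scaling_enabled

pool = NodePool("pool-a", auto_scaling_enabled=True)
print(should_scale_out(Pod("web-1", None), pool))      # True
print(should_scale_out(Pod("web-2", "node-3"), pool))  # False
```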
Upper Limit of Resources to Be Expanded
- Total Nodes: specifies the maximum number of nodes that can be present in a cluster during scale-out. If the cluster already has more nodes than this value, no more nodes will be added. The default value is the maximum number of nodes the cluster can manage.
- Total Cores: specifies the maximum total number of CPU cores across all nodes in a cluster during scale-out. If the cluster already has more cores than this value, no more nodes will be added. By default, the number of cores is not limited.
- Total Memory (GiB): specifies the upper limit of the total memory of all nodes in a cluster during scale-out. If the total memory exceeds this value, no more nodes will be added. By default, the amount of memory is not limited.
When the totals for nodes, cores, and memory are calculated, unavailable nodes in custom node pools are included, but unavailable nodes in the default node pool are not.
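A sketch of how these three limits gate node addition, under one plausible reading of the rules above (the helper name `can_add_node` is hypothetical; `None` stands for "not limited", the default for cores and memory):

```python
from typing import Optional

def can_add_node(total_nodes: int, total_cores: int, total_memory_gib: int,
                 max_nodes: int,
                 max_cores: Optional[int] = None,
                 max_memory_gib: Optional[int] = None) -> bool:
    """Return False once any configured upper limit has been reached."""
    if total_nodes >= max_nodes:
        return False  # Total Nodes limit reached
    if max_cores is not None and total_cores >= max_cores:
        return False  # Total Cores limit reached
    if max_memory_gib is not None and total_memory_gib >= max_memory_gib:
        return False  # Total Memory (GiB) limit reached
    return True

# 49 nodes under a 50-node cap, cores/memory unlimited: one more node is allowed.
print(can_add_node(49, 392, 784, max_nodes=50))  # True
print(can_add_node(50, 400, 800, max_nodes=50))  # False
```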
Scale-out Priority
You can drag and drop node pools in the list to adjust their scale-out priorities.
Auto Scale-In Configuration
CCE Cluster Autoscaler comprehensively checks the resource status of an entire cluster. Once it confirms that workload pods can still be scheduled and run properly, it automatically selects nodes for scale-in.
Node Scale-In Conditions
By default, nodes in a cluster follow the default scale-in conditions. If custom scale-in conditions are configured for a node pool, the nodes in that node pool follow those custom conditions instead.
Parameter | Description |
---|---|
Default Scale-In Conditions | If the CPU and memory allocation rates of a node stay below a certain percentage (50% by default) for a period of time (10 minutes by default), or if the node is unavailable for a period of time (20 minutes by default), the node will be scaled in. Allocation rate = Total requested resources of all pods on the node/Allocatable resources of the node (see the sketch after this table).<br>If the option Ignore the pre-allocated CPU and memory of the DaemonSet container is selected, CCE does not count the CPU and memory pre-allocated to DaemonSet pods when deciding whether to scale in a node, so DaemonSet pods do not affect the scale-in decision. If this option is not selected, the resources pre-allocated to DaemonSet pods are included in the calculation, which can push the CPU and memory allocation rates above the scale-in threshold and prevent nodes with low CPU and memory utilization from being scaled in. |
(Optional) Custom Scale-In Conditions | You can configure scale-in conditions for each node pool. If the CPU and memory allocation rates of the nodes in a node pool stay below a certain percentage (50% by default) for a period of time (10 minutes by default), the node pool will be scaled in.<br>Custom scale-in conditions are supported when the CCE Cluster Autoscaler add-on version is 1.25.181, 1.27.152, 1.28.120, 1.29.81, 1.30.48, 1.31.10, or later.<br>If auto scaling is not enabled for all specifications in a node pool, the custom scale-in conditions configured for that node pool do not take effect. For details about how to enable auto scaling for a node pool, see Configuring Node Pool Scaling Policies. |
Scale-in Exception Scenarios | When a node meets any of the following exception scenarios, CCE will not scale in the node even if its resources or status meets the scale-in conditions: |
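The allocation-rate check in the default scale-in conditions can be made concrete with a small sketch. The names below are hypothetical, and the check is simplified to a single point in time, whereas the real autoscaler requires the condition to hold for the configured period (10 minutes by default):

```python
from dataclasses import dataclass

@dataclass
class PodRequest:
    cpu_millicores: int
    memory_mib: int
    is_daemonset: bool

def allocation_rates(pods: list, allocatable_cpu: int, allocatable_mem: int,
                     ignore_daemonset: bool) -> tuple:
    """Allocation rate = total requested resources of counted pods /
    allocatable resources of the node, for CPU and memory separately."""
    counted = [p for p in pods if not (ignore_daemonset and p.is_daemonset)]
    cpu = sum(p.cpu_millicores for p in counted) / allocatable_cpu
    mem = sum(p.memory_mib for p in counted) / allocatable_mem
    return cpu, mem

def eligible_for_scale_in(pods, allocatable_cpu, allocatable_mem,
                          threshold=0.5, ignore_daemonset=False) -> bool:
    cpu, mem = allocation_rates(pods, allocatable_cpu, allocatable_mem,
                                ignore_daemonset)
    # Both rates must stay below the threshold (50% by default).
    return cpu < threshold and mem < threshold

pods = [PodRequest(1500, 2048, False), PodRequest(800, 1024, True)]
# Counting the DaemonSet pod pushes CPU to 57.5%, blocking scale-in.
print(eligible_for_scale_in(pods, 4000, 8192))                         # False
# Ignoring it drops CPU to 37.5%, so the node becomes eligible.
print(eligible_for_scale_in(pods, 4000, 8192, ignore_daemonset=True))  # True
```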
Node Scale-in Policy
Item | Description | Default Value |
---|---|---|
Number of Concurrent Scale-In Requests | Maximum number of idle nodes that can be deleted concurrently. Only idle nodes can be scaled in concurrently; nodes that are not idle can only be scaled in one by one (see the sketch after this table).<br>NOTE: During a node scale-in, if the pods on a node do not need to be evicted (such as DaemonSet pods), the node is idle. Otherwise, the node is not idle. | 10 |
Node Recheck Timeout | Interval at which a node is checked again after it has been determined that the node cannot be scaled in | 5 minutes |
Cooldown Time | Cooldown period before scale-in evaluation starts again after an auto scale-in is triggered in the cluster<br>NOTE: If both auto scale-out and scale-in exist in a cluster, set this parameter to 0 minutes. This prevents node scale-in from being blocked by continuous scale-out of some node pools or by retries after a scale-out failure, which would waste node resources. | 10 minutes |
| Cooldown period before scale-in evaluation starts again after an auto scale-out is triggered in the cluster | 10 minutes |
| Cooldown period before scale-in evaluation starts again after an auto scale-in in the cluster fails | 3 minutes |
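A rough sketch of how the concurrency and cooldown rules above interact. The names and structure are hypothetical; only the default values come from the table:

```python
COOLDOWN_S = {                  # defaults from the table above, in seconds
    "scale_in": 10 * 60,        # after an auto scale-in is triggered
    "scale_out": 10 * 60,       # after an auto scale-out is triggered
    "scale_in_failed": 3 * 60,  # after an auto scale-in fails
}
MAX_CONCURRENT_IDLE = 10        # Number of Concurrent Scale-In Requests

def evaluation_allowed(last_events: dict, now: float) -> bool:
    """Scale-in evaluation resumes only once every applicable cooldown
    (keyed by the most recent event of each kind) has elapsed."""
    return all(now - ts >= COOLDOWN_S[kind] for kind, ts in last_events.items())

def pick_nodes_for_deletion(idle: list, non_idle: list) -> list:
    """Idle nodes (no pods to evict) are deleted in batches of up to 10;
    non-idle nodes are scaled in one at a time."""
    if idle:
        return idle[:MAX_CONCURRENT_IDLE]
    return non_idle[:1]

print(pick_nodes_for_deletion([f"n{i}" for i in range(12)], []))  # 10 idle nodes
print(pick_nodes_for_deletion([], ["n12", "n13"]))                # ['n12']
print(evaluation_allowed({"scale_out": 0.0}, now=300.0))          # False: 10-min cooldown
```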