This section describes how data disk space is allocated to nodes so that you can configure it appropriately.
When creating a node, you can specify Data Disk Space Allocation in the expanded area of Data Disk.
If the sum of the container engine and container image space and the kubelet and emptyDir space is less than 100%, the remaining space will be allocated for user data. You can mount the storage volume to a service path. Do not leave the path empty or set it to a key OS path such as the root directory.
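As an illustration of the allocation described above, the space left for user data is simply what remains after the container engine and kubelet shares are reserved. A minimal sketch (the function name and sample percentages are illustrative, not part of the product):

```python
def user_data_space_gib(disk_gib, engine_pct, kubelet_pct):
    """Return the data disk space (GiB) left for user data.

    engine_pct: share reserved for the container engine and images (default 90).
    kubelet_pct: share reserved for the kubelet and emptyDir volumes.
    """
    remaining_pct = 100 - engine_pct - kubelet_pct
    return disk_gib * remaining_pct / 100

# Example: on a 100 GiB data disk with 80% for the engine and 10% for kubelet,
# 10 GiB remains for user data and can be mounted to a service path.
print(user_data_space_gib(100, 80, 10))  # → 10.0
```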
For nodes using a data disk shared between a container engine and kubelet components, the container storage Rootfs is of the OverlayFS type. For details about data disk space allocation, see Data Disk Shared Between a Container Engine and kubelet Components.
For a node using a non-shared data disk (100 GiB, for example), the division of the disk space varies depending on the container storage Rootfs type (Device Mapper or OverlayFS). For details about the container storage Rootfs corresponding to different OSs, see Mapping Between OS and Container Storage Rootfs.
By default, the container engine and container image space occupies 90% of the data disk. With Device Mapper, this space is divided into two parts: the /var/lib/docker directory (the container engine working directory) and a thin pool that stores image data.
The thin pool is dynamically mounted. You can view it by running the lsblk command on a node, but not the df -h command.
Figure 1 Space allocation for container engines of Device Mapper

No separate thin pool. The entire container engine and container image space (90% of the data disk by default) is in the /var/lib/docker directory.
Figure 2 Space allocation for container engines of OverlayFS

The custom pod container space (basesize) is related to the node OS and container storage Rootfs. For details about the container storage Rootfs, see Mapping Between OS and Container Storage Rootfs.
When configuring basesize, consider the maximum number of pods allowed on one node. The container engine space should be greater than the total disk space used by containers (formula: Container engine and container image space (90% by default) > Number of containers × basesize). Otherwise, the container engine space allocated to the node may be insufficient, and containers will fail to start.
For nodes that support basesize, when Device Mapper is used, although you can limit the size of the /home directory of a single container (to 10 GiB by default), all containers on the node still share the thin pool of the node for storage. They are not completely isolated. When the sum of the thin pool space used by certain containers reaches the upper limit, other containers cannot run properly.
In addition, after a file is deleted from the /home directory of a container, the thin pool space occupied by the file is not released immediately. Therefore, even if basesize is set to 10 GiB, the thin pool space occupied by files keeps growing toward 10 GiB as files are created in the container. The space released after file deletion is reused, but only after a delay. If the number of containers on the node multiplied by basesize is greater than the thin pool space of the node, the thin pool may be exhausted.
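The capacity formula above can be sketched as a quick check (a minimal illustration; the function name and sample values are assumptions, not product defaults beyond the 90% share and 10 GiB basesize mentioned in the text):

```python
def engine_space_sufficient(data_disk_gib, engine_share, container_count, basesize_gib):
    # Container engine and image space (default 90% of the data disk)
    # must exceed the total space that containers may consume.
    engine_space = data_disk_gib * engine_share
    return engine_space > container_count * basesize_gib

# On a 100 GiB data disk with the default 90% engine share and a 10 GiB basesize:
print(engine_space_sufficient(100, 0.9, 8, 10))   # 90 GiB > 80 GiB → True
print(engine_space_sufficient(100, 0.9, 10, 10))  # 90 GiB > 100 GiB → False
```

If the check fails, either lower basesize, reduce the maximum number of pods on the node, or use a larger data disk.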
| OS | Container Storage Rootfs | Custom Basesize |
|---|---|---|
| CentOS 7.x | Clusters of v1.19.16 and earlier use Device Mapper. Clusters later than v1.19.16 use OverlayFS. NOTE: When clusters of earlier versions are upgraded, Device Mapper nodes do not automatically switch to OverlayFS. You need to manually reset these nodes. | Supported when Rootfs is Device Mapper and the runtime is Docker. The default value is 10 GiB. If Rootfs is OverlayFS, basesize cannot be specified. |
| EulerOS 2.5 | Device Mapper | Supported only when the runtime is Docker. The default value is 10 GiB. |
| Ubuntu 18.04 | OverlayFS | Not supported |
| Ubuntu 22.04 | OverlayFS | Not supported |
| HCE OS 2.0 | OverlayFS | Supported only by Docker clusters of versions earlier than v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, or v1.28.4-r0; there are no limits by default. Supported by both Docker and containerd clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later; there are no limits by default. |
| OS | Container Storage Rootfs | Custom Basesize |
|---|---|---|
| CentOS 7.x | OverlayFS | Not supported |
| Ubuntu 18.04 | OverlayFS | Not supported |
| Ubuntu 22.04 | OverlayFS | Not supported |
| HCE OS 2.0 | OverlayFS | Supported only by Docker clusters of versions earlier than v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, or v1.28.4-r0; there are no limits by default. Supported by both Docker and containerd clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later; there are no limits by default. |
When the container engine space is insufficient, image garbage collection is triggered.
The image garbage collection policy takes two factors into consideration: HighThresholdPercent and LowThresholdPercent. Disk usage exceeding the high threshold (default: 80%) triggers garbage collection, which deletes the least recently used images until the low threshold (default: 70%) is met.
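These thresholds correspond to the kubelet's imageGCHighThresholdPercent and imageGCLowThresholdPercent settings. A minimal KubeletConfiguration fragment showing the defaults described above (whether these values can be tuned on managed nodes depends on the platform):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Garbage collection starts when image disk usage exceeds this percentage.
imageGCHighThresholdPercent: 80
# Least recently used images are deleted until usage drops below this percentage.
imageGCLowThresholdPercent: 70
```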
Docker/containerd and kubelet components share the space of a data disk.
For nodes using a shared data disk, the container storage Rootfs is of the OverlayFS type. After such a node is created, the data disk space (for example, 100 GiB) is not divided among the container engine, container images, and kubelet components. Instead, the data disk is mounted to /mnt/paas, and the storage space is divided between two file systems.
Figure 3 Allocating the storage space of a shared data disk
