A Deployment is a Kubernetes workload that does not retain data or state while running. All pods of a Deployment are identical, so they can be created, deleted, and replaced seamlessly without affecting application functionality. Deployments are ideal for stateless applications, such as web front-end servers and microservices, that do not require persistent data storage. They simplify application lifecycle management, including updates, rollbacks, and scaling.
| Parameter | Description |
|---|---|
| Workload Type | Select Deployment. For details about different workload types, see Workload Overview. |
| Workload Name | Enter a name for the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. |
| Namespace | Select a namespace for the workload. The default value is default. You can also click Create Namespace to create one. For details, see Creating a Namespace. |
| Pods | Enter the number of workload pods. |
| Container Runtime | A CCE standard cluster uses the common runtime by default, whereas a CCE Turbo cluster supports both common and secure runtimes. For details about their differences, see Secure Runtime and Common Runtime. |
| Time Zone Synchronization | Configure whether to enable time zone synchronization. After this function is enabled, the container and the node share the same time zone. Time zone synchronization relies on the local disk mounted to the container. Do not modify or delete the local disk. For details, see Configuring Time Zone Synchronization. |
If you configure multiple containers for a pod, ensure that the ports used by the containers do not conflict with each other; otherwise, the workload cannot be deployed.
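As a sketch of this rule, a pod with two containers must give each container a distinct containerPort (the container names and images below are illustrative):

```yaml
spec:
  containers:
  - name: web                 # Illustrative name
    image: nginx:latest
    ports:
    - containerPort: 80       # Must not conflict with ports of other containers in the pod
  - name: sidecar             # Illustrative name
    image: nginx:latest
    ports:
    - containerPort: 8080     # Different port, so no conflict
```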
| Parameter | Description |
|---|---|
| Container Name | Enter a name for the container. |
| Pull Policy | Image update or pull policy. If you select Always, the image is pulled from the image repository each time. If you do not select Always, the existing image on the node is preferentially used; if the image does not exist there, it is pulled from the image repository. |
| Image Name | Click Select Image and select the image used by the container. To use a third-party image, directly enter the image path. Ensure that the image access credential can be used to access the image repository. For details, see Using Third-Party Images. |
| Image Tag | Select the image tag to be deployed. |
| CPU Quota | If Request and Limit are not specified, the quota is not limited. For more information and suggestions about Request and Limit, see Configuring Container Specifications. |
| Memory Quota | If Request and Limit are not specified, the quota is not limited. For more information and suggestions about Request and Limit, see Configuring Container Specifications. |
| (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the CCE AI Suite (NVIDIA GPU) add-on has been installed. For details about how to use GPUs in a cluster, see Default GPU Scheduling in Kubernetes. |
| (Optional) Privileged Container | Programs in a privileged container have certain privileges. If this option is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine, modify kernel parameters, and access all devices on the node. For more information, see Pod Security Standards. |
| (Optional) Init Container | Whether to use the container as an init container. An init container is a special container that runs before the other app containers in a pod are started, and it does not support health checks. A pod can contain multiple app containers as well as one or more init containers; the app containers are started and run only after all init containers have run to completion. For details, see Init Containers. |
| (Optional) Run Option | Add run options for the container. For details, see Pod. CCE supports the following run options: |
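As a sketch of the init container setting above, an init container is declared under initContainers in the pod template; the busybox image and its command below are illustrative placeholders:

```yaml
spec:
  template:
    spec:
      initContainers:                      # Run to completion before app containers start
      - name: wait-for-dependency          # Illustrative name
        image: busybox:1.36                # Illustrative init image
        command: ['sh', '-c', 'sleep 5']   # Placeholder startup task
      containers:
      - name: nginx
        image: nginx:latest
```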
If the workload contains more than one pod, EVS volumes cannot be mounted.
To disable the collection of the standard output logs of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in Labels and Annotations in the Advanced Settings area. For details about how to use this annotation, see Table 1.
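For reference, the annotation from the note above can be added to the pod template like this (a minimal sketch):

```yaml
spec:
  template:
    metadata:
      annotations:
        kubernetes.AOM.log.stdout: '[]'   # Disables collection of standard output logs
```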
A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and automatically balances load for these pods.
You can also create a Service after creating a workload. For details about Services of different types, see Service Overview.
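A cluster-internal Service for the workload might look like the following sketch; the Service name and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                # Service name (illustrative)
spec:
  type: ClusterIP            # Cluster-internal access
  selector:
    app: nginx               # Must match the pod labels of the workload
  ports:
  - port: 80                 # Port exposed by the Service
    targetPort: 80           # Container port that receives the traffic
```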
| Parameter | Description |
|---|---|
| Upgrade | Specify the upgrade mode and parameters of the workload. Rolling upgrade and Replace upgrade are available. For details, see Upgrading and Rolling Back a Workload. |
| Scheduling | Configure affinity and anti-affinity policies for flexible workload scheduling. Load affinity and node affinity are provided. |
| Toleration | Using taints and tolerations together allows (but does not force) pods to be scheduled onto nodes with matching taints, and controls how pods are evicted after the node they run on is tainted. For details, see Configuring Tolerance Policies. |
| Labels and Annotations | Add labels or annotations for pods using key-value pairs. After the setting, click Confirm. For details about labels and annotations, see Configuring Labels and Annotations. |
| DNS | Configure a separate DNS policy for the workload. For details, see DNS Configuration. |
| Network Configuration | |
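Several of the advanced settings above map directly to fields in the Deployment manifest. The fragment below is a sketch, not a definitive configuration; the update percentages, taint key, and toleration seconds are illustrative values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate           # Upgrade mode
    rollingUpdate:
      maxUnavailable: 25%         # Pods that may be unavailable during the update
      maxSurge: 25%               # Extra pods that may be created during the update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:                # Tolerate a matching taint instead of being evicted immediately
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300    # Evict 300s after the node becomes unreachable
      dnsPolicy: ClusterFirst     # Separate DNS policy for the workload
      containers:
      - name: nginx
        image: nginx:latest
```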
Nginx is used as an example to describe how to create a workload using kubectl.
vi nginx-deployment.yaml
Below is an example of the file. For details about the Deployment configuration, see the Kubernetes official documentation.
apiVersion: apps/v1
kind: Deployment             # Workload type
metadata:
  name: nginx                # Workload name
  namespace: default         # Namespace where the workload is located
spec:
  replicas: 1                # Number of pods in the specified workload
  selector:
    matchLabels:             # The workload manages pods based on the pod labels in the label selector.
      app: nginx
  template:                  # Pod configuration
    metadata:
      labels:                # Pod labels
        app: nginx
    spec:
      containers:
      - image: nginx:latest  # Specify a container image. If you use an image in My Images, obtain the image path from SWR.
        imagePullPolicy: Always   # Image pull policy
        name: nginx          # Container name
        resources:           # Node resources allocated to the container
          requests:          # Requested resources
            cpu: 250m
            memory: 512Mi
          limits:            # Resource limits
            cpu: 250m
            memory: 512Mi
      imagePullSecrets:      # Secret for image pull
      - name: default-secret
kubectl create -f nginx-deployment.yaml
If information similar to the following is displayed, the Deployment is being created:
deployment.apps/nginx created
kubectl get deployment
If the Deployment is in the Running state, it has been created successfully.
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           4m5s
Parameters