The VPC network model seamlessly combines VPC routing with the underlying network, making it ideal for high-performance scenarios. However, the maximum number of nodes allowed in a cluster is determined by the VPC route quota.

In the VPC network model, container CIDR blocks are separate from node CIDR blocks. To allocate IP addresses to the pods running on a node, each node in the cluster is allocated a pod CIDR block containing a fixed number of IP addresses. The VPC network model outperforms the container tunnel network model because it has no tunnel encapsulation overhead. When the VPC network model is used in a cluster, routes between the container CIDR blocks and the VPC CIDR block are automatically added to the VPC route table. As a result, pods in the cluster can be accessed directly from cloud servers in the same VPC, even from servers outside the cluster.
Figure 1 VPC network model

In a cluster using the VPC network model, network communication paths are as follows:
Advantages
Similarly, if the VPC is accessible to other VPCs or data centers and the VPC route table includes routes to the container CIDR blocks, resources in other VPCs or data centers can directly communicate with containers in the cluster, provided there are no conflicts between the network CIDR blocks.
Disadvantages
A VPC network allocates pod IP addresses based on the rules below. The core rule is to pre-allocate pod CIDR blocks from the container CIDR block to nodes and then allocate IP addresses from the pod CIDR blocks to pods.
Figure 2 IP address management of the VPC network

Maximum number of nodes that can be created in a cluster using the VPC network = Number of IP addresses in the container CIDR block/Number of pod IP addresses allocated to each node (determined by the mask of the CIDR block that the container CIDR block assigns to each node)
For example, if the container CIDR block is 172.16.0.0/16, it contains 65,536 IP addresses. If the mask of the CIDR block allocated to each node is 25, each node gets 128 pod IP addresses. Therefore, a maximum of 512 (65,536/128) nodes can be created. The number of nodes that can be added to a cluster is also limited by the available IP addresses in the node subnet and the cluster scale. For details, see Recommendation for CIDR Block Planning.
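The calculation above can be checked with simple shell arithmetic. The values are taken from the example (a /16 container CIDR block and a /25 per-node block):

```shell
# Max nodes = (IPs in container CIDR block) / (pod IPs reserved per node)
container_mask=16
node_mask=25
container_ips=$((1 << (32 - container_mask)))    # 2^16 = 65536
pod_ips_per_node=$((1 << (32 - node_mask)))      # 2^7  = 128
max_nodes=$((container_ips / pod_ips_per_node))
echo "$max_nodes"                                # prints 512
```

The same arithmetic applies to any mask pair; a /24 per-node block (256 pod IPs), for instance, would halve the maximum node count.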
As explained in Cluster Network Structure, there are three networks in a cluster: cluster network, container network, and Service network. When planning network addresses, consider the following:
Assume that a cluster contains 200 nodes and the network model is VPC network.
In this case, the number of available IP addresses in the selected subnet must be greater than 200. Otherwise, nodes cannot be created due to insufficient IP addresses.
The container CIDR block is 172.16.0.0/16, providing 65,536 available IP addresses. As described in Pod IP Address Management, the VPC network allocates each node a pod CIDR block of a fixed size (the mask determines the maximum number of pod IP addresses per node). For example, if the upper limit is 128, the cluster supports a maximum of 512 (65,536/128) nodes.
In this example, a cluster using the VPC network model is created, and the cluster contains one node.
On the VPC console, locate the VPC to which the cluster belongs and check the VPC route table.
You can find that CCE has created a custom route in the route table. The route's destination is the pod CIDR block assigned to the node, and its next hop is the node itself. In this example, the cluster's container CIDR block is 172.16.0.0/16, and 128 pod IP addresses are assigned to each node. Therefore, the node's pod CIDR block is 172.16.0.0/25, providing a total of 128 pod IP addresses.
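To illustrate how fixed-size /25 blocks are carved out of the 172.16.0.0/16 container CIDR block, the following shell loop prints the first few candidate per-node pod CIDR blocks. This is an illustration only; the actual allocation order is decided by CCE:

```shell
# Each node takes one /25 block (128 addresses) from 172.16.0.0/16.
# Print the first four blocks in order.
for i in 0 1 2 3; do
  offset=$((i * 128))                 # start address offset within 172.16.0.0/16
  echo "172.16.$((offset / 256)).$((offset % 256))/25"
done
# Output:
# 172.16.0.0/25
# 172.16.0.128/25
# 172.16.1.0/25
# 172.16.1.128/25
```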
When a pod IP address is accessed, the VPC route will forward the traffic to the next-hop node that corresponds to the destination address. The following is an example:
Create the deployment.yaml file. The following shows an example:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: example
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: container-0
          image: 'nginx:perl'
      imagePullSecrets:
        - name: default-secret
Create the workload.
kubectl apply -f deployment.yaml
Check the pods and their IP addresses:
kubectl get pod -owide
Command output:
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
example-86b9779494-l8qrw   1/1     Running   0          14s   172.16.0.6   192.168.0.99   <none>           <none>
example-86b9779494-svs8t   1/1     Running   0          14s   172.16.0.7   192.168.0.99   <none>           <none>
example-86b9779494-x8kl5   1/1     Running   0          14s   172.16.0.5   192.168.0.99   <none>           <none>
example-86b9779494-zt627   1/1     Running   0          14s   172.16.0.8   192.168.0.99   <none>           <none>
Access one pod from another pod in the cluster to verify connectivity:
kubectl exec -it example-86b9779494-l8qrw -- curl 172.16.0.7
If the following information is displayed, the workload can be accessed:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>