
DataPlane V2 Network Acceleration

DataPlane V2 can be enabled for clusters that use the Cloud Native Network 2.0 model. After this function is enabled, eBPF-based traffic redirection is used to provide the network policy capability.

Note

CCE DataPlane V2 is available as a restricted release. To use this feature, submit a service ticket to CCE.

DataPlane V2

Technical implementation

DataPlane V2 integrates open-source Cilium to provide capabilities such as network policies.

Supported cluster versions

CCE Turbo clusters of v1.27.16-r10, v1.28.15-r0, v1.29.10-r0, v1.30.6-r0, or later

Usage

  • When creating a CCE Turbo cluster, select Cloud Native Network 2.0 and enable DataPlane V2.
NOTICE:
  • After DataPlane V2 is enabled, secure containers (which use Kata Containers as the container runtime) are not supported.
  • DataPlane V2 cannot be disabled after it has been enabled.
  • DataPlane V2 can only be enabled for new clusters.
  • DataPlane V2 is in a restricted open beta test (OBT). Upgrading it to the commercial version requires nodes to be reset. Exercise caution when enabling this function.

Supported OS

HCE OS 2.0

Performance optimization

  • Earliest departure time (EDT) is used to limit the egress bandwidth, which makes bandwidth limiting more accurate and consumes fewer resources.

Bandwidth

After DataPlane V2 network acceleration is enabled, pods on HCE OS 2.0 nodes use EDT to limit egress bandwidth. Ingress bandwidth limiting is not supported. In other network modes, a TBF qdisc is used to limit bandwidth. For details, see Configuring QoS for a Pod.
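
For reference, the following is a minimal sketch of a pod that declares an egress bandwidth limit. It assumes the common Kubernetes bandwidth-annotation convention (kubernetes.io/egress-bandwidth); the pod name, image, and limit value are placeholders, and the exact annotation keys supported by CCE are described in Configuring QoS for a Pod. Because ingress bandwidth limiting is not supported in this mode, only the egress annotation is shown.

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo                              # placeholder name
    annotations:
      # Assumed key following the common Kubernetes bandwidth-plugin convention;
      # confirm the exact key in "Configuring QoS for a Pod".
      kubernetes.io/egress-bandwidth: "100M"    # limit egress traffic to 100 Mbit/s
  spec:
    containers:
    - name: nginx
      image: nginx:alpine                       # placeholder image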

NetworkPolicy

  • Network policies are implemented differently from those in container tunnel networks. For details, see Configuring Network Policies to Restrict Pod Access. A minimal example is provided after this list.
    • The IPBlock selector can only select CIDR blocks outside the cluster.
    • The IPBlock selector has limited support for the except field, so this field is not recommended.
    • If an egress network policy is used, the selected pods cannot access the IP addresses of hostNetwork pods or nodes in the cluster.
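
As an illustration of these constraints, the following is a minimal sketch of a standard Kubernetes NetworkPolicy whose IPBlock selector targets a CIDR outside the cluster and avoids the except field. The policy name, namespace, pod label, CIDR, and port are placeholders.

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-egress-to-external-cidr    # placeholder name
    namespace: default                     # placeholder namespace
  spec:
    podSelector:
      matchLabels:
        app: web                           # placeholder label
    policyTypes:
    - Egress
    egress:
    - to:
      - ipBlock:
          cidr: 203.0.113.0/24             # a CIDR outside the cluster (documentation range)
      ports:
      - protocol: TCP
        port: 443

Keep in mind the restriction above: once an egress policy selects a pod, that pod cannot access the IP addresses of hostNetwork pods or nodes in the cluster.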

Resource consumption

A resident cilium-agent process on each node is responsible for eBPF network acceleration. Each cilium-agent process may occupy 80 MiB of memory, and its memory consumption may increase by 10 KiB for each pod added to the node. For example, a node running 200 pods may consume roughly 82 MiB (80 MiB plus about 2 MiB for the pods).

Components

After DataPlane V2 is enabled, the following components are installed.

cilium-operator (resource type: Deployment)
  • Synchronizes CRDs.
  • Removes the node.cilium.io/agent-not-ready taint from nodes.
  • Tunes and recycles internal resources.

yangtse-cilium (resource type: DaemonSet)
  • Installs the auxiliary CNI (cilium-cni) for CCE to adapt to Cilium.
  • Deploys cilium-agent.

Change History

Add-on version 1.08
Supported cluster versions: v1.27, v1.28, v1.29, v1.30, v1.31
New features:
  • Added support for Cloud Native Network 2.0 in CCE Turbo clusters.
  • Disabled host-based firewalls (by setting enable-host-firewall=false).
  • Disabled L7 network policies (by setting enable-l7-proxy=false).

Add-on version 1.0.14
Supported cluster versions: v1.27, v1.28, v1.29, v1.30, v1.31
New features:
  • Disabled bpf-lb-sock (by setting bpf-lb-sock=false).