In Kubernetes, a Service makes an application running on a set of pods network-accessible. It provides a consistent DNS name for these pods and distributes traffic across them for load balancing. This section describes the basic concepts of Kubernetes Services and provides a comparison of various Service types.
Service Types
You can create a specified type of Service in a cluster. The description and application scenarios of different types of Services are listed in the table below.
| Service Type | Description | Application Scenario |
|---|---|---|
| ClusterIP | ClusterIP Services are the default type of Kubernetes Services. They assign a virtual IP address, accessible only within the cluster, from the cluster's Service CIDR block. | Services that only need to communicate with each other within a cluster and do not need to be exposed outside the cluster. For example, if a frontend application pod in a cluster needs to access a backend database in the same cluster, you can create a ClusterIP Service for the backend database to ensure that the backend database can be accessed only within the cluster. |
| NodePort | A NodePort Service opens a port on each node in a cluster to allow external traffic to access the Service through <node-IP-address>:<node-port>. | Scenarios where temporary or low-traffic access is required. For example, in a testing environment, you can use a NodePort Service when deploying and debugging a web application. |
| LoadBalancer | A LoadBalancer Service adds an external load balancer on top of a NodePort Service and distributes external traffic to multiple pods within a cluster. It automatically assigns an external IP address so that clients can access the Service through this IP address. LoadBalancer Services process TCP and UDP traffic at Layer 4 (transport layer) of the OSI model and can be extended to support Layer 7 (application layer) capabilities for HTTP and HTTPS traffic. | Cloud applications that require a stable, easy-to-manage entry for external access. For example, in a production environment, you can use LoadBalancer Services to expose public-facing services such as web applications and API services to the Internet. These services often need to handle high volumes of external traffic while maintaining high availability. |
| DNAT | A DNAT Service provides Network Address Translation (NAT) for all nodes in a cluster so that multiple nodes can share an EIP. | Services that require temporary or low-volume access from the Internet. DNAT Services provide higher reliability than NodePort Services. With a DNAT Service, there is no need to bind an EIP to a single node, and requests can still be distributed to the workload even if any node in the cluster is down. |
| Headless | For headless Services, no cluster IP address is allocated. When a client performs a DNS query for the Service, it receives a list of the IP addresses of the backend pods, which allows the client to communicate directly with individual pods. | Applications that require direct communication with specific backend pods instead of using proxies or load balancers. For example, when you deploy a stateful application (such as a ClickHouse database), you can use a headless Service. It allows application pods to directly access each ClickHouse pod and enables targeted read and write operations, which enhances overall data processing efficiency. |
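For reference, a minimal sketch of a ClusterIP Service and a headless Service manifest. The names (backend-db, clickhouse), labels, and ports below are illustrative assumptions, not values from this document:

```yaml
# ClusterIP Service: reachable only inside the cluster through a virtual IP
apiVersion: v1
kind: Service
metadata:
  name: backend-db          # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: backend-db         # matches the backend database pods
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
---
# Headless Service: no cluster IP; DNS queries return the backend pod IP addresses directly
apiVersion: v1
kind: Service
metadata:
  name: clickhouse          # hypothetical name
spec:
  clusterIP: None
  selector:
    app: clickhouse
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
```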
Why a Service Fails to Be Accessed from Within the Cluster
If the service affinity of a Service is set to the node level, that is, externalTrafficPolicy is set to Local, the Service may fail to be accessed from within the cluster (specifically, from nodes or containers in the cluster). Error information similar to the following is displayed:
```
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```

Or:

```
curl: (7) Failed to connect to 192.168.10.36 port 900: Connection refused
```
This typically occurs when a load balancer is accessed from inside the cluster. The reason is as follows: when Kubernetes creates a Service, kube-proxy adds the load balancer's access address as an external IP address (EXTERNAL-IP, as shown in the following command output) to iptables or IPVS rules. If a client inside the cluster initiates a request to the load balancer address, the address is treated as the Service's external IP address, and kube-proxy forwards the request directly without passing through the load balancer outside the cluster.
```
# kubectl get svc nginx
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP                  PORT(S)        AGE
nginx   LoadBalancer   10.247.76.156   123.**.**.**,192.168.0.133   80:32146/TCP   37s
```
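To check which affinity mode a Service uses, you can query its externalTrafficPolicy field. A minimal sketch, assuming a Service named nginx in the default namespace:

```
# Prints "Local" for node-level affinity or "Cluster" for cluster-level affinity
kubectl get svc nginx -n default -o jsonpath='{.spec.externalTrafficPolicy}'
```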
When the value of externalTrafficPolicy is Local, the access failures in different container network models and service forwarding modes are as follows:
- For a multi-pod workload, ensure that all pods are accessible. Otherwise, there is a possibility that the access to the workload fails.
- In a CCE Turbo cluster that utilizes Cloud Native Network 2.0, node-level affinity is supported only when the Service backend is connected to a hostNetwork pod.
- The table lists only the scenarios where access may fail. In scenarios not listed in the table, access is normal.
| Service Type on the Server | Access Type | Client Location | Tunnel Network Cluster (IPVS) | VPC Network Cluster (IPVS) | Tunnel Network Cluster (iptables) | VPC Network Cluster (iptables) |
|---|---|---|---|---|---|---|
| NodePort Service | Public/private network | The node running the service pod | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. |
| NodePort Service | Public/private network | A node not running the service pod | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Access succeeds. | Access succeeds. |
| NodePort Service | Public/private network | Other containers on the node running the service pod | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Access fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Access fails. |
| NodePort Service | Public/private network | Other containers on nodes not running the service pod | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. | Accessing the IP:NodePort of the node running the service pod succeeds; accessing other nodes' IP:NodePort fails. |
| LoadBalancer Service using a shared load balancer | Private network | The node running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| LoadBalancer Service using a shared load balancer | Private network | Other containers on the node running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| DNAT gateway Service | Public network | The node running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| DNAT gateway Service | Public network | A node not running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| DNAT gateway Service | Public network | Other containers on the node running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| DNAT gateway Service | Public network | Other containers on nodes not running the service pod | Access fails. | Access fails. | Access fails. | Access fails. |
| LoadBalancer Service using a dedicated load balancer (Local) for interconnection with NGINX Ingress Controller | Private network | The node running the cceaddon-nginx-ingress-controller pod | Access fails. | Access fails. | Access fails. | Access fails. |
| LoadBalancer Service using a dedicated load balancer (Local) for interconnection with NGINX Ingress Controller | Private network | Other containers on the node running the cceaddon-nginx-ingress-controller pod | Access fails. | Access fails. | Access fails. | Access fails. |
The following methods can be used to solve this problem:
- (Recommended) Within the cluster, access the workload through the ClusterIP Service or the Service domain name (see the sketch after this list).
- Set externalTrafficPolicy of the Service to Cluster, which means cluster-level service affinity. Note that this affects source address persistence.

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubernetes.io/elb.class: union
      kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}'
    labels:
      app: nginx
    name: nginx
  spec:
    externalTrafficPolicy: Cluster
    ports:
    - name: service0
      port: 80
      protocol: TCP
      targetPort: 80
    selector:
      app: nginx
    type: LoadBalancer
  ```
- Use the passthrough feature of the Service, which bypasses kube-proxy when the ELB address is used for access. The ELB load balancer is accessed first, and then the workload. For details, see Configuring Passthrough Networking for a LoadBalancer Service.
  Note:
- In a CCE standard cluster, after passthrough networking is configured using a dedicated load balancer, the private IP address of the load balancer cannot be accessed from the node where the workload pod resides or other pods on the same node as the workload.
- Passthrough networking is not supported for clusters of v1.15 or earlier.
- In IPVS network mode, the passthrough settings of Services connected to the same load balancer must be the same.
- If node-level (local) service affinity is used, kubernetes.io/elb.pass-through is automatically set to onlyLocal to enable pass-through.
  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubernetes.io/elb.pass-through: "true"
      kubernetes.io/elb.class: union
      kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}'
    labels:
      app: nginx
    name: nginx
  spec:
    externalTrafficPolicy: Local
    ports:
    - name: service0
      port: 80
      protocol: TCP
      targetPort: 80
    selector:
      app: nginx
    type: LoadBalancer
  ```
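As referenced in the first (recommended) solution above, a minimal sketch of in-cluster access through the Service domain name. The Service name (nginx), namespace (default), and port (80) are assumptions for illustration:

```
# From a pod inside the cluster, access the Service by its cluster DNS name
curl http://nginx.default.svc.cluster.local:80
```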