Learn Kubernetes with k3s - part 4 - Services
This post is the continuation of part 3.
Previously, I explained what Kubernetes services are and used kubectl
to list the services running on my cluster (in part 2).
$ kubectl get services -A
NAMESPACE     NAME             TYPE           CLUSTER-IP    EXTERNAL-IP                 PORT(S)                      AGE
default       kubernetes       ClusterIP      10.43.0.1     <none>                      443/TCP                      7d1h
kube-system   kube-dns         ClusterIP      10.43.0.10    <none>                      53/UDP,53/TCP,9153/TCP       7d1h
kube-system   metrics-server   ClusterIP      10.43.22.2    <none>                      443/TCP                      7d1h
kube-system   traefik          LoadBalancer   10.43.158.9   192.168.1.53,192.168.1.54   80:30824/TCP,443:30872/TCP   7d1h
Let's explore the different types of Kubernetes services.
ClusterIP
Try connecting to the metrics-server ClusterIP service.
On tab 1, run:
$ kubectl port-forward svc/metrics-server 8443:443 -n kube-system
Forwarding from 127.0.0.1:8443 -> 10250
Forwarding from [::1]:8443 -> 10250
On tab 2:
$ curl -k https://localhost:8443/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
A 403 error! Nonetheless, the connection works: the request reached the service; we are simply not authorized.
ClusterIP: exposes the service on a cluster-internal IP. This type makes the service only reachable from within the cluster. This is the default service type.
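For reference, this is roughly what a minimal ClusterIP service manifest looks like. The name my-app and its selector label are illustrative, not something deployed in this series:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
  namespace: default
spec:
  type: ClusterIP         # the default; this line could be omitted
  selector:
    app: my-app           # routes to pods labelled app=my-app
  ports:
    - port: 443           # port exposed on the cluster IP
      targetPort: 8443    # port the selected pods actually serve on
```

The selector is what ties the service to its pods: the service's endpoints are kept in sync with whatever pods currently carry that label.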
From within the cluster, you can connect to metrics-server directly by its DNS name: a service resolves as <service>.<namespace>, or by its fully qualified name <service>.<namespace>.svc.cluster.local.
$ kubectl run curl --image=curlimages/curl --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
~ $ curl -k https://metrics-server.kube-system
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
~ $ exit
pod "curl" deleted
LoadBalancer
We already tried connecting to the traefik LoadBalancer service in part 2.
$ curl -k https://192.168.1.53
404 page not found
$ curl -k https://192.168.1.54
404 page not found
LoadBalancer: exposes the service externally using a load balancer. Traffic is routed to the pods of a cluster-internal service. The load balancer tracks the endpoints associated with the service and balances traffic accordingly.
k3s ships with a load-balancer implementation called ServiceLB. It exposes the underlying service directly on the IPs of selected nodes, which is why the node IPs 192.168.1.53 and 192.168.1.54 appear as the external IPs of the traefik service above.
To complete the load-balancing act, we would still need something like round-robin DNS to distribute external traffic across these node IPs, which is a hassle. On a managed Kubernetes cluster such as EKS or GKE, the load balancer is provisioned by the cloud provider at a stable external IP.
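Declaring a LoadBalancer service is just a matter of the type field. A minimal sketch (the name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb             # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # routes to pods labelled app=my-app
  ports:
    - port: 80
      targetPort: 8080
```

On k3s, ServiceLB would pick this up and publish it on the node IPs; on a managed cluster, the same manifest causes the cloud provider to provision an external load balancer instead.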
NodePort
We don't have a NodePort service yet. Install Helm; we will use it to install an nginx server as a NodePort service.
Helm is a package manager for Kubernetes that makes deploying and managing Kubernetes applications easy. It automates the creation, packaging, configuration, and deployment of Kubernetes applications.
A Helm chart is a package of pre-configured Kubernetes resources. It comprises templates for the Kubernetes manifest files that are typically involved in deploying complicated distributed applications or services.
Create a file values.yaml
with the following values:
service:
  type: NodePort
  nodePorts:
    http: 30080
You can check the parameters on the nginx helm chart page. Run the helm command as given on the page with an additional -f values.yaml
flag to use the values we just created.
$ helm install my-release \
oci://registry-1.docker.io/bitnamicharts/nginx \
-f values.yaml
$ kubectl get services
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.43.0.1       <none>        443/TCP        8d
my-release-nginx   NodePort    10.43.206.241   <none>        80:30080/TCP   9m30s
After that, test the service:
$ curl http://192.168.1.53:30080
<!DOCTYPE html>
<html>
......
$ curl http://192.168.1.54:30080
<!DOCTYPE html>
<html>
......
NodePort: exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. This makes the service reachable from outside the cluster.
NodePort does not perform any load balancing across nodes. Traffic simply arrives at whichever node the client chose to connect to, so some nodes can get overwhelmed with requests while others remain idle. There is no mechanism in NodePort services to distribute client connections evenly across multiple nodes.
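One detail worth knowing: nodePort values must fall within the kube-apiserver's --service-node-port-range, which defaults to 30000-32767. That is why 30080 was a valid choice in our values.yaml. A quick sketch of the check:

```shell
# nodePort values must lie in the kube-apiserver's --service-node-port-range,
# which defaults to 30000-32767; values outside it are rejected by the API.
node_port=30080
if [ "$node_port" -ge 30000 ] && [ "$node_port" -le 32767 ]; then
  echo "nodePort $node_port is within the default range"
else
  echo "nodePort $node_port is out of range" >&2
fi
```

Pick a port outside that range in values.yaml and the helm install will fail with a validation error from the API server.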
ExternalName
Create a file service.yaml
with the following values:
apiVersion: v1
kind: Service
metadata:
  name: my-google
  namespace: default
spec:
  type: ExternalName
  externalName: google.com
Run
$ kubectl apply -f service.yaml
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.43.0.1       <none>        443/TCP        8d
my-release-nginx   NodePort       10.43.206.241   <none>        80:30080/TCP   29m
my-google          ExternalName   <none>          google.com    <none>         77s
To test the service:
$ kubectl run curl --image=curlimages/curl \
--restart=Never --rm -it -- \
curl -k https://my-google
<!DOCTYPE html>
<html lang=en>
......
ExternalName: maps the service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying or load balancing is set up. This allows external services to be referenced by an in-cluster name. Note that we had to pass -k to curl above: TLS certificates are issued for the external hostname (google.com), not for the in-cluster alias (my-google), so certificate verification would otherwise fail.
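A common use of ExternalName is to give an external dependency, such as a managed database, a stable in-cluster name, so applications can later be repointed without code changes. A sketch, with an illustrative hostname:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-db            # in-cluster alias that applications connect to
  namespace: default
spec:
  type: ExternalName
  # illustrative external hostname; change only this line if the
  # database moves, and the in-cluster alias keeps working
  externalName: db.example.com
```

If the database is later migrated into the cluster, the same service name can be converted to a ClusterIP service with a selector, again without touching the applications.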
To be continued in part 5, more about Helm.