Load balancer to automate service access from outside a Kubernetes cluster
Little load balancer
IP addresses are added to the host and traffic is routed from external
client applications to internal services. External IPs are updated and
added to Kubernetes Services by the load balancer, and the local host's
link device owns the CIDR block of routable addresses on the link device
subnet.
--linkdevice eth0
node-role.kubernetes.io/worker
Manage routes and addresses for external IP addresses
Add or remove IP addresses from the load balancer service definition
Disallow reuse of service-specified loadBalancerIPs
Build and run the load balancer with superuser permissions so that it
can modify routes and use privileged ports.
You can run it with sudo locally on the VMs, or run it on a Kubernetes node
configured with a bridged adapter (see below) and tagged with the node-role.kubernetes.io/load-balancer label.
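As an alternative to running the whole process under sudo, one sketch is to grant only the needed Linux capabilities to the binary; cap_net_admin and cap_net_bind_service are assumptions about what the route changes and privileged ports require, not something this project documents:

sudo setcap cap_net_admin,cap_net_bind_service+ep bin/loadbalancer   # assumed capability set, untested against this binary
bin/loadbalancer --kubeconfig cluster/auth/kubeconfig --linkdevice eth0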
Build and push
LINK_DEVICE=eth2 DOCKER_USER=davidwalter IMAGE=loadbalancer make build image yaml push push-tag apply
Build and run locally
make build
sudo bin/loadbalancer --kubeconfig cluster/auth/kubeconfig --linkdevice eth0
Run an echo service on port 8888
kubectl apply -f https://raw.githubusercontent.com/davidwalter0/echo/master/daemonset.yaml
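To confirm the echo pods are up before wiring the service (the app=echo label matches the selector used in the services below):

kubectl get pods -l app=echo -o wide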
Then create and modify services like the following:
# ------------------------- Service ------------------------- #
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  labels:
    app: echo
spec:
  selector:
    app: echo
  ports:
  - port: 8888
    name: echo
Then update it with a definition similar to the following and apply it
with kubectl apply -f service.yaml. With the load balancer running
outside the cluster, the accessible port is the service's port; the
Kubernetes-assigned NodePort becomes the upstream sink, and the load
balancer adds a new external listener that uses the NodePort value as
its destination.
# ------------------------- Service ------------------------- #
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  labels:
    app: echo
spec:
  selector:
    app: echo
  ports:
  - port: 8888
    name: echo
  type: LoadBalancer
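Once the type is LoadBalancer, you can confirm the NodePort Kubernetes assigned, which is the upstream the external listener forwards to:

kubectl get svc echo -o jsonpath='{.spec.ports[0].nodePort}'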
Now you can curl loadbalancerIP:8888, where loadbalancerIP is the
address of the host the load balancer is running on.
IPs will be added when needed and ports assigned based on the service
port. IPs will be added on the specified LinkDevice (the ethernet device
for external routes). A service definition that specifies an IP address
adds that IP to the LinkDevice:
# ------------------------- Service ------------------------- #
---
apiVersion: v1
kind: Service
metadata:
  name: echo5
  labels:
    app: echo
spec:
  selector:
    app: echo
  ports:
  - port: 8888
    name: echo
  loadBalancerIP: 192.168.0.226
  type: LoadBalancer
Now you can curl loadBalancerIP:8888, where loadBalancerIP is the
address specified in the service, which the load balancer adds to the
link device on its host.
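For example, with the manifest above (eth0 stands in for whatever --linkdevice the load balancer was started with):

ip addr show dev eth0 | grep 192.168.0.226   # the address should now be on the link device
curl http://192.168.0.226:8888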
The IP management is similar to the ip command

ip addr add ip/bits dev linkdevice

for example

ip addr add 192.168.0.226/24 dev linkdevice

but derives the CIDR mask bits from the existing route information on
the specified link device. The reciprocal removal

ip addr del ip/bits dev linkdevice

uses the existing CIDR definition and happens when there are no more
listeners on the IP.
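To see the route information those mask bits are derived from, you can inspect the link-scoped routes on the device (a standard iproute2 query, shown only as an illustration):

ip route show dev eth0 scope link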
List
List services and their type
printf "$(kubectl get svc --all-namespaces --output=go-template --template='{{range .items}}{{.metadata.namespace}}/{{.metadata.name}}:{{.spec.type}} LB:{{ .spec.loadBalancerIP }} ExternalIPs{{.spec.externalIPs}}\n{{end}}')"
Service addresses for load balancers
printf "$(kubectl get svc --all-namespaces --output=go-template --template='{{range .items}}{{if eq .spec.type "LoadBalancer"}}{{.metadata.namespace}}/{{.metadata.name}}:{{.spec.type}} LB:{{ .spec.loadBalancerIP }} ExternalIPs{{.spec.externalIPs}}\n{{end}}{{end}}')"
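A simpler, roughly equivalent listing using kubectl's built-in custom columns (same fields, less templating):

kubectl get svc --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type,LB-IP:.spec.loadBalancerIP,EXTERNAL-IPS:.spec.externalIPs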
Another example enables a routable dashboard, assuming you've already
created the certificates for the dashboard:
kubectl create secret generic kubernetes-dashboard-certs --from-file=cluster/tls --namespace=kube-system
kubectl apply -f examples/manifests/kubernetes-dashboard.yaml
kubectl apply -f examples/manifests/kubernetes-dashboard-lb.yaml
The dashboard should be visible on the loadBalancerIP and port specified
in kubernetes-dashboard-lb.yaml. From that yaml file, loadBalancerIP: 192.168.0.251
and port: 443, so the application will be exposed at the address and
port 192.168.0.251:443:
ports:
- port: 443
  targetPort: 8443
  name: kubernetes-dashboard
loadBalancerIP: 192.168.0.251
type: LoadBalancer
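Assuming the certificates you created are self-signed, verifying from outside the cluster needs curl's -k flag to skip certificate verification:

curl -k https://192.168.0.251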
BUGS
TODO
Moved to complete and testing
Possible Future Work
loadbalancer/examples/manifests:
Ensure that the loadBalancerIP addresses you use are in the subnet of
the specified link device and are not reserved; if you're using a home
router, pick addresses outside the range the router will offer to
devices on the network.
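To check the device's subnet before choosing addresses:

ip -4 addr show dev eth0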
Many of the simple examples are based on the echo service:
kubectl apply -f examples/manifests/echodaemonset.yaml
If you run these in a locally configured VM with a bridged interface,
the dynamically allocated IP addresses are visible to the external
network, while the VM isolates the network changes from the host
machine.
https://www.vagrantup.com/docs/networking/public_network.html
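A minimal Vagrantfile fragment for such a bridged (public) network, per the linked Vagrant docs; the bridge device name here is an assumption and is host-specific:

Vagrant.configure("2") do |config|
  config.vm.network "public_network", bridge: "eth0"
end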
node-role.kubernetes.io/load-balancer="primary"
Configuring manifests and nodes for scheduling affinity / anti-affinity
(bootkube … multi-mode filesystem configuration reference)
Modify calico.yaml and kube-proxy.yaml in cluster/manifests:
tolerations:
# Allow the pod to run on master nodes
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# Allow the pod to run on loadbalancer nodes
- key: node-role.kubernetes.io/loadbalancer
  effect: NoSchedule
Force the load balancer to be scheduled only on a node labeled
node-role.kubernetes.io/loadbalancer, and allow it to be scheduled
there with a toleration:
tolerations:
- key: node-role.kubernetes.io/loadbalancer
  operator: Exists
  effect: NoSchedule
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/loadbalancer
          operator: Exists
Taint the load balancer node to repel pods (scheduling anti-affinity for
all but the pods whose manifests tolerate the taint). Label the node for
scheduling affinity, and taint it for general anti-affinity:
.
.
.
--node-labels=node-role.kubernetes.io/loadbalancer=primary \
--register-with-taints=node-role.kubernetes.io/loadbalancer=:NoSchedule \
.
.
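The same label and taint can also be applied to an existing node with kubectl (the node name lb-node-1 is hypothetical):

kubectl label node lb-node-1 node-role.kubernetes.io/loadbalancer=primary
kubectl taint node lb-node-1 node-role.kubernetes.io/loadbalancer:NoSchedule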