Project author: davidwalter0

Project description:
load balancer to automate service access outside of a kubernetes cluster
Language: Go
Repository: git://github.com/davidwalter0/loadbalancer.git
Created: 2018-01-03T09:52:02Z
Project community: https://github.com/davidwalter0/loadbalancer

License: Apache License 2.0

Download


Little load balancer


Load Balancing

IP addresses are added to the host, and traffic is routed from external
client applications to internal services. External IPs are updated and
added to Kubernetes Services by the loadbalancer, and the local host's
link device owns the CIDR block of routable addresses on the link
device's subnet.



Data Flow

External (Out of Cluster) LoadBalancer


In Cluster LoadBalancer


Flow / Sequence Description

  • Connect to the kubernetes cluster
  • Watch services
    • when Type=LoadBalancer
      • load endpoints for the service name/namespace
      • create a forward service listening on the loadbalancer IP + port
      • accept new connections
      • create a "pipe": a bidirectional copy between the source
        connection and the endpoint
    • when the key is deleted, or the type is changed away from
      LoadBalancer, delete the forward service
    • when loadBalancerIP is set and the IP hasn't been added yet, it
      will be added to the ethernet device specified as the LinkDevice,
      e.g. --linkdevice eth0
  • Watch nodes
    • add or remove nodes from events in the queue
    • use nodes with the label node-role.kubernetes.io/worker
    • during node creation, add the label with the kubelet flag
      --node-labels=node-role.kubernetes.io/worker
    • use the ExternalID from the node spec as the IP endpoint
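The accept-and-pipe step can be sketched in Go. This is a minimal illustration, not the repository's actual code: pipe and demo are hypothetical names, and net.Pipe stands in for the real accepted connection and dialed endpoint.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// pipe copies bytes in both directions between the accepted client
// connection and the dialed endpoint, closing both when either side ends.
func pipe(client, endpoint net.Conn) {
	go func() {
		io.Copy(endpoint, client) // client -> endpoint
		endpoint.Close()
	}()
	io.Copy(client, endpoint) // endpoint -> client
	client.Close()
}

// demo wires an in-memory client and an echoing endpoint through pipe
// and returns what the client reads back.
func demo() string {
	client, lbAccept := net.Pipe() // external client <-> listener side
	lbDial, endpoint := net.Pipe() // dialer side <-> service endpoint

	go pipe(lbAccept, lbDial)

	// Fake endpoint: echo five bytes back, then close.
	go func() {
		buf := make([]byte, 5)
		io.ReadFull(endpoint, buf)
		endpoint.Write(buf)
		endpoint.Close()
	}()

	client.Write([]byte("hello"))
	reply := make([]byte, 5)
	io.ReadFull(client, reply)
	return string(reply)
}

func main() {
	fmt.Println(demo()) // hello
}
```

In the real flow the client side comes from Accept() on the loadbalancer IP + port, and the endpoint side from dialing the NodePort or pod endpoint.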

Manage routes / addresses for external IP addresses

  • Add or remove IP addresses from the load balancer service definition

    • add if not present
    • maintain a map of addresses
    • remove when the last load balancer using the address is removed
  • Disallow reuse of service-specified loadBalancerIPs

    • when adding an IP address, it will be used by only one load
      balancer service
    • each service must choose a unique address
    • if using the default address, multiple services may share the
      default linkdevice address, but port collision management is up to
      the author of the service specification
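The ownership bookkeeping above can be sketched as a small registry. IPRegistry, Claim, and Release are hypothetical names for illustration, not the project's actual API.

```go
package main

import "fmt"

// IPRegistry tracks which service owns each loadBalancerIP and how many
// services currently use each address.
type IPRegistry struct {
	owner map[string]string // ip -> owning service (namespace/name)
	refs  map[string]int    // ip -> number of services using it
}

func NewIPRegistry() *IPRegistry {
	return &IPRegistry{owner: map[string]string{}, refs: map[string]int{}}
}

// Claim reserves ip for svc. A specified loadBalancerIP may be used by
// only one service; a second claimant is rejected.
func (r *IPRegistry) Claim(ip, svc string) bool {
	if o, ok := r.owner[ip]; ok && o != svc {
		return false // address already owned by another service
	}
	r.owner[ip] = svc
	r.refs[ip]++
	return true
}

// Release drops one use of ip and reports whether the address can now be
// removed from the link device (the last user is gone).
func (r *IPRegistry) Release(ip string) bool {
	if r.refs[ip] == 0 {
		return false
	}
	r.refs[ip]--
	if r.refs[ip] == 0 {
		delete(r.owner, ip)
		return true
	}
	return false
}

func main() {
	r := NewIPRegistry()
	fmt.Println(r.Claim("192.168.0.226", "default/echo5")) // true
	fmt.Println(r.Claim("192.168.0.226", "default/other")) // false: in use
	fmt.Println(r.Release("192.168.0.226"))                // true: last user
}
```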

Example use

Build and run loadbalancer with superuser permissions so that
loadbalancer can modify routes and use privileged ports.

You can run it with sudo locally on the VMs, or run it on a kubernetes
node configured with a bridged adapter (see below) and labeled
node-role.kubernetes.io/load-balancer.

Build and push

  1. LINK_DEVICE=eth2 DOCKER_USER=davidwalter IMAGE=loadbalancer make build image yaml push push-tag apply

Build and run locally

  1. make build
  2. sudo bin/loadbalancer --kubeconfig cluster/auth/kubeconfig --linkdevice eth0

Run an echo service on port 8888

  1. kubectl apply -f https://raw.githubusercontent.com/davidwalter0/echo/master/daemonset.yaml

Then create and modify the services like the following

  # ------------------------- Service ------------------------- #
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: echo
    labels:
      app: echo
  spec:
    selector:
      app: echo
    ports:
    - port: 8888
      name: echo
Then update it with a definition similar to the following (kubectl apply -f service.yaml). With the loadbalancer running
outside the cluster, the accessible port will be the service Port; the
Kubernetes-assigned NodePort becomes the upstream sink, so the
loadbalancer adds a new external port using the inserted NodePort value
as the destination.

  # ------------------------- Service ------------------------- #
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: echo
    labels:
      app: echo
  spec:
    selector:
      app: echo
    ports:
    - port: 8888
      name: echo
    type: LoadBalancer

Now you can curl loadbalancerIP:8888, where loadbalancerIP is the
address of the host the loadbalancer is running on.


IPs will be added when needed and ports assigned based on the
service port. IPs are added on the specified LinkDevice (the ethernet
device for external routes). A service description with an IP address
adds that IP to the LinkDevice:

  # ------------------------- Service ------------------------- #
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: echo5
    labels:
      app: echo
  spec:
    selector:
      app: echo
    ports:
    - port: 8888
      name: echo
    loadBalancerIP: 192.168.0.226
    type: LoadBalancer

Now you can curl loadbalancerIP:8888, where loadbalancerIP is the
address specified in the service (here 192.168.0.226).

The IP management is similar to the ip command:

  ip addr add ip/bits dev linkdevice
  ip addr add 192.168.0.226/24 dev linkdevice

but it derives the CIDR mask bits from the existing route information
on the specified link device.

The reciprocal removal uses the existing CIDR definition when there
are no more listeners on the IP.

List

List services and their type

  1. printf "$(kubectl get svc --all-namespaces --output=go-template --template='{{range .items}}{{.metadata.namespace}}/{{.metadata.name}}:{{.spec.type}} LB:{{ .spec.loadBalancerIP }} ExternalIPs{{.spec.externalIPs}}\n{{end}}')"

Service addresses for load balancers

  1. printf "$(kubectl get svc --all-namespaces --output=go-template --template='{{range .items}}{{if eq .spec.type "LoadBalancer"}}{{.metadata.namespace}}/{{.metadata.name}}:{{.spec.type}} LB:{{ .spec.loadBalancerIP }} ExternalIPs{{.spec.externalIPs}}\n{{end}}{{end}}')"

Dashboard

Another example enables a routable dashboard, assuming you've already
created the certificates for the dashboard:

  1. kubectl create secret generic kubernetes-dashboard-certs --from-file=cluster/tls --namespace=kube-system
  2. kubectl apply -f examples/manifests/kubernetes-dashboard.yaml
  3. kubectl apply -f examples/manifests/kubernetes-dashboard-lb.yaml

The dashboard should be visible on the loadBalancerIP and port specified in the kubernetes-dashboard-lb.yaml

From the yaml file that would be loadBalancerIP: 192.168.0.251 and
port: 443, so the application will be exposed at the address and
port 192.168.0.251:443

  ports:
  - port: 443
    targetPort: 8443
    name: kubernetes-dashboard
  loadBalancerIP: 192.168.0.251
  type: LoadBalancer

BUGS

  • Unique IP assignment fails
    • When two services attempt to use the same address, the second
      will fail with a logged error and then the service will be ignored.

TODO

Features / behaviour

The following have moved to complete and testing:

  • Load active devices (use --linkdevice to specify the active
    device)
  • Load the active primary IP address per device
    • must specify the device on the command line: --linkdevice
  • Set the default IP address per device
  • Check whether a new load balancer request's IP matches a device's
    default subnet, and add it if not found
  • Catch/recover from errors associated with a missing IP, an illegal
    IP/CIDR, or an address in use, and report accordingly
    • check for a valid IP address; ignore if invalid
  • Get the endpoint node list by service
    • marry nodes to NodePorts as service endpoints for out-of-cluster use
  • Create an endpoint watcher similar to the service watcher
    • out of cluster, use the node watcher
  • All namespaces through one load balancer
  • Update service ExternalIPs with the IP address of the load balancer
  • Add a signal handler to clean up ExternalIPs on shutdown (SIGINT,
    SIGTERM)
  • Run in a Kubernetes-managed deployment pod inside the cluster
  • IP address endpoint assignment by collecting node names from the
    kubernetes cluster
    • Complete
  • Test InCluster endpoint activity
    • In progress

Possible Future Work

  • research a netlink network route/device watcher, for both insertion
    of physical hardware and default address changes
  • allow multiple ports per service to be forwarded

loadbalancer/examples/manifests:

Ensure that the loadBalancerIP addresses you use are in the subnet of
the specified device and not reserved, or, if using a home router,
outside the range the router will offer to devices on the network.
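A subnet containment check along these lines can be written with Go's net package; inDeviceSubnet is a hypothetical helper and the addresses are examples.

```go
package main

import (
	"fmt"
	"net"
)

// inDeviceSubnet reports whether a requested loadBalancerIP falls inside
// the link device's subnet.
func inDeviceSubnet(ip, deviceCIDR string) bool {
	_, subnet, err := net.ParseCIDR(deviceCIDR)
	if err != nil {
		return false
	}
	return subnet.Contains(net.ParseIP(ip))
}

func main() {
	fmt.Println(inDeviceSubnet("192.168.0.226", "192.168.0.1/24")) // true
	fmt.Println(inDeviceSubnet("10.0.0.5", "192.168.0.1/24"))      // false
}
```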

Many of the simple examples are based on the echo service

  1. kubectl apply -f examples/manifests/echodaemonset.yaml
  • kubernetes-dashboard-lb.yaml
  • kubernetes-dashboard.yaml
  • service-lb-new-addr.yaml
    • load balancer with a specified address loadBalancerIP=
  • service-lb.yaml
    • load balancer without a specified address
  • service.yaml

If you run these in a locally configured VM with a bridged interface
the dynamically allocated ip addresses are visible to the external
network while isolating network changes from the host machine in the
VM.

  • Running in a managed Kubernetes deployment pod inside the cluster
    • Manage ip addresses on linkdevice
    • Add address to and remove address from the linkdevice and use the
      address specified in the service’s loadBalancerIP field as the
      service’s externalIP
    • Example files: enable cluster role and configure deployment
      • kubectl apply -f examples/manifests/loadbalancerdeployment.yaml -f examples/manifests/loadbalancerclusterrole.yaml
      • loadbalancerdeployment.yaml
      • loadbalancerclusterrole.yaml
    • Run inside a manually configured bridge in virtualbox or a
      bridged interface with vagrant
      • https://www.vagrantup.com/docs/networking/public_network.html
      • in Vagrant you can select the interface to use as the bridge and
        add the bridge when provisioning the VM
        • config.vm.network :public_network, bridge: "wlan0"
        • config.vm.network :public_network, bridge: "eth0"
        • answer the prompt with the bridge interface number
    • Run in cluster with host network privilege inside a kubernetes
      managed pod and a bridge interface specified with --linkdevice
      • label the node node-role.kubernetes.io/load-balancer="primary"
      • run a deployment or a replica set with a replica count of one:
        replicas: 1
      • use the bridge interface device to apply the changes
      • configure permissions if the cluster has RBAC enabled
      • loadbalancer configures IPs on the bridged interface supplied on
        the command line
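A deployment fragment granting these privileges might look like the following; the field values are illustrative assumptions, not the contents of loadbalancerdeployment.yaml.

```yaml
# Sketch: run the loadbalancer pod on the host network with NET_ADMIN so
# it can add/remove addresses on the bridge interface.
spec:
  template:
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/load-balancer: primary
      containers:
      - name: loadbalancer
        image: davidwalter/loadbalancer
        args: ["--linkdevice", "eth0"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
```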

Configuring manifests and nodes for scheduling affinity / anti-affinity

(bootkube … multi-mode filesystem configuration reference)

Modify calico.yaml and kube-proxy.yaml in cluster/manifests

  tolerations:
  # Allow the pod to run on master nodes
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  # Allow the pod to run on loadbalancer nodes
  - key: node-role.kubernetes.io/loadbalancer
    effect: NoSchedule

Force the load balancer to schedule only on a node labeled
node-role.kubernetes.io/loadbalancer, and allow it to schedule there
with a toleration:

  tolerations:
  - key: node-role.kubernetes.io/loadbalancer
    operator: Exists
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/loadbalancer
            operator: Exists

Taint the load balancer node to repel all pods except those whose manifests tolerate the taint.

Label the node for scheduling affinity; taint it for general anti-affinity:

  .
  .
  .
  --node-labels=node-role.kubernetes.io/loadbalancer=primary \
  --register-with-taints=node-role.kubernetes.io/loadbalancer=:NoSchedule \
  .
  .