Project author: William-Yeh

Project description: gRPC Load Balancing Demo
Language: Go
Repository: git://github.com/William-Yeh/grpc-lb.git
Created: 2020-03-30T02:35:28Z
Project community: https://github.com/William-Yeh/grpc-lb


gRPC Load Balancing Demo

A simple demo of gRPC load balancing.

Environment requirements

The following software is required to run the demo:

  • Go toolchain (to build the native binaries with build.sh)
  • Docker (or another container image builder supported by Skaffold)
  • A local Kubernetes cluster and kubectl
  • Skaffold (to build images and deploy them to the cluster)
  • Linkerd 2 CLI (for the service-mesh part of the demo)

Demo in native mode

Build

Build the native binaries:

  ./build.sh

The following binaries will be generated in the out directory:

  • server: sends its IP address back to HTTP and gRPC clients.

  • client-http: connects to server via HTTP.

  • client-grpc: connects to server via gRPC.
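
Each server instance reports the IP address of the host (or pod) it runs on. This README does not show how the server discovers that address; one common Go idiom is sketched below. The destination 8.8.8.8:80 is arbitrary and never contacted, since dialing UDP sends no packets; it only makes the kernel pick a source address.

  package main

  import (
      "fmt"
      "net"
  )

  // localIP returns the IP address the kernel would use as the source
  // address for outbound traffic.
  func localIP() (string, error) {
      conn, err := net.Dial("udp", "8.8.8.8:80")
      if err != nil {
          return "", err
      }
      defer conn.Close()
      return conn.LocalAddr().(*net.UDPAddr).IP.String(), nil
  }

  func main() {
      ip, err := localIP()
      if err != nil {
          panic(err)
      }
      fmt.Println(ip)
  }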

Run

Arrange your terminal panes as follows for a better experience:

Fig 1. Native clients and native server

Pane A
Start the server:

  out/server

The server exposes the same addr service on two TCP ports:

  • Port 80 for HTTP
  • Port 30051 for gRPC
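
This README does not show the server's internals; the following is a minimal, assumed sketch of one Go process serving HTTP on :80 and gRPC on :30051. The HTTP handler body and the commented-out registration call (pb.RegisterAddrServer, addrServer) are hypothetical placeholders, not the repository's actual code.

  package main

  import (
      "fmt"
      "log"
      "net"
      "net/http"

      "google.golang.org/grpc"
  )

  func main() {
      // HTTP endpoint on port 80.
      go func() {
          http.HandleFunc("/addr", func(w http.ResponseWriter, r *http.Request) {
              // The real demo writes the server's IP address here.
              fmt.Fprintln(w, "server address placeholder")
          })
          log.Fatal(http.ListenAndServe(":80", nil))
      }()

      // gRPC endpoint on port 30051.
      lis, err := net.Listen("tcp", ":30051")
      if err != nil {
          log.Fatal(err)
      }
      s := grpc.NewServer()
      // The real demo registers its generated addr service here, e.g.
      // pb.RegisterAddrServer(s, &addrServer{})   // hypothetical names
      log.Fatal(s.Serve(lis))
  }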

Pane B
Connect to the server's addr service via the HTTP endpoint:

  out/client-http http://127.0.0.1:80/addr

You can see the IP address of the connected server instance.
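
client-http is essentially a small HTTP GET client. A rough, assumed equivalent in Go follows; the actual binary may differ, e.g. it probably requests in a loop so you can watch the reported address change.

  package main

  import (
      "fmt"
      "io"
      "net/http"
      "os"
  )

  func main() {
      url := os.Args[1] // e.g. http://127.0.0.1:80/addr
      resp, err := http.Get(url)
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()
      body, err := io.ReadAll(resp.Body)
      if err != nil {
          panic(err)
      }
      fmt.Printf("server says: %s\n", body)
  }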

Pane C
Connect to the server's addr service via the gRPC endpoint:

  out/client-grpc localhost:30051

You can see the IP address of the connected server instance.
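
client-grpc does the same over gRPC. Without the repository's generated stubs, only the connection setup can be sketched; the addr RPC call is indicated in a comment with hypothetical names (pb.NewAddrClient, GetAddr).

  package main

  import (
      "fmt"
      "os"

      "google.golang.org/grpc"
  )

  func main() {
      target := os.Args[1] // e.g. localhost:30051
      // Plaintext connection, matching the demo's insecure endpoint;
      // WithBlock makes Dial wait until the connection is established.
      conn, err := grpc.Dial(target, grpc.WithInsecure(), grpc.WithBlock())
      if err != nil {
          panic(err)
      }
      defer conn.Close()
      fmt.Println("connected to", target)
      // The real client calls the addr RPC through the generated stub, e.g.
      // reply, err := pb.NewAddrClient(conn).GetAddr(ctx, &pb.Empty{})   // hypothetical names
  }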

Demo in Kubernetes

Preparation

Create grpc-lb namespace for this demo:

  kubectl create ns grpc-lb

Build images:

  skaffold build

Run without gRPC load balancing

Run the server (only one pod) in the grpc-lb namespace:

  skaffold dev -n grpc-lb

The server exposes the same addr service on two TCP ports:

  • Port 30080 for HTTP
  • Port 30051 for gRPC

Run native client-http:

  out/client-http http://127.0.0.1:30080/addr

Run native client-grpc:

  out/client-grpc localhost:30051

You can see that the IP address of the single server pod is 10.1.0.8:
Fig 2. Native clients, and one server pod in K8s

Now, scale the server to 5 pods:

  kubectl scale -n grpc-lb --replicas=5 deployment/addr-server

You can see that only one of the gRPC server pods receives connections and keeps serving the same client-grpc instance. This is expected: gRPC multiplexes all requests over a single long-lived HTTP/2 connection, and a Kubernetes Service balances only at the connection (L4) level, so every request sticks to the pod that the connection was first routed to:

Fig 3. Native clients, and 5 server pods in K8s

Demo in Kubernetes + service mesh

Enable Linkerd

Install Linkerd 2 into the active Kubernetes cluster:

  # Install
  linkerd install | kubectl apply -f -
  # Check
  linkerd check

Delete the old namespace grpc-lb:

  kubectl delete ns grpc-lb

Create a new namespace grpc-lb with Linkerd injected:

  kubectl apply -f ns.yml
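
The README does not reproduce ns.yml, but namespace-level injection in Linkerd is done with the linkerd.io/inject annotation, so the manifest presumably looks roughly like this:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: grpc-lb
    annotations:
      linkerd.io/inject: enabled

With this annotation in place, every pod later deployed into grpc-lb automatically gets the Linkerd sidecar proxy.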

Check the data plane again, if necessary:

  linkerd -n grpc-lb check --proxy

Run with gRPC load balancing

Uncomment the following line in skaffold.yaml:

  - client-grpc.yml
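
That line sits in the list of Kubernetes manifests that Skaffold deploys (deploy.kubectl.manifests). The surrounding structure is roughly as follows; the other file name shown here is hypothetical, so check the actual skaffold.yaml:

  deploy:
    kubectl:
      manifests:
        - server.yml          # hypothetical name for the server manifest
        # - client-grpc.yml   # uncomment to also deploy the meshed gRPC client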

Run the server and the K8s version of client-grpc in the grpc-lb namespace:

  skaffold dev -n grpc-lb

Run the native client-grpc for comparison:

  out/client-grpc localhost:30051

Make sure everything is up and running, arranged as follows:
Fig 4. Native client. One mesh client and one server pod in K8s

Wait for all the clients and the server to become stable.

Now, scale the server to 5 pods:

  kubectl scale -n grpc-lb --replicas=5 deployment/addr-server

You can see that all gRPC server pods (10.1.0.71 through 10.1.0.75) take turns serving the same client instance within the grpc-lb namespace, because the Linkerd proxy balances gRPC at the request level rather than at the connection level.
Fig 5. Native client. One mesh client and 5 server pods in K8s

View the Linkerd dashboard:

  linkerd dashboard &

You can see the topology of the native client-grpc, the meshed client-grpc, and the meshed server:
Fig 6. Topology

You can also see that all gRPC server pods take turns serving the same client instance within the grpc-lb namespace.
Fig 7. gRPC load balancing effects