Project author: cookeem

Project description:
Kubernetes high availability deploy based on kubeadm, loadbalancer included (English/中文 for v1.15 - v1.20+)
Primary language: Smarty
Project address: git://github.com/cookeem/kubeadm-ha.git
Created: 2017-06-28T06:44:56Z
Project community: https://github.com/cookeem/kubeadm-ha

License: MIT License



Install a highly available kubernetes cluster with kubeadm (supports docker and containerd as the kubernetes container runtime)

Deployment node information

| hostname | IP address | comment |
| --- | --- | --- |
| k8s-master01 | 192.168.0.101 | kubernetes control-plane host master01 |
| k8s-master02 | 192.168.0.102 | kubernetes control-plane host master02 |
| k8s-master03 | 192.168.0.103 | kubernetes control-plane host master03 |
| k8s-vip | 192.168.0.100 | kubernetes floating IP, created by keepalived; if you use a public cloud, request this floating IP in advance |

```shell script
# Add hostname resolution on every node
cat << EOF >> /etc/hosts
192.168.0.100 k8s-vip
192.168.0.101 k8s-master01
192.168.0.102 k8s-master02
192.168.0.103 k8s-master03
EOF
```

Architecture notes

  • For demonstration purposes, only 3 highly available master nodes are deployed
  • keepalived and nginx act as the highly available load balancer; the load balancer configuration is generated with the dorycli command-line tool and deployed with docker-compose (a quick end-to-end check follows this list)
  • docker is the container runtime, and cri-dockerd is used as the cri-socket bridging docker and kubernetes
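
The resulting entry point of the cluster is the floating IP plus the nginx port (192.168.0.100:16443 here), forwarded to the kube-apiservers. A minimal sketch to confirm the whole path end to end, assuming the cluster has already been initialized as described later in this guide:

```shell script
# keepalived holds the VIP and nginx forwards port 16443 to the apiservers;
# after the cluster is initialized this should return the kubernetes version JSON
curl -k https://k8s-vip:16443/version
```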

Version information

```shell script
# OS version: Debian 11
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye

# docker version: 24.0.5
$ docker version
Client: Docker Engine - Community
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.6
 Git commit:        ced0996
 Built:             Fri Jul 21 20:35:45 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       a61e2b4
  Built:            Fri Jul 21 20:35:45 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.22
  GitCommit:        8165feabfdfe38c65b599c4993d227328c231fca
 runc:
  Version:          1.1.8
  GitCommit:        v1.1.8-0-g82f18fe
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

# cri-dockerd version: 0.3.4
$ cri-dockerd --version
cri-dockerd 0.3.4 (e88b1605)

# dorycli version: v1.7.0
$ dorycli version
dorycli version: v1.7.0
install dory-engine version: v2.7.0
install dory-console version: v2.7.0

# kubeadm version: v1.28.0
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.0", GitCommit:"855e7c48de7388eb330da0f8d9d2394ee818fb8d", GitTreeState:"clean", BuildDate:"2023-08-15T10:20:15Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}

# kubernetes version: v1.28.0
$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   35m   v1.28.0
k8s-master02   Ready    control-plane   31m   v1.28.0
k8s-master03   Ready    control-plane   30m   v1.28.0
```

Install docker

  • Install the docker service on all nodes

```shell script
# Install base packages
apt-get update
apt-get install -y sudo wget ca-certificates curl gnupg htop git jq tree
# Install docker
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
# Check the docker version
docker version
# Configure docker
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Restart the docker service
systemctl restart docker
systemctl status docker
# Verify that the docker service works
docker images
docker pull busybox
docker run --rm busybox uname -m
```

Install kubernetes

  • Install the kubernetes components on all nodes

```shell script
# Install the kubernetes components
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubeadm version
# List the container images kubernetes needs
kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
export PAUSE_IMAGE=$(kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers | grep pause)
# Note: the pause image is used in the cri-dockerd startup arguments
# Expected output: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
echo $PAUSE_IMAGE
# Install cri-dockerd, which connects kubernetes with docker
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz
tar zxvf cri-dockerd-0.3.4.amd64.tgz
cd cri-dockerd/
mkdir -p /usr/local/bin
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
# Create the cri-docker.socket unit file
cat << EOF > /etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# Create the cri-docker.service unit file
# Note: set the pause container image with --pod-infra-container-image=$PAUSE_IMAGE
cat << EOF > /etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=$PAUSE_IMAGE
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
# Start cri-dockerd
systemctl daemon-reload
systemctl enable --now cri-docker.socket
systemctl restart cri-docker
systemctl status cri-docker
# Pre-pull the required container images with kubeadm
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
docker images
```
  • Install dorycli and use it to generate and deploy the highly available load balancer

```shell script
# Install dorycli
cd /root
wget https://github.com/dory-engine/dorycli/releases/download/v1.7.0/dorycli-v1.7.0-linux-amd64.tgz
tar zxvf dorycli-v1.7.0-linux-amd64.tgz
chmod a+x dorycli
mv dorycli /usr/bin/
# Enable dorycli shell completion, so subcommands and arguments can be completed with TAB
dorycli completion bash -h
source <(dorycli completion bash)
dorycli completion bash > /etc/bash_completion.d/dorycli
# Print the high availability load balancer settings with dorycli and save them to kubeadm-ha.yaml
dorycli install ha print --language zh > kubeadm-ha.yaml
# Adjust the settings in kubeadm-ha.yaml to your environment
# The network interface name of each host can be found with:
ip address
# The settings used in this example are as follows; adjust them to your environment
cat kubeadm-ha.yaml
# The kubernetes version to install
version: "v1.28.0"
# The kubernetes image repository; if unset, the official default image repository is used
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"
# The keepalived image
keepalivedImage: "osixia/keepalived:release-2.1.5-dev"
# The nginx-lb image
nginxlbImage: "nginx:1.27.0-alpine"
# The floating IP address of the highly available kubernetes cluster, created by keepalived
virtualIp: 192.168.0.100
# The apiserver port of the highly available kubernetes cluster, mapped by nginx
virtualPort: 16443
# The hostname mapped to the floating IP address; set the mapping in /etc/hosts
virtualHostname: k8s-vip
# The kubernetes container runtime socket
# docker:     unix:///var/run/cri-dockerd.sock
# containerd: unix:///var/run/containerd/containerd.sock
# cri-o:      unix:///var/run/crio/crio.sock
criSocket: unix:///var/run/cri-dockerd.sock
# The pod subnet of the kubernetes cluster; if unset, the default pod subnet is used
podSubnet: "10.244.0.0/24"
# The service subnet of the kubernetes cluster; if unset, the default service subnet is used
serviceSubnet: "10.96.0.0/16"
# The keepalived authentication password; if unset, a random password is generated
keepAlivedAuthPass: "input_your_password"
# The keepalived virtual_router_id setting
keepAlivedVirtualRouterId: 101
# The kubernetes control-plane hosts; the number of highly available master nodes must be odd and at least 3
masterHosts:
  # The hostname of the master node; set the mapping in /etc/hosts
  - hostname: k8s-master01
    # The IP address of the master node
    ipAddress: 192.168.0.101
    # The network interface the master nodes use to reach each other, used for keepalived interface binding
    networkInterface: eth0
    # The keepalived election priority; higher values win, and each master node must use a different priority
    keepalivedPriority: 120
  # The hostname of the master node; set the mapping in /etc/hosts
  - hostname: k8s-master02
    # The IP address of the master node
    ipAddress: 192.168.0.102
    # The network interface the master nodes use to reach each other, used for keepalived interface binding
    networkInterface: eth0
    # The keepalived election priority; higher values win, and each master node must use a different priority
    keepalivedPriority: 110
  # The hostname of the master node; set the mapping in /etc/hosts
  - hostname: k8s-master03
    # The IP address of the master node
    ipAddress: 192.168.0.103
    # The network interface the master nodes use to reach each other, used for keepalived interface binding
    networkInterface: eth0
    # The keepalived election priority; higher values win, and each master node must use a different priority
    keepalivedPriority: 100

# Generate the load balancer configuration with dorycli and write it to the current directory
# The command prints a description of the generated files and how to start them
dorycli install ha script -o . -f kubeadm-ha.yaml --language zh
# Inspect the kubeadm-config.yaml generated by dorycli; kubeadm init uses it to initialize the cluster
# The file generated in this example:
cat kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - "k8s-vip"
  - "192.168.0.100"
  - "k8s-master01"
  - "192.168.0.101"
  - "k8s-master02"
  - "192.168.0.102"
  - "k8s-master03"
  - "192.168.0.103"
controlPlaneEndpoint: "192.168.0.100:16443"
networking:
  podSubnet: "10.244.0.0/24"
  serviceSubnet: "10.96.0.0/16"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock

# Set the directory for the kubernetes HA load balancer files on the master nodes
export LB_DIR=/data/k8s-lb
# Copy the load balancer files to k8s-master01
ssh k8s-master01 mkdir -p ${LB_DIR}
scp -r k8s-master01/nginx-lb k8s-master01/keepalived root@k8s-master01:${LB_DIR}
# Start the load balancer on k8s-master01
ssh k8s-master01 "cd ${LB_DIR}/keepalived/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
ssh k8s-master01 "cd ${LB_DIR}/nginx-lb/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
# Copy the load balancer files to k8s-master02
ssh k8s-master02 mkdir -p ${LB_DIR}
scp -r k8s-master02/nginx-lb k8s-master02/keepalived root@k8s-master02:${LB_DIR}
# Start the load balancer on k8s-master02
ssh k8s-master02 "cd ${LB_DIR}/keepalived/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
ssh k8s-master02 "cd ${LB_DIR}/nginx-lb/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
# Copy the load balancer files to k8s-master03
ssh k8s-master03 mkdir -p ${LB_DIR}
scp -r k8s-master03/nginx-lb k8s-master03/keepalived root@k8s-master03:${LB_DIR}
# Start the load balancer on k8s-master03
ssh k8s-master03 "cd ${LB_DIR}/keepalived/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
ssh k8s-master03 "cd ${LB_DIR}/nginx-lb/ && docker-compose stop && docker-compose rm -f && docker-compose up -d"
# Check on each master node that the floating IP has been created; normally it is bound to k8s-master01
# (an optional keepalived failover check follows this block)
ip address
```
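
Optionally, verify that keepalived actually fails the floating IP over. A minimal sketch, assuming the docker-compose deployments above; bring keepalived on k8s-master01 back up afterwards:

```shell script
# Stop keepalived on the current VIP holder and watch the VIP move to another master
ssh k8s-master01 "cd ${LB_DIR}/keepalived/ && docker-compose stop"
ssh k8s-master02 "ip address | grep 192.168.0.100 || true"
ssh k8s-master03 "ip address | grep 192.168.0.100 || true"
# Bring keepalived back; the VIP returns to the highest-priority node (k8s-master01)
ssh k8s-master01 "cd ${LB_DIR}/keepalived/ && docker-compose up -d"
```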
  • Initialize the highly available kubernetes cluster

```shell script
# Initialize the highly available cluster on k8s-master01 with the kubeadm-config.yaml settings
kubeadm init --config=kubeadm-config.yaml --upload-certs
# kubeadm init prints the following instructions; use them to join the other master nodes
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.100:16443 --token tgszyf.c9dicrflqy85juaf \
    --discovery-token-ca-cert-hash sha256:xxx \
    --control-plane --certificate-key xxx

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.100:16443 --token tgszyf.c9dicrflqy85juaf \
    --discovery-token-ca-cert-hash sha256:xxx

# Run the following on k8s-master02 and k8s-master03 to join them to the highly available kubernetes cluster
# Remember to append --cri-socket unix:///var/run/cri-dockerd.sock to the kubeadm join command
kubeadm join 192.168.0.100:16443 --token tgszyf.c9dicrflqy85juaf \
    --discovery-token-ca-cert-hash sha256:xxx \
    --control-plane --certificate-key xxx --cri-socket unix:///var/run/cri-dockerd.sock
# Configure kubectl access to the kubernetes cluster on all master nodes
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Enable kubectl shell completion on all master nodes, so subcommands and arguments can be completed with TAB
kubectl completion -h
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
# Install the cilium network addon on k8s-master01
wget https://github.com/cilium/cilium-cli/releases/download/v0.15.6/cilium-linux-amd64.tar.gz
tar zxvf cilium-linux-amd64.tar.gz
mv cilium /usr/local/bin/
cilium install --version 1.14.0 --set cni.chainingMode=portmap
# Allow scheduling pods on all master nodes
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
# Check that all pods are running
kubectl get pods -A -o wide
NAMESPACE     NAME                                    READY   STATUS    RESTARTS      AGE   IP              NODE           NOMINATED NODE   READINESS GATES
kube-system   cilium-mwvsr                            1/1     Running   0             21m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   cilium-operator-b4dfbf784-zgr7v         1/1     Running   0             21m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   cilium-v27l2                            1/1     Running   0             21m   192.168.0.103   k8s-master03   <none>           <none>
kube-system   cilium-zbcdj                            1/1     Running   0             21m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   coredns-6554b8b87f-kp7tn                1/1     Running   0             30m   10.0.2.231      k8s-master03   <none>           <none>
kube-system   coredns-6554b8b87f-zlhgx                1/1     Running   0             30m   10.0.2.197      k8s-master03   <none>           <none>
kube-system   etcd-k8s-master01                       1/1     Running   0             30m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   etcd-k8s-master02                       1/1     Running   0             26m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   etcd-k8s-master03                       1/1     Running   0             25m   192.168.0.103   k8s-master03   <none>           <none>
kube-system   kube-apiserver-k8s-master01             1/1     Running   0             30m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   kube-apiserver-k8s-master02             1/1     Running   0             26m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   kube-apiserver-k8s-master03             1/1     Running   1 (25m ago)   25m   192.168.0.103   k8s-master03   <none>           <none>
kube-system   kube-controller-manager-k8s-master01    1/1     Running   1 (26m ago)   30m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   kube-controller-manager-k8s-master02    1/1     Running   0             26m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   kube-controller-manager-k8s-master03    1/1     Running   0             24m   192.168.0.103   k8s-master03   <none>           <none>
kube-system   kube-proxy-gr2pt                        1/1     Running   0             26m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   kube-proxy-rkb9b                        1/1     Running   0             30m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   kube-proxy-rvmv4                        1/1     Running   0             25m   192.168.0.103   k8s-master03   <none>           <none>
kube-system   kube-scheduler-k8s-master01             1/1     Running   1 (26m ago)   30m   192.168.0.101   k8s-master01   <none>           <none>
kube-system   kube-scheduler-k8s-master02             1/1     Running   0             26m   192.168.0.102   k8s-master02   <none>           <none>
kube-system   kube-scheduler-k8s-master03             1/1     Running   0             23m   192.168.0.103   k8s-master03   <none>           <none>
# Check that all nodes are ready
kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   31m   v1.28.0
k8s-master02   Ready    control-plane   27m   v1.28.0
k8s-master03   Ready    control-plane   26m   v1.28.0
# Test deploying an application to the kubernetes cluster
# Deploy an nginx application and expose it on nodePort 31000
kubectl run nginx --image=nginx:1.23.1-alpine --image-pull-policy=IfNotPresent --port=80 -l=app=nginx
kubectl create service nodeport nginx --tcp=80:80 --node-port=31000
curl k8s-vip:31000
```

[Optional] Install the kubernetes-dashboard management UI

Adjust the kubernetes-dashboard service to expose it on a nodePort
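
The service patch below only re-exposes an existing installation; it assumes kubernetes-dashboard itself is already deployed. A minimal install sketch using the upstream recommended manifests (v2.7.0 is an assumption; pick a release compatible with your kubernetes version):

```shell script
# Install kubernetes-dashboard from the upstream recommended manifests
# (the v2.7.0 release is an assumption; adjust as needed)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl -n kubernetes-dashboard get pods
```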

```shell script
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
EOF
```

Create the admin serviceaccount

```shell script
kubectl create serviceaccount -n kube-system admin-user --dry-run=client -o yaml | kubectl apply -f -
```

Create the admin clusterrolebinding

```shell script
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user --dry-run=client -o yaml | kubectl apply -f -
```

Manually create the serviceaccount secret

```shell script
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-secret
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF
```

Get the kubernetes admin token

```shell script
kubectl -n kube-system get secret admin-user-secret -o jsonpath='{ .data.token }' | base64 -d
```

Open kubernetes-dashboard in a browser: https://k8s-vip:30000

Log in to kubernetes-dashboard with the kubernetes admin token
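
Before switching to the browser, you can confirm that the token actually authenticates against the apiserver. A minimal sketch (the TOKEN variable name is illustrative):

```shell script
# Verify the admin-user token against the apiserver through the HA endpoint
TOKEN=$(kubectl -n kube-system get secret admin-user-secret -o jsonpath='{.data.token}' | base64 -d)
curl -k -H "Authorization: Bearer ${TOKEN}" https://k8s-vip:16443/version
```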

[Optional] Install the traefik ingress controller

  • To use the kubernetes ingress (https://kubernetes.io/docs/concepts/services-networking/ingress/) feature, an ingress controller must be installed; traefik is recommended
  • To learn more, read the official documentation: https://doc.traefik.io/traefik/
  • Deploy traefik on all kubernetes master nodes (a sample Ingress follows this block):

```shell script
# Pull the traefik helm repo
helm repo add traefik https://traefik.github.io/charts
helm fetch traefik/traefik --untar
# Deploy traefik as a daemonset
cat << EOF > traefik.yaml
deployment:
  kind: DaemonSet
image:
  name: traefik
  tag: v2.10.5
ports:
  web:
    hostPort: 80
  websecure:
    hostPort: 443
service:
  type: ClusterIP
EOF
# Install traefik
kubectl create namespace traefik --dry-run=client -o yaml | kubectl apply -f -
helm install -n traefik traefik traefik/ -f traefik.yaml
# Check the installation
helm -n traefik list
kubectl -n traefik get pods -o wide
kubectl -n traefik get services -o wide
# Verify the traefik installation; an output of "404 page not found" means success
curl k8s-vip
curl -k https://k8s-vip
```
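
To see traefik route real Ingress traffic, you can publish the nginx test application deployed earlier through an Ingress. A minimal sketch: the hostname nginx.example.com is illustrative, and it assumes the chart's default IngressClass is in effect:

```shell script
# Route a test hostname to the nginx service created during the cluster test
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
# traefik listens on hostPort 80, so send the test Host header to the VIP
curl -H "Host: nginx.example.com" k8s-vip
```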

[Optional] Install the metrics-server performance data collector

```shell script
# Pull the image
docker pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
docker tag registry.aliyuncs.com/google_containers/metrics-server:v0.6.1 k8s.gcr.io/metrics-server/metrics-server:v0.6.1
# Fetch the metrics-server installation yaml
curl -O -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
# Add the --kubelet-insecure-tls argument
sed -i 's/- args:/- args:\n        - --kubelet-insecure-tls/g' components.yaml
# Install metrics-server
kubectl apply -f components.yaml
# Wait for metrics-server to become ready
kubectl -n kube-system get pods -l=k8s-app=metrics-server
# View node metrics
kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   146m         7%     2284Mi          59%
k8s-master02   123m         6%     2283Mi          59%
k8s-master03   114m         5%     2180Mi          57%
```

  • After metrics-server is installed, kubernetes-dashboard can also display performance data

![](images/kubernetes-dashboard.png)

[Optional] Install the istio service mesh

  • To use the hybrid canary release capability of a service mesh, deploy istio
  • To learn more, read the official istio documentation: https://istio.io/latest/docs/

```shell script
# Install istioctl; client downloads: https://github.com/istio/istio/releases/tag/1.18.2
# Download and install istioctl
wget https://github.com/istio/istio/releases/download/1.18.2/istioctl-1.18.2-linux-amd64.tar.gz
tar zxvf istioctl-1.18.2-linux-amd64.tar.gz
mv istioctl /usr/bin/
# Confirm the istioctl version
istioctl version
# Deploy istio to kubernetes with istioctl
istioctl install --set profile=demo \
  --set values.gateways.istio-ingressgateway.type=ClusterIP \
  --set values.global.imagePullPolicy=IfNotPresent \
  --set values.global.proxy_init.resources.limits.cpu=100m \
  --set values.global.proxy_init.resources.limits.memory=100Mi \
  --set values.global.proxy.resources.limits.cpu=100m \
  --set values.global.proxy.resources.limits.memory=100Mi
# Check the istio deployment
kubectl -n istio-system get pods,svc
```
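
As a quick smoke test, you can enable automatic sidecar injection on a namespace and check that new pods come up with the istio-proxy container. A minimal sketch using the default namespace (the istio-test pod name is illustrative):

```shell script
# Enable automatic sidecar injection for the default namespace
kubectl label namespace default istio-injection=enabled --overwrite
# A new pod should now start with the istio-proxy sidecar (READY shows 2/2)
kubectl run istio-test --image=nginx:1.23.1-alpine --image-pull-policy=IfNotPresent
kubectl get pod istio-test
kubectl delete pod istio-test
```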

[Optional] Dory-Engine, a very simple open-source k8s remote development environment

🚀🚀🚀 Quickly build a remote development environment with k8s (https://www.bilibili.com/video/BV1Zw4m1r7aw/)

  • Dory-Engine is a very simple open-source k8s remote development environment: developers can compile, package, and deploy the programs they write from source code into all kinds of k8s environments without having to learn, write, or configure anything complicated.
  1. Nothing to learn: no complex kubernetes internals to study; deploy applications within 5 minutes
  2. Nothing to configure: no code repository, image repository, or kubernetes connection settings to configure
  3. Nothing to write: no kubernetes deployment manifests or pipeline scripts to write