Project author: fl64

Project description:
Local k8s testing infra
Primary language: Jinja
Project URL: git://github.com/fl64/localk8s.git
Created: 2020-06-22T12:58:29Z
Project community: https://github.com/fl64/localk8s

License:



k8s lab with GitOps via ArgoCD

k8s cluster (1 master + X nodes)

Current config:

  • k8s version 1.21.0
  • master: 2 GB RAM
  • worker: 3 GB RAM
  • network: Cilium

Info

This lab is designed for locally testing different k8s features and deployments with ArgoCD.
All you need is Vagrant, VirtualBox, and some time to deploy it.

In its basic configuration the k8s cluster consists of two virtual machines, master and node. If you want more nodes, change the default_node_count variable or create the lab with the K8S_NODE_COUNT=X vagrant up command, as shown below.
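
For example, to bring up a lab with three worker nodes (using the K8S_NODE_COUNT variable mentioned above):

# 1 master + 3 worker nodes
K8S_NODE_COUNT=3 vagrant up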

Once the cluster is installed and configured, Argo comes in and installs all the applications from the argo/app directory of the current repository.
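
A quick way to see what Argo has picked up from argo/app (using the argocd namespace referenced later in this README):

# list the Argo CD applications created from the repository
kubectl get applications -n argocd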

localk8s

Requirements:

  • virtualbox (tested on 6.1.12)
  • vagrant (tested on 2.2.16)
  • ansible (tested on 4.2.0)
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
ansible-galaxy collection install community.kubernetes
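
To double-check that all three collections are present (not part of the original setup, just a quick verification):

ansible-galaxy collection list | grep -E 'community.general|ansible.posix|community.kubernetes'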

Recommendations

  • envrc
  • k9s

Current setup

  • k8s
  • ArgoCD + ArgoCD ApplicationSet
  • MetalLB
  • NGINX Ingress
  • Prometheus
  • Grafana
  • nfs-client-provisioner (storageClass: nfs-client)
  • vector-agent
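
Once everything is synced you can spot-check the networking and storage pieces; the service name and namespace below are the ones used later in this README:

# MetalLB should have assigned an external IP to the ingress controller
kubectl get svc -n ingress-nginx ingress-nginx-controller
# the NFS provisioner registers the nfs-client storage class
kubectl get storageclass nfs-client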

How to

# up VMs and run ansible playbook
vagrant up
# run playbooks for specified ansible tags
K8S_TAGS=common vagrant up
# ssh to VMs
vagrant ssh master
vagrant ssh node100
# etc...
# how to get to a UI with `k port-forward`
## how to get into the Argo UI
k port-forward service/argocd-server -n argocd 8080:80
# then browse to http://localhost:8080
## how to get into the Prometheus UI
k port-forward service/prometheus-server -n prometheus 9090:80
# then browse to http://localhost:9090
## how to get into the Grafana UI
k port-forward service/grafana -n grafana 3000:80
# then browse to http://localhost:3000
# Another way with ingress and /etc/hosts
## Add local records to /etc/hosts
sudo bash -c "cp /etc/hosts /etc/hosts.backup && export HOSTS_PATCH=\"$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath=\"{.status.loadBalancer.ingress[0].ip}\") grafana.k8s.local argo.k8s.local prom.k8s.local\"; grep -qF \"\${HOSTS_PATCH}\" -- /etc/hosts || echo \"\${HOSTS_PATCH}\" >> /etc/hosts"
## Browse
# https://grafana.k8s.local
# https://argo.k8s.local
# https://prom.k8s.local
## default login and pass everywhere: admin/password
# clean up
vagrant destroy -f
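
A quick sanity check after vagrant up finishes (a sketch; it assumes kubectl is configured for the vagrant user on the master, as kubeadm-based labs usually do):

# all nodes joined and Ready
vagrant ssh master -c "kubectl get nodes -o wide"
# Argo-managed workloads are starting
vagrant ssh master -c "kubectl get pods -A"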

Some useful stuff

Bash completions:

cat <<EOF >> ~/.bashrc
source <(kubectl completion bash)
source <(kubeadm completion bash)
alias k=kubectl
complete -F __start_kubectl k
export do="--dry-run=client -oyaml"
EOF
source ~/.bashrc
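
With those aliases in place, the $do variable makes it quick to generate manifests; the resource name and output file below are just examples:

# write a Pod manifest without creating anything on the cluster
k run web --image=nginx $do > pod.yaml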

Vim settings:

cat <<EOF >> ~/.vimrc
set number
set et
set sw=2 ts=2 sts=2
EOF

Troubleshoot traffic issues with Hubble

k port-forward service/hubble-relay -n kube-system 4245:80 &
hubble observe --verdict DROPPED -f
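
Before watching flows, you can confirm the CLI reaches the relay over the forwarded port (the hubble CLI talks to localhost:4245 by default):

hubble status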

krew plugin manager

kubectl krew install access-matrix
kubectl krew install view-utilization
kubectl krew install view-webhook
kubectl krew install example
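
Once installed, the plugins are invoked through kubectl; the namespace below is just an example:

# show which verbs the current user may run against resources in the argocd namespace
kubectl access-matrix -n argocd
# compare requested CPU/memory against cluster capacity
kubectl view-utilization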

Links: