Project author: warm-metal

Project description:
A kubectl plugin to support development activities in k8s clusters

Language: Go
Repository: git://github.com/warm-metal/kubectl-dev.git
Created: 2021-01-01T12:11:44Z
Community: https://github.com/warm-metal/kubectl-dev

License: Apache License 2.0

# kubectl-dev

kubectl-dev is a kubectl plugin that supports image building, workload debugging,
and CliApp, especially in a single-node minikube cluster used as a replacement for Docker Desktop.

Currently, the plugin works only on containerd.

## Features

### Build image or binary

#### Docker build conformance

The `kubectl dev build` command is fully compatible with the `docker build` command.
`no-cache`, `build-arg`, and `target` are also supported.

```shell script
# Build image foo:bar using the Dockerfile in the current directory.
kubectl dev build -t foo:bar

# Build image foo:bar using foobar.dockerfile as the Dockerfile in directory ~/image.
kubectl dev build -t foo:bar -f foobar.dockerfile ~/image
```

The builder also supports pushing private images and insecure registries.

```shell script
# Log in to an image registry, as with "docker login".
kubectl dev login
# Build an image and push it to an insecure registry.
kubectl dev build -t foo:bar --push --insecure
```

#### Build artifacts

The build command can also copy artifacts from a complicated context to a local directory.

```shell script
# Build the stage mac-cli and copy the generated artifacts to the local directory _output.
kubectl dev build -f hack/dev/Dockerfile --local _output/ --target mac-cli
```

#### Auto-generate image name for testing

The build command automatically generates an image name if neither `-t(--tag)` nor `--local` is provided.
The default name follows the pattern `build.local/x/%s:v%d`: the `%s` is replaced by the build context directory,
and the `%d` by an integer incremented by 1 on each build.
Users can change the default pattern by setting `--tag-pattern`.
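As a sketch of how the pattern substitution described above plays out (the pattern value here is purely illustrative):

```shell script
# Hypothetical custom pattern: tag builds under a local dev registry prefix.
# A build in directory "api" would then produce e.g. registry.local/dev/api:v1.
kubectl dev build --tag-pattern 'registry.local/dev/%s:v%d'
```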
#### Apply k8s manifests after build

You can specify a k8s manifest file through `--manifest` to apply after the build.
Its image configuration will be updated with the built image name.
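For example, assuming a workload manifest named deploy.yaml (the filename is illustrative):

```shell script
# Build the image, then apply deploy.yaml with its image field
# rewritten to the freshly built image name.
kubectl dev build -t foo:bar --manifest deploy.yaml
```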
#### Remember arguments for replaying

Once you've built an image in some directory, all command-line arguments are saved.
You can rebuild the same image in the same directory with just the `kubectl dev build` command.
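Based on that behavior, a typical workflow looks like this:

```shell script
# First build: pass the full argument list.
kubectl dev build -t foo:bar -f foobar.dockerfile
# Later builds in the same directory replay the saved arguments.
kubectl dev build
```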
### Debug workloads

If an app fails, it may crash, wait for dependencies and stop responding, fail on some libraries
(say, missing .so files), or get wrongly mounted ConfigMaps or Secrets.
K8s provides nothing to figure these out. The only thing that may help is the logs your app printed.

The `debug` command provides a new way to start the workload. It creates an identical Pod in the same namespace,
except for the image of the target container, and opens a bash session after the Pod starts.
The target image is mounted at `/app-root`, so you can check the original image context
or debug the binary in the opened session.
Deployment, StatefulSet, DaemonSet, ReplicaSet, Job, CronJob, and Pod are all supported.

```shell script
# Debug a running or failed workload. Running the same command again opens a new session to the same debugger.
kubectl dev debug -n cliapp-system deploy buildkitd

# The debugger Pod will not inherit environment variables from the original workload.
kubectl dev debug -n cliapp-system deploy buildkitd --with-original-envs=false

# Specify the container name if the Pod has more than one container; otherwise an error arises.
kubectl dev debug -n cliapp-system deploy buildkitd -c buildkitd

# Pass the local HTTP_PROXY to the debugger Pod.
kubectl dev debug -n cliapp-system deploy buildkitd --use-proxy

# Debug a Pod with a new versioned image.
kubectl dev debug pod foo --image bar:new-version

# Debug an image.
kubectl dev debug --image foo:latest
```

The default debugger distro is alpine; ubuntu is another option.
You can also choose bash or zsh as your preferred shell in debuggers via the option `--shell`.

```shell script
kubectl dev debug -n cliapp-system deploy buildkitd --with-original-envs=false --shell zsh --distro ubuntu
```

### Use CliApp

CliApp provides the capability of running CLI commands, which are installed in the cluster, from a local terminal.
Besides installing a CliApp object in the cluster, a shortcut with the same name is created in the directory **~/.cliapps/**
and linked into **/usr/local/bin/**.

```shell script
# Install the cliapp crictl via the image docker.io/warmmetal/app-crictl:v0.1.0.
# The last argument "crictl" means the command crictl is executed in the Pod once the app runs.
# If omitted, the command with the same name as the app is started instead.
sudo -E kubectl dev app install --name crictl \
  --image docker.io/warmmetal/app-crictl:v0.1.0 \
  --env CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/containerd/containerd.sock \
  --hostpath /var/run/containerd/containerd.sock --use-proxy \
  crictl

# ❯ command -v crictl
# /usr/local/bin/crictl
# ❯ ls -l /usr/local/bin/crictl
# lrwxr-xr-x 1 root wheel 25 Mar 14 18:57 /usr/local/bin/crictl -> /Users/kh/.cliapps/crictl
```

You can also install a CliApp via a Dockerfile; the builtin buildkit will help build the necessary image.

```shell script
sudo -E kubectl dev app install --name ctr \
  --dockerfile https://raw.githubusercontent.com/warm-metal/cliapps/master/ctr/Dockerfile \
  --env CONTAINERD_NAMESPACE=k8s.io \
  --hostpath /var/run/containerd/containerd.sock --use-proxy
```
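Once installed, the shortcut behaves like a local binary. For example, assuming the app wraps the stock ctr CLI:

```shell script
# List images in the k8s.io namespace through the cluster-side ctr.
ctr images ls
```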

## Installation

### From Homebrew

The Homebrew formula is available for macOS.

```shell script
brew install warm-metal/rc/kubectl-dev
```

### From the pre-built binary

You can also download the pre-built binary.

```shell script
# For macOS, the administrator privilege is required to save kubectl-dev to /usr/local/bin. Run
sudo sh -c 'curl -skL https://github.com/warm-metal/kubectl-dev/releases/download/v0.4.0/kubectl-dev.darwin-amd64.tpxz | tar -C /usr/local/bin/ -xpf -'

# For Linux, run
sudo sh -c 'curl -skL https://github.com/warm-metal/kubectl-dev/releases/download/v0.4.0/kubectl-dev.linux-amd64.tpxz | tar -C /usr/local/bin/ -xpf -'
```
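Either way, kubectl discovers any `kubectl-*` binary on your PATH as a plugin, so you can verify the installation directly:

```shell script
# The plugin should show up in kubectl's plugin list ...
kubectl plugin list
# ... and respond as the dev subcommand.
kubectl dev --help
```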

## Initialization

After installation, run one of the commands below to install the dependencies.

```shell script
# For minikube clusters
kubectl dev prepare --minikube

# Inherit the current HTTP_PROXY in the buildkit workspace.
# If you are in mainland China, this flag can accelerate image and dependency pulling while building.
kubectl dev prepare --minikube --use-proxy

# Install cliapp and set environment variables for buildkit.
kubectl dev prepare --builder-env GOPROXY='https://goproxy.cn|https://goproxy.io|direct'

# For containerd
kubectl dev prepare
```
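The debug examples above suggest the dependencies, such as buildkitd, land in the cliapp-system namespace, so a quick health check might be:

```shell script
# Expect the buildkitd (and cliapp) Pods to reach the Running state.
kubectl get pods -n cliapp-system
```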

## Build from Source

```shell script
# For macOS, run
kubectl dev build -f hack/dev/Dockerfile --local _output/ --target mac-cli

# For Linux, run
kubectl dev build -f hack/dev/Dockerfile --local _output/ --target linux-cli
```

Or, use the Makefile instead.

## Prepare a minikube cluster for program development

`kubectl-dev` offers you capabilities to build images and debug them in k8s clusters directly.
You don't need to install many runtimes, and many more versions of them, on your laptop:
no runtime switching and management, and no out-of-date dependency garbage. All of these are replaced by a k8s cluster.

Owning a local minikube cluster on your laptop is not as easy as running `minikube start`. It can be a little tricky.

### Create (start for the first time) a cluster w/ containerd

```shell script
mini_create() {
  PROFILE=minikube
  if [[ $# -gt 0 ]]; then
    PROFILE=$1
  fi
  minikube start -p $PROFILE \
    --service-cluster-ip-range="10.24.0.0/16" \
    --container-runtime=containerd \
    --memory=8g \
    --cpus=4 \
    --disk-size=100g
}
```
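Called with no argument, the function creates the default `minikube` profile; pass a profile name to create a separate cluster:

```shell script
mini_create              # creates the "minikube" profile
mini_create dev-cluster  # creates a profile named "dev-cluster"
```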

### Start an existing cluster

After the cluster is created, we must disable both `--preload` and `--cache-images`. Otherwise,
with `--preload` enabled, the containerd content store would be overridden by a pre-downloaded tarball;
with `--cache-images` enabled, minikube always tries to save images to local tarballs.

```shell script
mini_start() {
  PROFILE=minikube
  if [[ $# -gt 0 ]]; then
    PROFILE=$1
  fi

  minikube start -p $PROFILE --preload=false --cache-images=false
}
```
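It follows the same convention as mini_create:

```shell script
mini_start               # restarts the "minikube" profile without preloading
mini_start dev-cluster   # restarts the "dev-cluster" profile
```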

### Time Sync

With hyperkit, the guest can't sync its datetime with the local host. The easiest way to keep time in sync is using NTP.
But if you are on a poor network or behind a powerful firewall, the default NTP settings are useless.

To fix it, you can try another wheel we built, kube-systemd.
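As a sketch of what the fix boils down to inside the guest (assuming it runs systemd-timesyncd; the server address is a placeholder for one you can actually reach):

```shell script
# Point systemd-timesyncd at a reachable NTP server.
cat <<EOF | sudo tee /etc/systemd/timesyncd.conf
[Time]
NTP=ntp.example.org
EOF
sudo systemctl restart systemd-timesyncd
```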