Project author: idev4u

Project description:
Concourse CI Kube Deployment
Language:
Project URL: git://github.com/idev4u/concourse-ci-kube.git
Created: 2017-09-21T18:14:25Z
Project community: https://github.com/idev4u/concourse-ci-kube

License:


The missing Kubernetes deployment for Concourse-Ci

Goal

This project was built to bring Concourse-Ci to the IBM Cloud Container Service, which is based on Kubernetes.
If anyone else needs to deploy Concourse-Ci to Kubernetes, he/she should also benefit from this project.

Prerequisites

The first requirement for following these steps is a running Kubernetes cluster on IBM Bluemix Containers.
How to set one up is described at https://console.bluemix.net/containers-kubernetes/launch?env_id=ibm:yp:eu-de
The second requirement is basic knowledge of Kubernetes.
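
Once the cluster exists, kubectl must point at it. Here is a minimal sketch, assuming the Bluemix CLI with the container-service plugin is installed and the cluster is named concourse-ci (the name used later in this README):

  # Fetch the cluster's kubeconfig; the command prints an export line.
  bx cs cluster-config concourse-ci
  # Paste the printed export (the path below is illustrative):
  export KUBECONFIG=/path/to/kube-config-par01-concourse-ci.yml
  # Sanity check: the worker node should show up.
  kubectl get nodes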

Install

  1. Grab the project

    git clone git@github.com:idev4u/concourse-ci-kube.git
    cd concourse-ci-kube
  2. As described in the Concourse documentation, generate the keys for the TSA and the ATC

    mkdir -p keys/web keys/worker
    ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
    ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
    ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
    cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
    cp ./keys/web/tsa_host_key.pub ./keys/worker
  3. Generate the secret volumes for your deployments

    This generates the secret volume for the web deployment:

    kubectl create secret generic concourse-web-keys \
      --from-file=./keys/web/authorized_worker_keys \
      --from-file=./keys/web/session_signing_key \
      --from-file=./keys/web/session_signing_key.pub \
      --from-file=./keys/web/tsa_host_key \
      --from-file=./keys/web/tsa_host_key.pub

    If you want to verify the generated secrets volume, use this command:

    kubectl get secret concourse-web-keys -o yaml

    Here is the command that generates the secret volume for the worker deployment:

    kubectl create secret generic concourse-worker-keys \
      --from-file=./keys/worker/tsa_host_key.pub \
      --from-file=./keys/worker/worker_key \
      --from-file=./keys/worker/worker_key.pub

    And, again, the corresponding verification command:

    kubectl get secret concourse-worker-keys -o yaml
  4. Deploy all three components

    This command will deploy the database for your Concourse-Ci:

    kubectl apply -f concourse-db-deployment.yaml && kubectl apply -f concourse-db-service.yaml
    deployment "concourse-db" created
    service "concourse-db" created

    Before you deploy the web UI, change the value of the external IP in the concourse-web-deployment.yaml file (an illustrative excerpt of the edited section follows this list):

    - name: CONCOURSE_EXTERNAL_URL
      value: ${kubernetes_node_public_ip}

    This command will deploy the web UI of your Concourse-Ci:

    kubectl apply -f concourse-web-deployment.yaml && kubectl apply -f concourse-web-service.yaml
    deployment "concourse-web" created
    service "concourse-web" created

    This command will deploy one worker for your Concourse-Ci:

    kubectl apply -f concourse-worker-deployment.yaml && kubectl apply -f concourse-worker-service.yaml
    deployment "concourse-worker" created
    service "concourse-worker" created
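
For orientation, here is an illustrative sketch of the edited env section of concourse-web-deployment.yaml referenced in step 4; the surrounding container spec is abbreviated, and 1.2.3.4 stands in for your node's public IP (see below for how to find it):

      env:
      # As in the README, the placeholder is replaced by the bare public IP;
      # the web UI is then reached at http://1.2.3.4:32080 (32080 being the
      # NodePort exposed by concourse-web-service.yaml).
      - name: CONCOURSE_EXTERNAL_URL
        value: 1.2.3.4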

Concourse Pipeline

After the Concourse-Ci deployment has finished successfully, you can log in to Concourse-Ci for the first time. Open the URL http://${kubernetes_node_public_ip}:32080 in your favorite browser and log in with the user concourse and the password changeme. If you have changed these values in the deployment manifests, use yours. Once this works and you have downloaded the fly tool, you can push your first pipeline.

  fly -t kube login -c http://${kubernetes_node_public_ip}:32080
  fly -t kube set-pipeline -p kube-pipe -c pipeline/pipeline.yml
  fly -t kube expose-pipeline -p kube-pipe
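
The repository ships the pipeline definition in pipeline/pipeline.yml. For a rough idea of what such a file contains, here is a minimal, purely illustrative Concourse pipeline (job and task names are invented, not the repository's):

  # Illustrative sketch, not the repository's actual pipeline/pipeline.yml.
  jobs:
  - name: hello-world
    plan:
    - task: say-hello
      config:
        platform: linux
        image_resource:
          type: docker-image
          source: {repository: alpine}
        run:
          path: echo
          args: ["Hello from Concourse on kube!"]

Note that a freshly set pipeline starts paused: fly -t kube unpause-pipeline -p kube-pipe starts it, while expose-pipeline only makes it publicly visible.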

Some useful commands to be successful

public IP of your node

How do you find the public IP of your Kubernetes node on the Bluemix Container platform? This is the command for getting this information:

  bx cs workers concourse-ci
  ID                                                 Public IP   Private IP   Machine Type   State    Status
  kube-par01-pa747d8ee7d506411aba3f992fc3d3c7a1-w1   x.x.x.x     10.x.x.x     free           normal   Ready
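
If you want just the IP, e.g. for scripting, here is a small sketch based on the column layout above (it assumes a one-line header and picks column 2):

  bx cs workers concourse-ci | awk 'NR>1 {print $2}'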

kube context

If you are not sure that you are targeting the correct context, this command helps you:

  kubectl config current-context
  concourse-ci
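
If it points somewhere else, you can switch to the cluster's context (here assumed to be named concourse-ci, as in the output above):

  kubectl config use-context concourse-ci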

This command provides an overview of your pods; you should have 3!

  kubectl get pods -o wide
  NAME                                READY   STATUS    RESTARTS   AGE   IP          NODE
  concourse-db-59888000-1fcv0         1/1     Running   0          6h    172.x.x.x   10.x.x.a
  concourse-web-2821356835-npkvb      1/1     Running   0          5h    172.x.x.x   10.x.x.a
  concourse-worker-1074565060-nkrm9   1/1     Running   0          13m   172.x.x.x   10.x.x.a

This command provides an overview of your services; you should also have 3!

  kubectl get svc -o wide
  NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
  concourse-db       None         <none>        55555/TCP        28d   service=concourse-db
  concourse-web      None         <nodes>       8080:32080/TCP   10m   service=concourse-web
  concourse-worker   None         <none>        55555/TCP        12m   service=concourse-worker

Troubleshooting

TSA Connection

Problem description:

I had problems with the TSA connection from inside the worker container.

Solution:

I fixed it by replacing the service selector name with the endpoint IP.

Getting the endpoint of the concourse-web service:

  kubectl get endpoints | grep web
  concourse-web   1.1.1.80:8080   17h

And here is the part of concourse-worker-deployment.yaml that has to change:

  ...
  # Use the endpoint IP, because the DNS lookup points to the cluster IP,
  # and that IP is not reachable from inside the container.
  # value: concourse-web
  value: 1.1.1.80
  ...
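
If you prefer to look the endpoint IP up non-interactively (for example before editing the worker manifest), this sketch uses kubectl's jsonpath output and assumes the service exposes a single endpoint address:

  kubectl get endpoints concourse-web -o jsonpath='{.subsets[0].addresses[0].ip}'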