Ansible play for creating a bare-metal Kubernetes cluster on CentOS 7 from scratch.
A collection of Ansible assets to set up a single-master, multi-minion cluster on bare-metal (or virtualized) CentOS 7 environments. Running this play, you will end up with a simple but fully working Kubernetes cluster. I have created this mainly for my own entertainment, so your mileage may vary.
The play is designed to be fairly modular and consists of a number of roles that can be reused in your own plays.
It started out from the tutorial at https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-1-10-cluster-using-kubeadm-on-centos-7.
The play is designed to be as simple as possible, and should be considered only a starting point. Do yourself a favour and do not use this to set up production clusters.
All features are optional and can be toggled on/off depending on your requirements.
Supported CNI/networking modules:
flannel (currently the only supported overlay plugin)

Cluster features:
MetalLB load balancing
GlusterFS storage with Heketi volume provisioning
Configure the cluster in group_vars/kubernetes-cluster (copied from group_vars/kubernetes-cluster.example, see below), then use ansible-playbook to execute kubernetes-cluster.yml. The tasks require super user privileges on the remote hosts.

If you want to deinstall Kubernetes from your nodes completely, you can do so by supplying the variable kubernetes_cleanup to the play's execution, e.g. like so:
$ ansible-playbook -u root kubernetes-cluster.yml -e kubernetes_cleanup=true
This will revert (almost) all changes done by the play and deinstall all Kubernetes components from the master and worker nodes. If you want the nodes to be rebooted after clean-up, also pass -e reboot=true as an argument.
If you also want to revert what the GlusterFS configuration did (i.e. clean up all storage), specify -e cleanup_storage=yes as well. Be careful! This will destroy all data and partitions on the block device you specified as kubernetes.features.glusterfs.device.
Attention: By default, the play will continue with a re-installation after running the clean-up tasks. If you just want to clean up, tell Ansible to only execute tasks tagged cleanup, e.g. by specifying --tags cleanup on the command line.
Before you begin, the following hostgroups must be defined in your Ansible inventory:
# K8S - master node
[kubernetes-master]
k8s-master.example.com
# K8S - minion nodes
[kubernetes-worker]
k8s-node-001.example.com
k8s-node-002.example.com
k8s-node-003.example.com
# ... add more worker nodes to your liking
# K8S - complete cluster
[kubernetes-cluster:children]
kubernetes-master
kubernetes-worker
Of course, all the nodes should already be bootstrapped (i.e. user account(s) set up, SSH keys deployed, etc.). These tasks are all beyond the scope of this play.
All nodes configured above should be able to fetch stuff from the Internet via HTTPS, at least through an HTTP proxy server. Offline installations are not supported.
Additionally, if you intend to use the glusterfs feature, you must provide a dedicated block device on all kubernetes-worker nodes. The name of this device should be the same on each node.
Most of the global configuration is done via hostgroup variables, in group_vars/kubernetes-cluster.example. Rename this file to group_vars/kubernetes-cluster and edit it to reflect your desired setup. It will add a global dict object named kubernetes to your Ansible variables namespace, so don’t use that name anywhere else when working with the configured hosts.
In addition to the Ansible inventory set-up, the cluster's worker nodes need to be defined in kubernetes.cluster.worker_nodes. You need to specify the FQDN and the (primary) IP address of each cluster node.
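The authoritative layout is the one shipped in group_vars/kubernetes-cluster.example; the sketch below only illustrates the idea, and the field names (name, ip) as well as the IP addresses are assumptions, not the definitive schema:

kubernetes:
  cluster:
    worker_nodes:
      # hostnames must match your inventory; IP addresses are placeholders
      - name: k8s-node-001.example.com
        ip: 192.168.1.101
      - name: k8s-node-002.example.com
        ip: 192.168.1.102
      - name: k8s-node-003.example.com
        ip: 192.168.1.103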
Set kubernetes.cluster.network.type to your desired network overlay plugin (currently, only flannel is supported).
Set kubernetes.cluster.network.cidr to the overlay network you want to allocate pod addresses from.

The following are boolean values that specify which features you want to be enabled during setup:

kubernetes.cluster.feature_toggles.metallb defines whether you want a MetalLB setup in your cluster.
kubernetes.cluster.feature_toggles.glusterfs defines whether you want a GlusterFS setup in your cluster with Heketi provisioning.

Each feature must then be configured individually in the kubernetes.features section.
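For illustration, the networking and feature toggle settings might look roughly like this in group_vars/kubernetes-cluster (a sketch only; the CIDR is an example value, and the exact nesting is defined by the shipped .example file):

kubernetes:
  cluster:
    network:
      type: flannel          # currently the only supported overlay plugin
      cidr: 10.244.0.0/16    # example pod network, adjust to your environment
    feature_toggles:
      metallb: true          # deploy MetalLB
      glusterfs: true        # deploy GlusterFS with Heketi provisioning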
The play requires a non-privileged user for performing the Kubernetes cluster tasks. This user is configured in the kubernetes.cluster.k8s_user variable and can either be managed or unmanaged by the play. In managed mode, the user and group will be created along with any supplemental configuration (like required SSH keys); this user will also be deleted (along with its home directory!) from all nodes when running the cleanup tasks. In unmanaged mode, the play assumes an already existing user on the nodes and will not touch it during cleanup.
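A purely hypothetical sketch of that setting follows; the name and managed keys are assumptions for illustration, so check group_vars/kubernetes-cluster.example for the real layout:

kubernetes:
  cluster:
    k8s_user:
      name: k8s-admin   # hypothetical: non-privileged account used for the cluster tasks
      managed: true     # hypothetical: let the play create (and on cleanup remove) the user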
For the GlusterFS cluster to work, you need at least three worker nodes for your cluster. Each of the nodes must have a dedicated partition (or whole disk) available for Gluster to use, and they should ideally be of the same size on all nodes. All configurables are found in the kubernetes.features.glusterfs map:
namespace defines the namespace in your Kubernetes cluster to install the components into.
device is the name of the block device to use for clustered storage (must be the same across all nodes!).
storageclass_name is optional; if given, it is the name of the StorageClass object created for automatic volume provisioning within the Kubernetes cluster.
heketi_endpoint_clusterip defines the ClusterIP resource to allocate and bind the Heketi API to. This will be the endpoint you set HEKETI_CLI_SERVER to when talking with the API.
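Put together, the GlusterFS section could look roughly like this (all values are placeholders; the ClusterIP must lie within your cluster's service network):

kubernetes:
  features:
    glusterfs:
      namespace: glusterfs                  # namespace the components are installed into
      device: /dev/sdb                      # dedicated block device, same name on all worker nodes
      storageclass_name: glusterfs-storage  # optional StorageClass for automatic provisioning
      heketi_endpoint_clusterip: 10.96.0.50 # ClusterIP to bind the Heketi API to (HEKETI_CLI_SERVER)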
As mentioned above, the Kubernetes cluster set up by this play should not be used as-is for production purposes.
Some important things to keep in mind: