How to build k8s cluster on CentOS7

1.Machine Information

[root@kube-gmg-03-master-1 ~]# uname -a
Linux kube-gmg-03-master-1 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@kube-gmg-03-master-1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

2.Host information

This article uses three machines to deploy the k8s runtime environment. The details are as follows:

Nodes and functions        Hostname      IP
Master, etcd, registry     k8s-master    10.255.61.1
Node1                      k8s-node-1    10.255.61.2
Node2                      k8s-node-2    10.255.61.3

3.Set the hostname of the three machines

  • Execute on the Master:
hostnamectl --static set-hostname k8s-master
  • Execute on Node1:
hostnamectl --static set-hostname k8s-node-1
  • Execute on Node2:
hostnamectl --static set-hostname k8s-node-2
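
You can verify the change on each machine:

hostnamectl status | grep 'Static hostname'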

4.Set the hosts

All three machines execute the following commands:

echo '10.255.61.1   k8s-master
10.255.61.1   etcd
10.255.61.1   registry
10.255.61.2   k8s-node-1
10.255.61.3   k8s-node-2' >> /etc/hosts
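
You can check that the names resolve before continuing, for example:

ping -c 1 etcd
ping -c 1 k8s-node-1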

5.Close the firewall on the three machines

systemctl disable firewalld.service
systemctl stop firewalld.service
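
Optionally confirm that firewalld is stopped and disabled:

systemctl is-active firewalld    # should print "inactive"
systemctl is-enabled firewalld   # should print "disabled"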

6.Deploy etcd on three machines

k8s relies on etcd to run, so etcd must be deployed first. This article installs it with yum:

yum install etcd -y

The configuration file for etcd installed by yum is /etc/etcd/etcd.conf. Edit it and change the values shown uncommented below (ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS); the remaining lines keep their defaults:

vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status (start the etcd on the master first):

systemctl start etcd
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0 
etcdctl -C http://etcd:4001 cluster-health
etcdctl -C http://etcd:2379 cluster-health
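
If the cluster is healthy, the last two commands should print output similar to the following (the member ID will differ):

member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy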

7.Deploy the master

7.1 Install docker

yum install docker -y

Edit the Docker configuration file so that images can be pulled from the private registry over plain HTTP:

vim /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Note that --insecure-registry is appended to the existing OPTIONS line rather than set in a second OPTIONS= assignment, which would silently override the first.

Enable Docker to start at boot and start the service:

systemctl enable docker.service
systemctl start docker
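
As a quick check, docker info should now list the private registry as insecure (assuming the OPTIONS change above is in effect):

docker info 2>/dev/null | grep -A 2 'Insecure Registries'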

7.2 Install kubernetes

yum install kubernetes -y
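
On CentOS 7 this package should pull in the master and node subpackages; what was installed can be listed with:

rpm -qa | grep -E '^kube'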

7.3 Configure and start kubernetes

The following components need to be installed and run on the kubernetes master:

  • Kubernetes API Server
  • Kubernetes Controller Manager
  • Kubernetes Scheduler

7.4 Change the following configuration files accordingly:

$ vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

$ vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
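
Before starting the services, it can be useful to double-check the effective (uncommented) settings in both files:

grep '^KUBE' /etc/kubernetes/apiserver /etc/kubernetes/config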

7.5 Enable the services to start at boot and start them:

systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
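
Once all three services are up, the master components can be checked with kubectl; a healthy setup should produce output similar to this:

$ kubectl -s http://k8s-master:8080 get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}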

8.Deploy the nodes

8.1 Install docker and kubernetes

Refer to 7.1 Install docker and 7.2 Install kubernetes.

8.2 Start kubernetes on the nodes

The following components need to be running on the kubernetes node:

  • Kubelet
  • Kubernetes Proxy

8.2.1 Configuration files

$ vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

$ vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# (set this to k8s-node-2 on the second node)
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

8.3 Enable the services to start at boot and start them

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
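
A quick sanity check on each node:

systemctl is-active kubelet kube-proxy   # both should print "active"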

8.4 View Status

On the master, view the nodes in the cluster and their status:

$  kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s

At this point a kubernetes cluster has been built, but containers on different nodes cannot yet reach each other, so please continue with the next steps.

9. Create an overlay network - Flannel

9.1 Install Flannel

Execute the following command on both the master and the nodes to install it:

[root@k8s-master ~]# yum install flannel

9.2 Configuring Flannel

Edit /etc/sysconfig/flanneld on both the master and the nodes:

[root@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

9.3 Configuring the flannel key in etcd

Flannel stores its configuration in etcd so that all Flannel instances stay consistent, so the following key must be written on etcd. (The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX item in /etc/sysconfig/flanneld above; if the two do not match, flanneld will fail to start.)

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }

9.4 Startup

After starting Flannel, docker and the kubernetes services must be restarted so that they pick up the flannel network.

Execute on master:

systemctl enable flanneld.service 
systemctl start flanneld.service 
systemctl restart docker.service
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

Execute on node:

systemctl enable flanneld.service 
systemctl start flanneld.service 
systemctl restart docker.service
systemctl restart kubelet.service
systemctl restart kube-proxy.service
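
Once flanneld is running on a machine, it writes its assigned subnet to /run/flannel/subnet.env, and after the docker restart the docker0 bridge should fall inside that range. A quick check (the subnet values below are illustrative and will differ per node):

cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.0.0.0/16
FLANNEL_SUBNET=10.0.34.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false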