k8s Cluster Deployment

2021/02/06 14:48

1. Lab Environment
Hypervisor: Oracle VirtualBox 6.1
Linux version: CentOS 7.5
Master: 192.168.56.20
Node1: 192.168.56.21
Node2: 192.168.56.22

2. Procedure
2.1 Run on all three hosts
1) Disable SELinux (setenforce 0 takes effect immediately; setting SELINUX=disabled in /etc/selinux/config keeps it off across reboots)
[root@master ~]#
[root@master ~]# setenforce 0
[root@master ~]# cat /etc/selinux/config | grep disabled | grep -v "^#"
SELINUX=disabled
[root@master ~]#

2) Disable swap
[root@master ~]# swapoff -a
[root@master ~]# cat /etc/fstab | grep swap    # comment out the swap entry so swap stays off after reboot

#/dev/mapper/centos-swap swap swap defaults 0 0

[root@master ~]#

3) Enable traffic forwarding
[root@master ~]# cat /etc/sysctl.conf | grep -v "^#"    # add the following 3 lines

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@master ~]#
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
[root@master ~]#
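The two "cannot stat" errors appear because the br_netfilter kernel module is not loaded yet, so the bridge sysctl entries do not exist. Loading the module first lets sysctl -p apply cleanly (this fix-up is an addition here, not part of the original transcript; adding br_netfilter to /etc/modules-load.d/ would make it persistent):

[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1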




4) Enable kernel modules (the IPVS modules used by kube-proxy)
[root@master ~]# vim /etc/sysconfig/modules/ipvs.modules
[root@master ~]# cat /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@master ~]#
Run the following commands one by one:
[root@master ~]# modprobe -- ip_vs
[root@master ~]# modprobe -- ip_vs_rr
[root@master ~]# modprobe -- ip_vs_wrr
[root@master ~]# modprobe -- ip_vs_sh
[root@master ~]# modprobe -- nf_conntrack_ipv4
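As written, these modprobe calls only last until reboot. A common CentOS 7 pattern (an addition here, not in the original) is to make the script executable so boot-time module-loading tooling can run it, then verify the modules are in fact loaded:

[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4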

5) Stop the Linux firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]#




6) Install kubectl, kubeadm, and kubelet
First add the Kubernetes package repository; this lab uses the Aliyun mirror:
[root@master ~]#
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]#
Install the three packages:
[root@master ~]# yum install kubectl kubeadm kubelet
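Note that this unpinned install pulls whatever versions are newest in the repo. To keep the tools aligned with the v1.20.2 cluster built later in this walkthrough, the versions can be pinned explicitly (an alternative sketch, not the command the original ran):

[root@master ~]# yum install -y kubelet-1.20.2 kubeadm-1.20.2 kubectl-1.20.2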

7) Enable the kubelet service
[root@master ~]# systemctl enable kubelet
[root@master ~]# systemctl start kubelet
[root@master ~]# systemctl status kubelet    # the error shown here is expected: the cluster has not been set up yet, so kubelet cannot find its config file; it will start normally once the cluster is built


8) Install Docker
First remove any old Docker packages (since this is a fresh install, this step can be skipped):

yum remove docker \
              docker-client \
              docker-client-latest \
              docker-common \
              docker-latest \
              docker-latest-logrotate \
              docker-logrotate \
              docker-engine

Next install the dependency package:
yum install -y yum-utils

Then add the Docker repository:
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
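If download.docker.com is slow to reach, Aliyun mirrors the same repo file (the URL below is assumed from Aliyun's public mirror layout, consistent with the Aliyun repos used elsewhere in this lab):

yum-config-manager \
--add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo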


ll /etc/yum.repos.d/docker*

-rw-r--r--. 1 root root 1919 Feb 1 17:55 /etc/yum.repos.d/docker-ce.repo
[root@master ~]#

Then install:

yum install docker-ce docker-ce-cli containerd.io

Start Docker:

systemctl start docker

systemctl status docker

Run hello-world:

docker run hello-world

Check the Docker version:

docker version

Client: Docker Engine - Community    # client version
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:34:14 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community    # server version
 Engine:
  Version:          20.10.3
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       46229ca
  Built:            Fri Jan 29 14:32:37 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
[root@master ~]#

9) Configure Docker
First set the cgroup driver to systemd (the driver kubelet expects). Check the current driver:

docker info | grep -i cgroup

Cgroup Driver: cgroupfs
Cgroup Version: 1

Next edit the Docker config file:

vim /etc/docker/daemon.json    # the file usually does not exist by default; just create it

cat /etc/docker/daemon.json    # verify the written configuration

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]#
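While editing daemon.json, a registry mirror can optionally be added so later image pulls are faster. registry-mirrors is a standard dockerd option, but this addition is not part of the original setup and the mirror URL below is only a placeholder:

{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://<your-mirror-host>"]
}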


Then restart Docker:

systemctl restart docker

systemctl status docker

Check again:

docker info | grep -i cgroup

Cgroup Driver: systemd
Cgroup Version: 1

List the images Docker currently holds:

docker images

REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
hello-world   latest   bf756fb1ae65   13 months ago   13.3kB
[root@master ~]#

10) Pre-pull the required images
Note: all 7 images are obtained and retagged in the same way; apart from small version differences the process is identical, so only one image's pull-and-retag steps are shown here and the rest are omitted...
First list the images kubeadm needs:

kubeadm config images list

W0202 01:31:38.253362 16932 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0202 01:31:38.253661 16932 version.go:102] falling back to the local client version: v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
[root@master ~]#


Next, find the 7 images listed above on Docker Hub and download them:
Docker Hub : https://hub.docker.com/

For example, the kube-apiserver image: find kubeimage/kube-apiserver-amd64, click Tags, locate the matching version, and copy the image reference.

Pull the image:

docker pull kubeimage/kube-apiserver-amd64:v1.20.2

[root@master ~]# docker images
REPOSITORY                       TAG       IMAGE ID       CREATED         SIZE
kubeimage/kube-apiserver-amd64   v1.20.2   a8c2fdb8bf76   2 weeks ago     122MB
hello-world                      latest    bf756fb1ae65   13 months ago   13.3kB
[root@master ~]#



Retag the image to the name the cluster expects:

docker tag kubeimage/kube-apiserver-amd64:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2

docker images

REPOSITORY                       TAG       IMAGE ID       CREATED         SIZE
kubeimage/kube-apiserver-amd64   v1.20.2   a8c2fdb8bf76   2 weeks ago     122MB
k8s.gcr.io/kube-apiserver        v1.20.2   a8c2fdb8bf76   2 weeks ago     122MB
hello-world                      latest    bf756fb1ae65   13 months ago   13.3kB
[root@master ~]#



Delete the duplicate image tag (the cluster does not use it):

docker rmi kubeimage/kube-apiserver-amd64:v1.20.2    # remove this tag

Untagged: kubeimage/kube-apiserver-amd64:v1.20.2
Untagged: kubeimage/kube-apiserver-amd64@sha256:cfdd1ff3c1ba828f91603f0c41e06c8d29b774104d12be2d99e909672db009dd

docker images

REPOSITORY                  TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver   v1.20.2   a8c2fdb8bf76   2 weeks ago     122MB
hello-world                 latest    bf756fb1ae65   13 months ago   13.3kB
[root@master ~]#


Repeat the same method to pull and retag the remaining 6 images... (or script the whole cycle, as sketched below)
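The pull/retag/cleanup cycle can also be scripted instead of being typed seven times. This is a sketch under the assumption that a single mirror namespace (here Aliyun's google_containers mirror, not the kubeimage repos used above) hosts all seven images under exactly the names kubeadm listed:

# MIRROR is an assumption -- substitute whichever registry actually hosts these images
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.20.2 kube-controller-manager:v1.20.2 \
           kube-scheduler:v1.20.2 kube-proxy:v1.20.2 \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
    docker pull "$MIRROR/$img"                    # pull from the mirror
    docker tag "$MIRROR/$img" "k8s.gcr.io/$img"   # retag to the name kubeadm expects
    docker rmi "$MIRROR/$img"                     # drop the now-redundant mirror tag
done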

11) Configure name resolution
Do this on all 3 servers, writing the same entries into the file:

vim /etc/hosts

cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.56.20 master
192.168.56.21 node1
192.168.56.22 node2
[root@master ~]#


Remember: everything above must be done on the master and on both node machines... A quick sanity check that the names resolve from each host is sketched below.
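This one-liner (an addition for convenience, not in the original) pings each hostname once from the current machine:

for h in master node1 node2; do ping -c 1 "$h"; done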

2.2 Run only on the master
Initialize the master node.
First open the Kubernetes network add-ons page ( https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/ ) and pick Flannel, an overlay network provider for Kubernetes. The flannel project documents its deployment manifest at:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Searching that file for "0.0" turns up the default pod network, 10.244.0.0/16; copy that CIDR for the init command:
[root@master ~]#
[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.20.2 --apiserver-advertise-address=192.168.56.20    # initialize the master node with this command

Then, following the instructions kubeadm prints, run these three commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#
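Alternatively, since everything here runs as root, kubeadm's output also offers the simpler environment-variable route:

export KUBECONFIG=/etc/kubernetes/admin.conf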

Check which nodes currently exist in the K8S cluster:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   11m   v1.20.2
[root@master ~]#



List pods across all namespaces:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-dn88c          0/1     Pending   0          13m
kube-system   coredns-74ff55c5b-vt2k6          0/1     Pending   0          13m
kube-system   etcd-master                      1/1     Running   0          13m
kube-system   kube-apiserver-master            1/1     Running   0          13m
kube-system   kube-controller-manager-master   1/1     Running   0          13m
kube-system   kube-proxy-mwvtw                 1/1     Running   0          13m
kube-system   kube-scheduler-master            1/1     Running   0          13m
[root@master ~]#

Now download the kube-flannel.yml manifest from the link:
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# ls
anaconda-ks.cfg kube-flannel.yml
[root@master ~]#




Apply the kube-flannel.yml manifest:
[root@master ~]#
[root@master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]#

List pods across all namespaces again; there is now one more pod than before (the flannel DaemonSet pod):
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS     RESTARTS   AGE
kube-system   coredns-74ff55c5b-dn88c          0/1     Pending    0          25m
kube-system   coredns-74ff55c5b-vt2k6          0/1     Pending    0          25m
kube-system   etcd-master                      1/1     Running    0          26m
kube-system   kube-apiserver-master            1/1     Running    0          26m
kube-system   kube-controller-manager-master   1/1     Running    0          26m
kube-system   kube-flannel-ds-78cjc            0/1     Init:0/1   0          110s
kube-system   kube-proxy-mwvtw                 1/1     Running    0          25m
kube-system   kube-scheduler-master            1/1     Running    0          26m
[root@master ~]#

Then inspect the flannel pod with:
[root@master ~]#
[root@master ~]# kubectl describe pods kube-flannel-ds-78cjc -n kube-system

The describe output shows the pod coming up and then running. Once all master-side pods are Running and Ready, the master-side initialization is complete.
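With flannel up, the master node itself should also flip from NotReady to Ready, since the earlier NotReady status was only due to the missing CNI plugin. Two quick checks (added here for completeness) confirm it:

kubectl get nodes
kubectl get pods --all-namespaces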

2.3 Initialize the two nodes
1) First, on the master, run the following command to obtain the join command the nodes will use; then start the node-side initialization:
[root@master ~]#
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.56.20:6443 --token bksfb4.3yoh5zg86s73tlp1 --discovery-token-ca-cert-hash sha256:d9ced19c394d11b31af23190ff17eee1ebfe8c231accb1acb39bb2435ce3ad49
[root@master ~]#
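Note: join tokens created this way expire after 24 hours by default. If a node joins later, list the still-valid tokens or mint a fresh join command:

[root@master ~]# kubeadm token list
[root@master ~]# kubeadm token create --print-join-command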

The next steps really do run only on the two nodes:
2) Run on node1:
[root@node1 ~]# kubeadm join 192.168.56.20:6443 --token bksfb4.3yoh5zg86s73tlp1 --discovery-token-ca-cert-hash sha256:d9ced19c394d11b31af23190ff17eee1ebfe8c231accb1acb39bb2435ce3ad49

3) Run on node2:
[root@node2 ~]#
[root@node2 ~]# kubeadm join 192.168.56.20:6443 --token bksfb4.3yoh5zg86s73tlp1 --discovery-token-ca-cert-hash sha256:d9ced19c394d11b31af23190ff17eee1ebfe8c231accb1acb39bb2435ce3ad49

3. Verify the cluster installation
On the master:
[root@master ~]#
[root@master ~]# watch kubectl get nodes -o wide    # the cluster now has 3 nodes running


[root@master ~]#
[root@master ~]# watch kubectl get pods --all-namespaces    # all the pods are running
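The original captured both watches as screenshots; a hypothetical rendering of the node listing once both joins succeed (ages illustrative, wide-output columns omitted):

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45m   v1.20.2
node1    Ready    <none>                 5m    v1.20.2
node2    Ready    <none>                 5m    v1.20.2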

4. Conclusion
The K8S cluster was installed successfully.
