Kubernetes Installation and Deployment

K8s Installation Methods

1) kubeadm           the K8s installation/bootstrap tool
2) Binary install    recommended for production clusters
3) Ansible install   https://github.com/easzlab/kubeasz
4) Alibaba Cloud ACK
5) Amazon EKS
6) Rancher

K8s Deployment Strategies

Rolling Update: Updates the application version without affecting availability. Kubernetes gradually replaces old-version Pods with new-version Pods, keeping the application available throughout the process.

Blue-Green Deployment: Achieves zero-downtime updates by running two identical copies of the application (a blue and a green version) in production. Traffic is first routed to the blue version; routing rules then shift traffic to the green version. Once the green version is confirmed stable, all traffic is switched over and the blue version is shut down.

Canary Deployment: Releases new features gradually by steering a small share of production traffic to the new-version Pods to validate stability and performance, then increasing that share step by step until all traffic reaches the new version. If the new version misbehaves, it can be rolled back quickly to limit the impact on users.

Autoscaling Deployment: Automatically scales the number of Pods up or down based on load to meet changing traffic. Kubernetes provides the Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler, which adjust resources based on metrics such as CPU and memory utilization. This improves elasticity and availability while making full use of resources and reducing cost.
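
A minimal sketch of driving the rolling-update strategy above from the command line, assuming a hypothetical Deployment named web whose container is also named web:

# Trigger a rolling update by switching the container image (names are hypothetical)
kubectl set image deployment/web web=myapp:v2
# Watch old Pods being replaced by new ones
kubectl rollout status deployment/web
# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/web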

Environment Preparation

Hostname IP Address Role Spec Installed Software
master-1 10.0.0.201 master 1C4G API Server, Controller Manager, Scheduler, Kube-proxy, Kubelet, etcd
node-1 10.0.0.202 node1 1C2G Docker, Kubelet, Kube-proxy
node-2 10.0.0.203 node2 1C2G Docker, Kubelet, Kube-proxy

Network Planning

Type        CIDR
Pod IP      10.2.0.0/16
Cluster IP  10.1.0.0/16
Node IP     10.0.0.0/24

K8s Deployment Prerequisites

Edit the kubelet configuration file

Run on all hosts.

cat >/etc/sysconfig/kubelet <<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

Adjust kernel parameters

Run on all hosts.

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF
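
The settings above only take effect once the br_netfilter module is loaded and the sysctl files are re-read; the following applies them immediately:

# Load the bridge netfilter module and apply all files under /etc/sysctl.d
modprobe br_netfilter
sysctl --system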

Switch the yum source

Run on all hosts.

# Switch to a domestic mirror and configure the Docker repo
# 1. Download the official repo file
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
# 2. Replace the official download URL with the Tsinghua mirror
sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
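
Optionally verify that the new repo file resolves against the mirror:

yum clean all
yum repolist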

Time Synchronization

Run on all hosts.

yum install -y chrony
systemctl start chronyd  
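
To keep time in sync across reboots and confirm the time sources are reachable:

systemctl enable chronyd
chronyc sources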

Disable Swap

Run on all hosts.

swapoff -a
sed -i '/swap/d' /etc/fstab
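
A quick check that swap is really off (the Swap line should read 0):

free -h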

Load the IPVS kernel modules (used by K8s networking)

Run on all hosts.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#! /bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules

lsmod|grep -e 'ip_vs' -e 'nf_conntrack_ipv'
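
Note: on kernels 4.19 and newer the nf_conntrack_ipv4 module was merged into nf_conntrack, so if that modprobe fails, load the renamed module instead:

# Newer kernels ship nf_conntrack rather than nf_conntrack_ipv4
modprobe -- nf_conntrack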

Install Docker

Install on all hosts.

yum install -y docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io

Configure a registry mirror

Run on all hosts.

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w3xui52l.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
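
Confirm that Docker picked up the systemd cgroup driver, which must match the kubelet setting configured earlier:

docker info | grep -i "cgroup driver"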

Configure host name resolution

Run on all hosts.

cat >> /etc/hosts <<EOF
10.0.0.201 master-1
10.0.0.202 node-1
10.0.0.203 node-2
EOF

Set up passwordless SSH (master to nodes)

[root@master-1 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa >/dev/null 2>&1
[root@master-1 ~]# ssh-copy-id -i ~/.ssh/id_dsa.pub root@10.0.0.202
[root@master-1 ~]# ssh-copy-id -i ~/.ssh/id_dsa.pub root@10.0.0.203
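
A quick check that key-based login works without a password prompt:

ssh root@10.0.0.202 hostname
ssh root@10.0.0.203 hostname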

Install kubeadm

# 1. Add the Kubernetes yum repo (all hosts)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# 2. Install the pinned versions (all hosts)
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 ipvsadm -y

kubelet-1.19.3 : the node agent that starts and manages Pods on every node
kubeadm-1.19.3 : the tool used to bootstrap the cluster
kubectl-1.19.3 : the Kubernetes command-line client
ipvsadm        : userspace tool required by the IPVS (LVS) proxy mode

# 3. Start kubelet (all hosts; it will keep restarting until kubeadm init/join runs, which is expected)
systemctl start kubelet
systemctl enable kubelet

# 4. Run the initialization command on the master
kubeadm init \
--apiserver-advertise-address=10.0.0.201 \
--image-repository registry.aliyuncs.com/google_containers  \
--kubernetes-version=v1.19.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--service-dns-domain=cluster.local \
--ignore-preflight-errors=Swap \
--ignore-preflight-errors=NumCPU

------------------------------------
# Initialization succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# 5. Set up the kubeconfig (run these commands as shown)
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Pods need a network plugin (flannel, deployed below)
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# 6. Join the worker nodes to the cluster (run on every node)
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.201:6443 --token fou73f.0kswjbrzwsuzq4p4 \
    --discovery-token-ca-cert-hash sha256:306e2b18da7905b9dbe32bb47e8ea4d31e82b756365fbe41d390fe3c17874b48 

------------------------------------

# 7. Check the cluster (nodes show NotReady until a network plugin is installed)
[root@master-1 ~]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
master-1   NotReady   master   11m     v1.19.3
node-1     NotReady   <none>   16s     v1.19.3
node-2     NotReady   <none>   2m24s   v1.19.3

## Regenerate the join command if it was lost
[root@master-1 ~]# kubeadm token create --print-join-command
## How to re-initialize
[root@master-1 ~]# kubeadm reset
[root@node-2 ~]# kubeadm reset
[root@node-2 ~]# rm -f /etc/kubernetes/kubelet.conf
[root@node-2 ~]# rm -f /etc/kubernetes/bootstrap-kubelet.conf
[root@node-2 ~]# rm -f /etc/kubernetes/pki/ca.crt
[root@node-2 ~]# systemctl restart kubelet
kubeadm join 10.0.0.201:6443 --token qsnqjj.bgx7v5mwvgxq0r69 \
--discovery-token-ca-cert-hash sha256:da8e0dd30a022ccc84c9fe36e58ec953f12a804c7a52a903e639987daec1be05

Switch kube-proxy to IPVS mode

[root@master-1 ~]# kubectl edit cm kube-proxy -n kube-system
Change mode: "" to mode: "ipvs"

# Restart kube-proxy (delete the pods; the DaemonSet recreates them with the new config)
[root@master-1 ~]# kubectl -n kube-system get pod|grep kube-proxy|awk '{print "kubectl -n kube-system delete pod "$1}'|bash
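
Once the new kube-proxy Pods are running, IPVS virtual servers should appear on the nodes; the exact entries depend on the Services in the cluster:

ipvsadm -Ln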

Deploy Flannel

[root@master-1 ~]# wget https://github.com/flannel-io/flannel/archive/refs/heads/master.zip
[root@master-1 ~]# unzip master.zip
[root@master-1 flannel-master]# cd /root/flannel-master/Documentation/

## Edit the flannel manifest
# 1) Set the network to the planned Pod CIDR 10.2.0.0/16
[root@master-1 Documentation]# vim kube-flannel.yml
"Network": "10.2.0.0/16"
# 2) Bind flannel to the eth0 interface (add --iface=eth0 to the container args)
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.24.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0

# Apply the manifest
[root@master-1 Documentation]# kubectl apply -f kube-flannel.yml

# Check the flannel pods
[root@master-1 Documentation]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-rrmrx   1/1     Running   0          5m26s
kube-flannel-ds-wfvx4   1/1     Running   0          5m26s
kube-flannel-ds-xm9pr   1/1     Running   0          5m26s

# Check node status (all nodes should now be Ready)
[root@master-1 Documentation]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   68m   v1.19.3
node-1     Ready    <none>   57m   v1.19.3
node-2     Ready    <none>   59m   v1.19.3

# Check CoreDNS and the control-plane pods
[root@master-1 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-bp765           1/1     Running   0          135m
coredns-6d56c8448f-w7nkh           1/1     Running   0          135m
etcd-master-1                      1/1     Running   0          135m
kube-apiserver-master-1            1/1     Running   1          135m
kube-controller-manager-master-1   1/1     Running   2          135m
kube-proxy-9z7rh                   1/1     Running   0          93m
kube-proxy-nnvlg                   1/1     Running   0          94m
kube-proxy-r9wkj                   1/1     Running   0          93m
kube-scheduler-master-1            1/1     Running   2          135m

Label the worker node roles

[root@master-1 ~]# kubectl label nodes node-1 node-role.kubernetes.io/node01=
[root@master-1 ~]# kubectl label nodes node-2 node-role.kubernetes.io/node02=
[root@master-1 ~]# kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    master   139m   v1.19.3
node-1     Ready    node01   128m   v1.19.3
node-2     Ready    node02   131m   v1.19.3
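
If a role label needs to be removed later, appending a dash to the key deletes it:

kubectl label nodes node-1 node-role.kubernetes.io/node01-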