Preface
Prepare at least three CentOS servers: one master node and two worker nodes. The CentOS version must be 7.5 or later; I am using 7.9. A few extra conditions also apply:
- Each machine needs at least 2 CPU cores and 2 GB of RAM (a single core will not work; I have tried)
I. Preparing the k8s environment
The servers that will run k8s must meet the following requirements:
- A compatible Linux host: the Kubernetes project provides generic instructions for Debian- and Red Hat-based distributions, as well as for distributions without a package manager
- At least 2 GB of RAM and 2 CPU cores per machine
- Preferably, the firewall is disabled
- No duplicate hostnames, MAC addresses, or product_uuid values among the nodes
Next, configure and install the following on every node.
1. Set a unique static IP on each system
Open the NIC configuration file /etc/sysconfig/network-scripts/ifcfg-ens33 with vi:
# Change BOOTPROTO to static
BOOTPROTO=static
# Add the IP address, gateway, and DNS; the gateway can be found with the command "netstat -rn"
IPADDR=192.168.253.131
GATEWAY=192.168.253.2
DNS1=8.8.8.8
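For reference, a complete ifcfg-ens33 looks roughly like the sketch below; the device name, netmask, and addresses are assumptions from my environment, so adjust them to yours, then restart the network service so the change takes effect.
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.253.131
NETMASK=255.255.255.0
GATEWAY=192.168.253.2
DNS1=8.8.8.8
# Apply the new network configuration (CentOS 7)
systemctl restart network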
2. Time synchronization
k8s requires the clocks of the cluster nodes to be precisely consistent, so we simply use chronyd to synchronize time from the network.
# Start the chronyd service
systemctl start chronyd
# Enable it at boot
systemctl enable chronyd
# Check the current time
date
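If you want to confirm that the node is actually syncing against a time source, chrony ships a client you can query (chrony is installed by default on CentOS 7; this is just a sanity check):
# List the NTP sources chronyd is using
chronyc sources
# Show tracking details such as the current time offset
chronyc tracking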
3. Reset the hostnames
In k8s, hostnames must not repeat, so give each node a different one.
# Master node
hostnamectl set-hostname master
# Worker node 1
hostnamectl set-hostname node1
# Worker node 2
hostnamectl set-hostname node2
4. Set up hosts name resolution
Run the following command on all nodes to add a hosts entry pointing at the master node. Do not copy it verbatim: replace the IP with your own.
# Run on all nodes
echo "192.168.253.131 cluster-endpoint" >> /etc/hosts
The initialization step later needs to reach the master by name, so also add the following entries to the hosts file; the hostnames must match the ones set in step 3 above.
# The following is done on the master node only
echo "192.168.253.131 master" >> /etc/hosts
echo "192.168.253.132 node1" >> /etc/hosts
echo "192.168.253.133 node2" >> /etc/hosts
5. Disable SELinux
SELinux (Security-Enhanced Linux) is a security module that provides access-control policies; in short, it restricts users to the policies and rules set by the system administrator. Here we put SELinux into permissive mode (effectively disabling it).
# Disable temporarily (reverts after a reboot); "setenforce Permissive" has the same effect
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
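To check the current mode, getenforce should report Permissive right after setenforce 0 (and Disabled after the config change plus a reboot):
getenforce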
6. Disable the swap partition
Swap is virtual memory: when physical memory runs low, the system frees up space by pushing part of memory out to disk for running programs to use. Turning swap off improves k8s performance, and by default kubelet refuses to start while swap is enabled.
# Disable swap temporarily (lost after a reboot)
swapoff -a
# Disable swap permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
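To verify, the Swap line should show 0 after swapoff (or after a reboot once /etc/fstab has been edited):
free -h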
7. Let iptables see bridged traffic
Pod traffic on a node crosses a Linux bridge, and bridged traffic is not passed through iptables by default. Loading the br_netfilter module and enabling the sysctls below makes bridged traffic visible to iptables, so kube-proxy's rules can apply to it.
Copy all of the following into the command line and run it:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the configuration:
sudo sysctl --system
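To confirm the module is loaded and the sysctls took effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables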
8. Disable iptables and the firewall service
k8s and docker generate a large number of iptables rules at runtime; to keep the system's own rules from getting mixed up with them, turn the system services off outright.
# Stop iptables (ignore this if the service does not exist)
systemctl stop iptables
systemctl disable iptables
# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
9. Enable ipvs
Kubernetes Services support two proxy modes, one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but to use it the ipvs kernel modules must be loaded manually.
1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
2. Write the modules to be loaded into a script file
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
4. Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
5. Check whether the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
II. Installing Docker
First, install the Docker runtime on every server.
1. Remove any existing Docker installation
# List installed docker packages
yum list installed | grep docker
# Remove the docker components
yum -y remove docker*
# Besides docker itself, containerd.io (the container runtime component) also needs to be removed
yum -y remove containerd.io.x86_64
# Delete the docker data directory
rm -rf /var/lib/docker
2. Use the Alibaba Cloud mirror repository
# Use the Alibaba Cloud mirror repository inside China -- much faster, recommended
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
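If the yum-config-manager command is not found, it is provided by the yum-utils package; install that first and re-run the command above:
yum install -y yum-utils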
3. Install a specific Docker version
yum install \
docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6 -y
4. Use systemd instead of cgroupfs, and configure a registry mirror
By default Docker uses cgroupfs as its cgroup driver, while k8s recommends systemd instead, so add the following to /etc/docker/daemon.json (create the file manually if it does not exist):
{
"registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
5. Start Docker
# Option 1: start docker
systemctl start docker
# Option 2: start docker and enable it at boot
systemctl enable docker --now
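If Docker was already running when you edited daemon.json, restart it so the new cgroup driver takes effect, then verify:
systemctl restart docker
# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"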
III. Installing the three k8s components
The following must be done on all nodes.
1. Install kubelet, kubeadm, and kubectl
Downloading from the official k8s repository is very slow from China, so it is recommended to install from the Alibaba Cloud mirror instead.
# Configure the repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Notes:
- enabled=1: enable this repository
- gpgcheck=0: whether to verify GPG signatures of packages (1 = on, 0 = off)
- repo_gpgcheck=0: whether to verify the signature and integrity of the repository metadata (1 = on, 0 = off)
2. Install the three components
The --disableexcludes=kubernetes flag tells yum to ignore any exclude rules configured for the kubernetes repository.
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
3. Add configuration
Add the following to the file /etc/sysconfig/kubelet:
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
Notes:
- k8s also uses systemd instead of cgroupfs
- The default kube-proxy mode is iptables; here it is switched to ipvs, which reportedly performs better. This setting is optional.
4. Start kubelet
# Enable at boot and start now
systemctl enable --now kubelet
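At this point kubelet will keep restarting in a loop because kubeadm has not initialized or joined the node yet; that is expected. You can check its state with:
systemctl status kubelet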
IV. Bootstrapping the cluster with kubeadm
In theory the following only needs to be done on the master node.
1. Download the images each machine needs
Before initializing the master node, a set of component images has to be downloaded. They are hosted on registries we cannot reach from China, so we pull them from an Alibaba Cloud mirror instead. Running the commands below creates a script named images.sh in the current directory; executing it pulls the required images into Docker. ==In theory this is only needed on the master node, but to be safe we run it on every node.==
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
Make the script executable and run it:
chmod +x ./images.sh && ./images.sh
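Once the script finishes, you can check that the images are present locally (the repository prefix below is the mirror used by the script):
docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images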
V. Preparing the master node
Note: the configuration and installation below belong to the master node only, so run them on the master node; do not run them on the worker nodes.
1. Initialize the master node
Use kubeadm init to quickly initialize a master node:
kubeadm init \
--apiserver-advertise-address=192.168.253.131 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
Notes:
- --apiserver-advertise-address: the address the API server advertises; this must be the master node's IP
- --control-plane-endpoint: the control-plane endpoint; use the master name added to the hosts file earlier
- --image-repository: the image registry; a domestic mirror is faster
- --kubernetes-version: the version number; match the k8s version you installed
- --service-cidr: the Service network range
- --pod-network-cidr: the Pod network range
- --ignore-preflight-errors=all: ignore preflight check failures. Kubernetes requires at least 2 CPU cores and 2 GB of RAM; with only 1 core or 1 GB of memory the installation cannot continue. all ignores every check, so even a single-core CPU can install k8s. My cloud server is 1 core / 2 GB, so I need to add this option.
The installation takes a few minutes. If the following output appears, initialization succeeded, but we are not done yet: to actually use the cluster, keep following the steps below.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
--discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
--discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72
2. Post-initialization steps
Continue configuring according to the hints above; to use the cluster you still need to run the following commands.
==Remember: these commands are copied from my successful init output in step 1; copy and run the ones from your own output.==
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Join the worker nodes to the cluster (join token, ==worker nodes only==)
To add worker nodes to the cluster, run the following command on each worker node; it contains the cluster join token.
==Remember: this command is copied from my init output in step 1; copy and run the one from your own output.==
The following is run only on the worker nodes.
kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
--discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72
After the nodes join, run kubectl get nodes on the master node. Besides the master there are now two worker nodes, and all of them are in the NotReady state because no network plugin has been installed yet; that is exactly what we do next.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 4h5m v1.20.9
node1 NotReady <none> 98m v1.20.9
node2 NotReady <none> 7s v1.20.9
Note that the token is only valid for 24 hours; after that a new one has to be generated. Run the following command on the master node to create a new token:
# This command must be run on the master node to generate a token
kubeadm token create --print-join-command
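You can also list the existing tokens and their expiry times:
kubeadm token list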
4. Install a Pod network plugin (==master node only==)
The following is run only on the master node.
There are several network plugins to choose from; if you can reach the official site, you can pick one from https://kubernetes.io/docs/concepts/cluster-administration/addons/
Here we install the calico plugin.
# Download calico.yaml to the current directory
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
# It is hosted abroad; for readers who cannot reach it, the contents of calico.yaml are included at the end of this article -- just copy them into a file
# Apply the network plugin
kubectl apply -f calico.yaml
If the installation succeeds, output like the following is shown:
[root@master ~]# kubectl apply -f calico.yaml
secret/calico-etcd-secrets configured
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers created
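While waiting, you can watch the calico pods come up in the kube-system namespace (press Ctrl+C to stop watching):
kubectl get pods -n kube-system -w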
A few minutes later, all of the nodes show as Ready:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 5h5m v1.20.9
node1 Ready <none> 158m v1.20.9
node2 Ready <none> 59m v1.20.9
VI. Self-healing demo
One of the best things about a k8s cluster is that it recovers on its own: when a machine in the cluster goes down for some reason, everything comes back up automatically after it restarts, keeping the cluster available. Let's test this by rebooting all three servers:
reboot
A few minutes later, the k8s machines have recovered and every node is Ready again; this is the self-healing behavior.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 15h v1.20.9
node1 Ready <none> 13h v1.20.9
node2 Ready <none> 11h v1.20.9
VII. Deploying an nginx
Run the following commands (on the master node):
# Deploy nginx: create a Deployment (pod controller) named nginx
kubectl create deployment nginx --image=nginx:1.14-alpine
# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort
Right after running these, the container is still being created; ContainerCreating means exactly that.
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-65c4bffcb6-q64cf 0/1 ContainerCreating 0 2m47s
A few minutes later it is ready; Running means the container is up.
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-65c4bffcb6-q64cf 1/1 Running 0 7h44m
Next, find out which port was mapped. From the output below, the node port is 30115:
[root@master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
nginx NodePort 10.96.149.130 <none> 80:30115/TCP 7h43m
Since the cluster has three nodes, nginx can be reached through any of the three node IPs on that port.
# All three of the following URLs serve nginx
http://192.168.253.131:30115/
http://192.168.253.132:30115/
http://192.168.253.133:30115/
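You can also check it from the command line; the port 30115 is from my run, so use whatever kubectl get service shows on yours:
curl http://192.168.253.131:30115/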
Now that nginx is deployed and reachable, which node did it land on? There are two ways to find out.
Method 1: use the describe command
kubectl describe pod nginx-65c4bffcb6-q64cf
# The Events section at the very end shows the event log; the lines starting with # are comments I added
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
# Pulling the nginx image
Normal Pulling 33s kubelet Pulling image "nginx:1.14-alpine"
# Successfully assigned default/nginx-65c4bffcb6-q64cf to node2; default is the namespace, nginx-65c4bffcb6-q64cf is the pod
Normal Scheduled 51s default-scheduler Successfully assigned default/nginx-65c4bffcb6-q64cf to node2
# The nginx image was pulled successfully; the pull took about 25.96 seconds
Normal Pulled 2s kubelet Successfully pulled image "nginx:1.14-alpine" in 25.96447344s
# The nginx container was created
Normal Created 50s kubelet Created container nginx
# The nginx container was started
Normal Started 50s kubelet Started container nginx
Method 2: use the -o wide option
kubectl get pod -o wide -n default
The result is shown below; the NODE column is the node name:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-65c4bffcb6-fdjhv 1/1 Running 0 4h53m 192.168.104.8 node2 <none> <none>
The end
The whole installation process is fairly simple, but not a single step can go wrong: a mistake in any earlier step will cause all sorts of problems once the k8s cluster is running.
VIII. calico.yaml
The following content was downloaded with: wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
    # These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: calico/cni:v3.10.4
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.10.4
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.10.4
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.10.4
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.10.4
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml