Description
Equipment used: one master server and N worker servers.
Note: the script must be executed with root privileges.
Reference: http://docs.kubernetes.org.cn/
Ports required on each node:
| Master port range | Purpose | Worker port range | Purpose |
|---|---|---|---|
| 6443* | Kubernetes API server | 10250 | Kubelet API |
| 2379-2380 | etcd server client API | 10255 | Read-only Kubelet API (Heapster) |
| 10250 | Kubelet API | 30000-32767 | Default port range for NodePort Services |
| 10251 | kube-scheduler | | |
| 10252 | kube-controller-manager | | |
| 10255 | Read-only Kubelet API (Heapster) | | |
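The script below simply stops and disables firewalld, so nothing has to be opened by hand. If you would rather keep the firewall running, the master-side ports could be opened roughly like this (a firewalld sketch, not part of the script; open the worker ports analogously):
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload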
Usage
All nodes
vim /etc/profile
Append the following at the end (for example: export DEPLOY_DIR=/data and export LAN_HOST=192.168.0.123):
export DEPLOY_DIR={directory where the script is placed}
export LAN_HOST={LAN IP address of this machine}
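The script reads ${DEPLOY_DIR} and ${LAN_HOST} from the environment, so make the new variables active in the current shell (log out and back in, or simply re-source the file) before continuing:
source /etc/profile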
Create the directory, change into it, and copy the script there:
mkdir -p ${DEPLOY_DIR}/deploy/k8s && cd ${DEPLOY_DIR}/deploy/k8s
Master node
kubernetes-master is the hostname and can be customized; master is the option that tells the script to set this machine up as the master node.
chmod +x k8s_init.sh
./k8s_init.sh kubernetes-master master
There is a lot to download, so this step takes a while.
The cluster join information is written to join_msg.txt; it is needed on the worker nodes.
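If you only need the join command itself, something like the following pulls it out of that file (assuming the path the script writes to):
grep -A 1 "kubeadm join" ${DEPLOY_DIR}/deploy/k8s/join_msg.txt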
Worker node
192.168.10.220 is the master node's IP address; the token and discovery-token-ca-cert-hash values come from join_msg.txt.
chmod +x k8s_init.sh
./k8s_init.sh kubernetes-worker01 node
kubeadm join 192.168.10.220:6443 --token 8do0f7.9jykovpyj1y8es0h \
--discovery-token-ca-cert-hash sha256:5a66cb41ee691f8fe1917e7dda2c7bc24f1d87be22770df38428354a3ba5fb70
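After joining, a quick sanity check on the worker itself is to confirm the kubelet is running (the node may still take a moment to become Ready):
systemctl status kubelet --no-pager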
Check the node status on the master:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 25m v1.18.0
kubernetes-worker01 Ready <none> 2m13s v1.18.0
All nodes must show STATUS Ready; worker hosts may need a little while.
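If a node stays NotReady for long, the system pods (flannel, kube-proxy, coredns) are usually still pulling images; for example, check from the master with:
kubectl get pods -n kube-system -o wide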
Notes
The token recorded in join_msg.txt is only valid for one day. If it has expired, create a new token so that additional nodes can still join:
kubeadm token create
# output: 8do0f7.9jykovpyj1y8es0h
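Existing tokens and their expiry times can be listed with:
kubeadm token list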
The discovery-token-ca-cert-hash value is the SHA-256 hash of the CA certificate's public key, obtained with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# output: 5a66cb41ee691f8fe1917e7dda2c7bc24f1d87be22770df38428354a3ba5fb70
# Final result:
kubeadm join 192.168.10.220:6443 --token 8do0f7.9jykovpyj1y8es0h \
--discovery-token-ca-cert-hash sha256:5a66cb41ee691f8fe1917e7dda2c7bc24f1d87be22770df38428354a3ba5fb70
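Alternatively, kubeadm can generate a fresh token and print the complete join command in one step:
kubeadm token create --print-join-command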
Script contents
k8s_init.sh
#!/bin/bash
if [ -z "$2" ]; then
echo "请输入类别:master or node, bash.sh [hostname] [master]
例 master:k8s_init.sh kubernetes-master master
例 node01:k8s_init.sh kubernetes-node01 node
例 node02:k8s_init.sh kubernetes-node02 node
"
exit 1
fi
##############
## Master node ##
##############
#### Part 1: environment initialization ####
# kubernetes version
version=v1.18.0
kubelet=kubelet-1.18.0-0.x86_64
kubeadm=kubeadm-1.18.0-0.x86_64
kubectl=kubectl-1.18.0-0.x86_64
# file that records how to join the cluster
join_node_msg=${DEPLOY_DIR}/deploy/k8s/join_msg.txt
# flannel network manifest
flannel=${DEPLOY_DIR}/deploy/k8s/kube-flannel.yml
#### Part 2: node configuration ####
node_name=kubernetes-master
if [ -n "$1" ]; then
node_name=$1
fi
# set the hostname
hostnamectl set-hostname ${node_name}
# register this node's LAN address in /etc/hosts
cat >> /etc/hosts << EOF
${LAN_HOST} ${node_name}
EOF
#ssh-keygen
#ssh-copy-id -i $node01
#ssh-copy-id -i $node02
#ssh-copy-id -i $node03
#scp /etc/hosts node02:/etc/hosts
#scp /etc/hosts node03:/etc/hosts
# stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# turn off swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
# load kernel modules and make bridged traffic visible to iptables
modprobe br_netfilter
modprobe ip_vs_rr
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
# enable ipv4 forwarding
grep 'net.ipv4.ip_forward = 1' /etc/sysctl.conf || echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
#sed -i '/net.ipv4.ip_forward/ s/\(.*= \).*/\11/' /etc/sysctl.conf
#systemctl restart network
#### Part 3: parameters and package repositories ####
# install the EPEL repo (optional)
#yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
# time synchronization via ntpdate
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
# add the Kubernetes yum repo (Aliyun mirror)
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# add the Docker CE repo
sudo yum-config-manager \
--add-repo \
https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
#### Part 4: installation ####
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": [
"https://dockerhub.azk8s.cn",
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "${DEPLOY_DIR}/docker"
}
EOF
systemctl daemon-reload
systemctl start docker.service && systemctl enable docker.service
yum install -y --enablerepo="kubernetes" $kubelet $kubeadm $kubectl
systemctl enable kubelet.service && systemctl start kubelet.service
# enable kubectl tab completion (optional)
#yum -y install bash-completion && source /usr/share/bash-completion/bash_completion && source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
# create the cluster (master only)
if [ "$2" == "master" ]; then
echo "等待 kubeadm init 中,时间较久,需要docker pull 镜像包,请耐心等待........"
kubeadm init --apiserver-advertise-address ${LAN_HOST} --image-repository registry.aliyuncs.com/google_containers --kubernetes-version $version --pod-network-cidr=10.244.0.0/16 >> $join_node_msg 2>&1
export KUBECONFIG=/etc/kubernetes/admin.conf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://ghproxy.com/https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
grep -A 1 "kubeadm join" ${join_node_msg}
echo "The full kubeadm init log is stored in ${join_node_msg}"
fi