Installing Kubernetes 1.9 with kubeadm

Until now I had always installed Kubernetes from binaries, starting with manually setting up each service component and later moving to automated deployment with ansible. The binary approach leaves plenty of room for customization and makes it easy to build highly available clusters. kubeadm is the official, simplest installation method, but it does not yet support HA setups, and its images are hosted on Google, so with the network restrictions inside China I had never looked into it. Today I finally tried it by hand. The awkward parts are obtaining the latest rpm packages and the Google-hosted images; here I worked around both with github + dockerhub and a host outside China, in a rather tedious build process. In short: the initial preparation is a hassle, but once the first installation is done, subsequent cluster builds really are quick and simple.


Getting the required rpm packages

Use a server outside China to download the rpm package sources and build the packages yourself; the source lives at https://github.com/kubernetes/release. (The version available via yum is rather old; the Aliyun Kubernetes repo is at https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64.)

root@ip-172-31-21-37:~# git clone https://github.com/kubernetes/release.git
root@ip-172-31-21-37:~# cd release/
root@ip-172-31-21-37:~/release# ls
anago     build        changelog-update    debian  docs              gcb     Gopkg.lock  lib      prin           README-CI.md   README.md  rpm              toolbox
branchff  BUILD.bazel  code-of-conduct.md  defs    find_green_build  gcbmgr  Gopkg.toml  LICENSE  push-build.sh  README-gcb.md  relnotes   script-template  WORKSPACE
root@ip-172-31-21-37:~/release# cd rpm/
root@ip-172-31-21-37:~/release/rpm# ls
10-kubeadm-post-1.8.conf  10-kubeadm-pre-1.8.conf  docker-build.sh  Dockerfile  entry.sh  kubelet.service  kubelet.spec  output
root@ip-172-31-21-37:~/release/rpm# ./docker-build.sh 

After the build finishes, the rpm packages end up in the output directory under the current directory:

root@ip-172-31-21-37:~/release/rpm# cd output/
root@ip-172-31-21-37:~/release/rpm/output# ls
aarch64  armhfp  ppc64le  s390x  x86_64 

If you are installing on CentOS, just bundle up the x86_64 directory and download it to the target machine.
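
For example, a minimal sketch of bundling the packages and pulling them onto the target host (the build-host address is a placeholder for your own server):

# On the build host: bundle the CentOS packages
tar czf kube-rpms.tar.gz -C ~/release/rpm/output x86_64
# On the target host: fetch and unpack
scp root@<build-host>:kube-rpms.tar.gz . && tar xzf kube-rpms.tar.gz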

Getting the Google container images

To obtain the official images, use dockerhub's automated builds as a relay: create a repository on github to hold the Dockerfile that dockerhub will build from; the Dockerfile only needs a FROM line:

FROM gcr.io/google_containers/etcd-amd64:3.1.10

Image list (the image versions can be read from the manifests under /etc/kubernetes/manifests after kubeadm init has run). kube-proxy's image does not appear there, but its version normally matches the other components, so the same version tag can be used directly.

gcr.io/google_containers/etcd-amd64:3.1.10
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2

Other dependency images; for their version numbers see the official docs (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/):

gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 
gcr.io/google_containers/pause-amd64:3.0

Once the dockerhub automated builds complete, the images can be pulled through a registry accelerator and re-tagged locally (a sketch of the loop follows the lists below).

The automatically built images on dockerhub:

zhusl/etcd-amd64:3.1.10
zhusl/kube-apiserver-amd64:v1.9.2
zhusl/kube-controller-manager-amd64:v1.9.2
zhusl/kube-scheduler-amd64:v1.9.2
zhusl/kube-proxy-amd64:v1.9.2
zhusl/k8s-dns-sidecar-amd64:1.14.7
zhusl/k8s-dns-kube-dns-amd64:1.14.7 
zhusl/k8s-dns-dnsmasq-nanny-amd64:1.14.7 
zhusl/pause-amd64:3.0

After re-tagging locally:

gcr.io/google_containers/etcd-amd64:3.1.10
gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
gcr.io/google_containers/kube-proxy-amd64:v1.9.2
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 
gcr.io/google_containers/pause-amd64:3.0
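
A minimal sketch of the pull-and-retag loop over the lists above, assuming the zhusl/* mirrors are reachable through your accelerator:

for img in etcd-amd64:3.1.10 kube-apiserver-amd64:v1.9.2 \
           kube-controller-manager-amd64:v1.9.2 kube-scheduler-amd64:v1.9.2 \
           kube-proxy-amd64:v1.9.2 k8s-dns-sidecar-amd64:1.14.7 \
           k8s-dns-kube-dns-amd64:1.14.7 k8s-dns-dnsmasq-nanny-amd64:1.14.7 \
           pause-amd64:3.0; do
    docker pull zhusl/${img}
    docker tag zhusl/${img} gcr.io/google_containers/${img}
done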

System environment initialization

hostnamectl set-hostname kube.master
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

getenforce
# Add a DNS server
echo nameserver 114.114.114.114>>/etc/resolv.conf
# Set kernel parameters for bridged traffic
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# Possible issue: running sysctl -p may fail with:
sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

# Fix: load the br_netfilter module
modprobe br_netfilter
ls /proc/sys/net/bridge
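
To make the module load survive reboots, it can also be registered with systemd-modules-load (a standard mechanism, sketched here):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf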

Install the rpm packages downloaded earlier:

[root@kube ~]# cd x86_64
[root@kube x86_64]# ls
kubeadm-1.9.0-0.x86_64.rpm  kubectl-1.9.0-0.x86_64.rpm  kubelet-1.9.0-0.x86_64.rpm  kubernetes-cni-0.6.0-0.x86_64.rpm  repodata
[root@kube x86_64]# rpm -ivh *.rpm

Docker also has to be installed by hand; I installed 17.12.0-ce, whose default Cgroup Driver is cgroupfs, while kubelet defaults to systemd, so kubelet's configuration parameters have to be changed to match:

[root@kube ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
......
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
......
ExecStart=/usr/bin/kubelet         
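
After editing the drop-in, reload systemd and make sure kubelet is enabled (standard systemd steps):

systemctl daemon-reload
systemctl enable kubelet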

Starting the installation

Run kubeadm init --pod-network-cidr 172.23.0.0/16. If it fails with connection errors, you can configure an HTTP proxy; I used my own SOCKS5 server, the docker service needs the proxy as well, and local IPs have to be excluded from it, which is all rather fiddly (see the sketch below). --pod-network-cidr must be specified, otherwise the flannel network plugin will fail to start after installation with the error 'node "kube.master" pod cidr not assigned'.
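
For reference, a sketch of that proxy environment for the shell running kubeadm, using the addresses from this setup; the no_proxy list has to cover the host IP plus the service and pod CIDRs, which is exactly what the preflight warnings below complain about:

export http_proxy=http://172.31.8.194:1080/
export https_proxy=http://172.31.8.194:1080/
export no_proxy=127.0.0.1,localhost,10.90.24.159,10.96.0.0/12,172.23.0.0/16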

Postscript: the first time around I had only prepared the four control-plane service images in advance; the proxy was probably only needed because the image set was incomplete, since most of the traffic during installation goes to Google's image registry.

Appendix: how to add a proxy to the docker service:

[root@kube ~]# cat /etc/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/root/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
Environment=HTTP_PROXY=http://172.31.8.194:1080/
Environment=HTTPS_PROXY=http://172.31.8.194:1080/
ExecStart=/root/local/bin/dockerd --log-level=error 
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
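
To keep the proxy from intercepting local and cluster traffic, a NO_PROXY line can be added to the [Service] section alongside the two proxy lines (addresses taken from this setup):

Environment=NO_PROXY=127.0.0.1,localhost,10.90.24.159,10.96.0.0/12,172.23.0.0/16

Then reload and restart docker:

systemctl daemon-reload && systemctl restart docker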


A successful installation looks like this:

  [root@kube ~]# kubeadm  init --pod-network-cidr 172.23.0.0/16
  [init] Using Kubernetes version: v1.9.2
  [init] Using Authorization modes: [Node RBAC]
  [preflight] Running pre-flight checks.
     [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
     [WARNING FileExisting-crictl]: crictl not found in system path
     [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://172.31.8.194:1080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
     [WARNING HTTPProxyCIDR]: connection to "172.23.0.0/16" uses proxy "http://172.31.8.194:1080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
  [preflight] Starting the kubelet service
  [certificates] Generated ca certificate and key.
  [certificates] Generated apiserver certificate and key.
  [certificates] apiserver serving cert is signed for DNS names [kube.master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.90.24.159]
  [certificates] Generated apiserver-kubelet-client certificate and key.
  [certificates] Generated sa key and public key.
  [certificates] Generated front-proxy-ca certificate and key.
  [certificates] Generated front-proxy-client certificate and key.
  [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
  [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
  [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
  [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
  [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
  [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
  [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
  [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
  [init] This might take a minute or longer if the control plane images have to be pulled.
  [apiclient] All control plane components are healthy after 32.502960 seconds
  [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [markmaster] Will mark node kube.master as master by adding a label and a taint
  [markmaster] Master kube.master tainted and labelled with key/value: node-role.kubernetes.io/master=""
  [bootstraptoken] Using token: 77a732.41806f2bfca667e9
  [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: kube-dns
  [addons] Applied essential addon: kube-proxy
  
  Your Kubernetes master has initialized successfully!
  
  To start using your cluster, you need to run the following as a regular user:
  
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  
  You can now join any number of machines by running the following on each node
  as root:
  
    kubeadm join --token 77a732.41806f2bfca667e9 10.90.24.159:6443 --discovery-token-ca-cert-hash sha256:87e3ce011aa73f6e38d81a512b5f0e46847b2a7d6f6bfda9403af513608fa2e9

Create the kubectl config as prompted

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Installing a network plugin

kubeadm init does not install a network plugin by default; you have to add one yourself. The well-known choices include flannel and calico; here I use flannel:

kubectl create -f   https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
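
One caveat: net-conf.json in kube-flannel.yml defaults to Network 10.244.0.0/16, while this cluster was initialized with --pod-network-cidr 172.23.0.0/16, so the manifest should be edited to match before applying. A sketch:

curl -sLO https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
sed -i 's#10.244.0.0/16#172.23.0.0/16#' kube-flannel.yml
kubectl create -f kube-flannel.yml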

Installing the dashboard

I have installed the dashboard many times before with the binary method, so here I simply reuse my old yaml file. The official one works as well, as long as its image is relayed through dockerhub the same way as above.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
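
The dashboard image can be relayed the same way; the mirror repository and tag below are hypothetical placeholders, so check the image actually referenced in kubernetes-dashboard.yaml first:

TAG=v1.8.2   # hypothetical tag, confirm against the yaml
docker pull zhusl/kubernetes-dashboard-amd64:${TAG}
docker tag zhusl/kubernetes-dashboard-amd64:${TAG} gcr.io/google_containers/kubernetes-dashboard-amd64:${TAG}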

The final result

[root@kube ~]# kubectl  get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-kube.master                        1/1       Running   2          1h
kube-system   kube-apiserver-kube.master              1/1       Running   1          1h
kube-system   kube-controller-manager-kube.master     1/1       Running   0          1h
kube-system   kube-dns-6f4fd4bdf-9j8j6                3/3       Running   3          1h
kube-system   kube-flannel-ds-pn8jl                   1/1       Running   1          1h
kube-system   kube-proxy-rtgtv                        1/1       Running   2          1h
kube-system   kube-scheduler-kube.master              1/1       Running   0          1h
kube-system   kubernetes-dashboard-5b9649685d-n2k9k   1/1       Running   2          1h
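
As a final sanity check, the master node should report Ready once flannel is up:

[root@kube ~]# kubectl get nodes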

--EOF--
