Basic operations:
Tear down the cluster (see the cleanup note after this list):
kubeadm reset
List cluster tokens:
kubeadm token list
Check node status:
kubectl get nodes
Check pod status:
kubectl get pod --all-namespaces
Inspect a specific pod:
kubectl describe pod coredns-fb8b8dccf-kp9cl --namespace=kube-system
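A note on kubeadm reset: it does not clean up CNI configuration, iptables rules, or IPVS tables by itself (kubeadm prints a reminder to this effect). A minimal manual cleanup, assuming the flannel/CNI setup used in this walkthrough, looks like:
rm -rf /etc/cni/net.d                                              # remove CNI config left behind by flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush iptables rules
ipvsadm --clear                                                    # only needed if kube-proxy ran in IPVS mode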
Initialize the cluster
Make sure the kubelet service is running, then run kubeadm init:
systemctl start kubelet
[root@node1 ~]# kubeadm init --apiserver-advertise-address 172.17.110.73 --pod-network-cidr=172.18.0.0/16
I0612 14:10:38.729699   27845 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0612 14:10:38.729848   27845 version.go:97] falling back to the local client version: v1.14.3
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [172.17.110.73 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [172.17.110.73 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.110.73]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.003229 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lbfepc.95kw196hfz4dvd4z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.110.73:6443 --token lbfepc.95kw196hfz4dvd4z \
    --discovery-token-ca-cert-hash sha256:540630165e5333ee0c67a7f5f43591c67f52a38631e9b8f342e196d052023a29
[root@node1 ~]#
Configure kubectl
For root:
[root@node1 ~]# mkdir .kube
[root@node1 ~]# cp /etc/kubernetes/admin.conf ~/.kube/config
[root@node1 ~]# kubectl completion bash > .kubectlrc
[root@node1 ~]# echo "source ~/.kubectlrc" >> .bashrc
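As an alternative for the root user (a standard kubeadm convention, not part of the original session), kubectl can also be pointed at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf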
For a regular user (here zky; note the config must be copied into the user's own home directory and chown'd to them):
[root@node1 ~]# useradd zky
[root@node1 ~]# mkdir -p /home/zky/.kube
[root@node1 ~]# cp /etc/kubernetes/admin.conf /home/zky/.kube/config
[root@node1 ~]# chown -R zky:zky /home/zky/.kube
[root@node1 ~]# su - zky
[zky@node1 ~]$ kubectl completion bash > .kubectlrc
[zky@node1 ~]$ echo "source ~/.kubectlrc" >> .bashrc
[zky@node1 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
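To watch the flannel DaemonSet pods come up before moving on (app=flannel is the pod label used by the upstream kube-flannel.yml manifest; verify against your copy):
kubectl -n kube-system get pods -l app=flannel -w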
[zky@node1 ~]$ kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   5m14s   v1.14.3
[zky@node1 ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-jdvwq         1/1     Running   0          6m24s
kube-system   coredns-fb8b8dccf-vk99q         1/1     Running   0          6m24s
kube-system   etcd-node1                      1/1     Running   0          5m27s
kube-system   kube-apiserver-node1            1/1     Running   0          5m31s
kube-system   kube-controller-manager-node1   1/1     Running   0          5m44s
kube-system   kube-flannel-ds-amd64-q4vn5     1/1     Running   0          2m24s
kube-system   kube-proxy-cnfcj                1/1     Running   0          6m24s
kube-system   kube-scheduler-node1            1/1     Running   0          5m46s
[zky@node1 ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-jdvwq         1/1     Running   0          4m56s   172.18.0.2      node1   <none>           <none>
kube-system   coredns-fb8b8dccf-vk99q         1/1     Running   0          4m56s   172.18.0.4      node1   <none>           <none>
kube-system   etcd-node1                      1/1     Running   0          3m59s   172.17.110.73   node1   <none>           <none>
kube-system   kube-apiserver-node1            1/1     Running   0          4m3s    172.17.110.73   node1   <none>           <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          4m16s   172.17.110.73   node1   <none>           <none>
kube-system   kube-flannel-ds-amd64-q4vn5     1/1     Running   0          56s     172.17.110.73   node1   <none>           <none>
kube-system   kube-proxy-cnfcj                1/1     Running   0          4m56s   172.17.110.73   node1   <none>           <none>
kube-system   kube-scheduler-node1            1/1     Running   0          4m18s   172.17.110.73   node1   <none>           <none>
[zky@node1 ~]$
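Note from the init output above that the master carries the node-role.kubernetes.io/master:NoSchedule taint, so ordinary workloads will not be scheduled on node1. For a single-node test cluster you can remove it with the standard kubectl command below (skip this if you will add worker nodes as in the next steps):
kubectl taint nodes --all node-role.kubernetes.io/master-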
Load the required images on the slave node
docker load < coredns-1.3.1.tar
docker load < etcd-3.3.10.tar
docker load < kube-apiserver-1.14.3.tar
docker load < kube-controller-manager-1.14.3.tar
docker load < kube-proxy-1.14.3.tar
docker load < kube-scheduler-1.14.3.tar
docker load < pause-3.1.tar
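Equivalently, assuming all the image tarballs sit in the current directory, a one-line loop does the same:
for f in *.tar; do docker load < "$f"; done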
Join the slave node to the k8s cluster
Copy the kubectl config to the slave node:
rsync -av --progress /home/zky/.kube zky@node2:~/
The --token and --discovery-token-ca-cert-hash values below come from the kubeadm init output on the master node.
[root@node2 ~]# kubeadm join 172.17.110.73:6443 --token lbfepc.95kw196hfz4dvd4z \
>     --discovery-token-ca-cert-hash sha256:540630165e5333ee0c67a7f5f43591c67f52a38631e9b8f342e196d052023a29
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node2 ~]#
[root@node2 ~]# su - zky
Last login: Wed Jun 12 14:22:32 CST 2019 on pts/0
[zky@node2 ~]$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   16m   v1.14.3
node2   Ready    <none>   39s   v1.14.3
[zky@node2 ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-jdvwq         1/1     Running   0          16m
kube-system   coredns-fb8b8dccf-vk99q         1/1     Running   0          16m
kube-system   etcd-node1                      1/1     Running   0          15m
kube-system   kube-apiserver-node1            1/1     Running   0          15m
kube-system   kube-controller-manager-node1   1/1     Running   0          16m
kube-system   kube-flannel-ds-amd64-lsxqz     1/1     Running   0          51s
kube-system   kube-flannel-ds-amd64-q4vn5     1/1     Running   0          12m
kube-system   kube-proxy-bn4sx                1/1     Running   0          51s
kube-system   kube-proxy-cnfcj                1/1     Running   0          16m
kube-system   kube-scheduler-node1            1/1     Running   0          16m
[zky@node2 ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-jdvwq         1/1     Running   0          16m   172.18.0.2      node1   <none>           <none>
kube-system   coredns-fb8b8dccf-vk99q         1/1     Running   0          16m   172.18.0.4      node1   <none>           <none>
kube-system   etcd-node1                      1/1     Running   0          15m   172.17.110.73   node1   <none>           <none>
kube-system   kube-apiserver-node1            1/1     Running   0          16m   172.17.110.73   node1   <none>           <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          16m   172.17.110.73   node1   <none>           <none>
kube-system   kube-flannel-ds-amd64-lsxqz     1/1     Running   0          56s   172.17.110.75   node2   <none>           <none>
kube-system   kube-flannel-ds-amd64-q4vn5     1/1     Running   0          12m   172.17.110.73   node1   <none>           <none>
kube-system   kube-proxy-bn4sx                1/1     Running   0          56s   172.17.110.75   node2   <none>           <none>
kube-system   kube-proxy-cnfcj                1/1     Running   0          16m   172.17.110.73   node1   <none>           <none>
kube-system   kube-scheduler-node1            1/1     Running   0          16m   172.17.110.73   node1   <none>           <none>
[zky@node2 ~]$
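The <none> under ROLES for node2 is purely cosmetic. If you want kubectl get nodes to show a role for the worker, you can label it yourself (optional, not part of the original walkthrough):
kubectl label node node2 node-role.kubernetes.io/worker=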
View the kubelet service logs
[root@node1 ~]# journalctl -f -u kubelet
-- Logs begin at Wed 2019-06-12 11:47:24 CST. --
Jun 12 13:25:37 node1 kubelet[22239]: E0612 13:25:37.789564   22239 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "4acac8a50e5fb35f50ea0df798f2a288623f98c9b367c826cba3d88b88389b87" network for pod "coredns-fb8b8dccf-z9w4s": NetworkPlugin cni failed to set up pod "coredns-fb8b8dccf-z9w4s_kube-system" network: open /run/flannel/subnet.env: no such file or directory
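The "open /run/flannel/subnet.env: no such file or directory" error above typically means the flannel pod has not started (or has not been deployed) on that node, so CNI cannot learn its subnet. A quick check, assuming flannel is deployed as earlier in this walkthrough:
ls /run/flannel/subnet.env                               # present once flannel has started on this node
kubectl -n kube-system get pods -o wide | grep flannel   # confirm a flannel pod is Running on the node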
Adding nodes to the cluster after the kubeadm-generated token has expired
Reference: https://blog.csdn.net/mailjoin/article/details/79686934
By default a token is valid for 24 hours; once it expires it can no longer be used. The fix is as follows.
1. Generate a new token and compute the sha256 hash of the CA certificate (see the token-creation note after the hash command below):
[root@walker-1 kubernetes]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538
[root@walker-1 kubernetes]#
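The original omits the command that actually mints the new token; the standard one is kubeadm token create. Recent kubeadm versions can also print the complete join command in one step:
kubeadm token create                        # prints a fresh token such as aa78f6.8b4cafc8ed26c34f
kubeadm token create --print-join-command   # prints the full 'kubeadm join ...' line, hash included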
2. Join the k8s cluster:
[root@walker-4 kubernetes]# kubeadm join 172.16.6.79:6443 --token aa78f6.8b4cafc8ed26c34f \
    --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 \
    --skip-preflight-checks
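Note that --skip-preflight-checks was deprecated and later removed from kubeadm; on current versions the equivalent, if you really want to skip the checks, is:
kubeadm join 172.16.6.79:6443 --token aa78f6.8b4cafc8ed26c34f \
    --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 \
    --ignore-preflight-errors=all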