# Kubernetes Guide: Building a Cluster from Scratch

In the cloud-computing era, containerization has become a widely adopted technology, and Kubernetes is the leading container orchestration platform: its management capabilities, high availability, and easy scalability have made it popular with enterprises and developers alike. In this article we will build a Kubernetes cluster from scratch and explain the key technical points along the way.

## 1. Environment Preparation

Before starting, prepare the environment. A Kubernetes cluster consists of Master (control-plane) nodes and Worker nodes: a minimal cluster needs at least one Master node (production HA setups typically run three) and at least one Worker node. The hardware and software must meet certain requirements.

Recommended hardware:

- Master node: 2 CPU cores, 2 GB RAM, 40 GB+ disk
- Worker node: 4 CPU cores, 4 GB RAM, 40 GB+ disk

Recommended software:

- OS: CentOS 7.4 or later
- Docker: 18.09.0 or later
- kubeadm: 1.20.1 or later

## 2. Installing Docker

On CentOS, Docker can be installed with the following script:

```bash
$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
$ sudo systemctl start docker
$ sudo systemctl enable docker
```

## 3. Installing kubeadm

Next, install kubeadm, kubelet, and kubectl. The script below first loads the `br_netfilter` module and enables the bridge netfilter sysctls that Kubernetes networking depends on, switches SELinux to permissive mode, configures the Aliyun mirror of the Kubernetes yum repository, and then installs version-pinned 1.20.1 packages:

```bash
# Load the br_netfilter module so the bridge sysctls below exist
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system
# Put SELinux into permissive mode so the kubelet can access host paths
$ sudo setenforce 0
$ sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Configure the Kubernetes yum repository (Aliyun mirror)
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ sudo yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1 --disableexcludes=kubernetes
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
```
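One step the scripts above do not cover is swap: kubeadm's preflight checks fail if swap is enabled on a node. A minimal sketch of one common way to disable it persistently (run on every node; the sed pattern assumes the swap entries in /etc/fstab contain the word `swap`):

```bash
# Turn swap off immediately; kubeadm refuses to initialize with swap enabled
$ sudo swapoff -a
# Comment out swap entries in /etc/fstab so swap stays off after a reboot
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab
```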
## 4. Setting Up the Master Node

When setting up the Master node, first decide on its IP address, then save the following configuration as `kubeadm-config.yaml`:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "{{ Master node IP }}:6443"
apiServer:
  certSANs:
  - "{{ Master node IP }}"
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log
    audit-log-maxage: "30"
    audit-log-maxbackup: "3"
    audit-log-maxsize: "100"
    authorization-mode: Node,RBAC
    enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    runtime-config: api/all=true
    service-node-port-range: 30000-32767
controllerManager:
  extraArgs:
    node-cidr-mask-size: "24"
    node-monitor-grace-period: "30s"
    pod-eviction-timeout: "2m"
    use-service-account-credentials: "true"
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
```

Then run the following on the Master node:

```bash
$ sudo kubeadm init --config kubeadm-config.yaml
```

When it finishes, output similar to the following appears:

```bash
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost] and IPs [192.168.1.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost] and IPs [192.168.1.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 80.001069 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Restarting the kubelet to use the new kubelet configuration
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join {{ Master node IP }}:6443 --token {{ Token }} \
    --discovery-token-ca-cert-hash sha256:{{ CA cert hash }}
```

After initialization completes, run the following script so that a regular (non-root) user can also use kubectl:

```bash
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
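The output above also reminds us to deploy a pod network: until a CNI plugin is installed, nodes remain in the `NotReady` state and the CoreDNS pods stay pending. The `podSubnet` of 10.244.0.0/16 in `kubeadm-config.yaml` matches flannel's default, so flannel is a natural choice here. A minimal sketch, assuming the flannel manifest is still published at the URL below:

```bash
# Deploy flannel as the CNI plugin; its default pod CIDR (10.244.0.0/16)
# matches the podSubnet configured in kubeadm-config.yaml
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Watch until the flannel and CoreDNS pods reach the Running state
$ kubectl get pods -A -w
```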
## 5. Setting Up a Worker Node

To add a Worker node to the cluster, take the join command printed by kubeadm init on the Master node and run it on the Worker node:

```bash
$ sudo kubeadm join {{ Master node IP }}:6443 --token {{ Token }} \
    --discovery-token-ca-cert-hash sha256:{{ CA cert hash }}
```

Once it succeeds, run the following on the Master node:

```bash
$ kubectl get nodes
```

If the newly joined Worker node appears in the output, it has joined the cluster successfully.

At this point we have built a working Kubernetes cluster and added a Worker node to it. In a real production environment, further configuration and tuning are still needed, for example around networking, storage, and high availability; these topics will be explored in future articles.
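One final practical note: the bootstrap token embedded in the join command is only valid for 24 hours by default. If a Worker node is added after the token has expired, a fresh token and a ready-to-use join command can be generated on the Master node; a quick sketch:

```bash
# Create a new bootstrap token and print the complete join command for it
$ kubeadm token create --print-join-command
```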