Deploying a Kubernetes Cluster

Automotive Electronics Technology · Source: 碼農與軟件時代 · Author: 碼農與軟件時代 · 2023-02-15 10:35

I. Cluster Deployment Overview

1. Kubeadm

Kubeadm is a Kubernetes cluster deployment tool: the kubeadm init command creates the master node, and the kubeadm join command adds worker nodes to the cluster.

kubeadm init does roughly the following:

① Preflight checks: verify the system state (Linux cgroups, availability of ports 10250/10251/10252, and so on), print warnings and errors, and abort the kubeadm init run on fatal errors;

② Generate certificates: the certificates are written to /etc/kubernetes/pki for use when accessing the cluster;

③ Generate the YAML manifests for each component;

④ Install the minimal set of add-ons the cluster needs.

Other nodes join the cluster by running kubeadm join with the token produced by kubeadm init. Each node must have kubelet and kubeadm installed first.

For details on the kubeadm init and kubeadm join commands, see:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join

2. Kubelet

kubelet is the component that manages Pods and containers. It runs on every node in the cluster and must be installed directly on the host. During setup, kubeadm drives kubelet to carry out the work of kubeadm init.

3. Kubectl

kubectl is the command-line tool for a Kubernetes cluster. With kubectl you can manage the cluster itself and deploy containerized applications onto it.

II. Cluster Installation

1. Environment

Ubuntu 18.04 LTS, 2 CPU cores, 4 GB RAM, 20 GB disk

Three machines of this spec, with hostnames master, node1, and node2.

2. Master Node Installation

(1) Set the hostname

root@k8s:/# hostnamectl --static set-hostname master
root@k8s:/# hostnamectl
   Static hostname: master
Transient hostname: k8s
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e5c0d0f18ba04c0a8722ab9fff662987
           Boot ID: 74af5268dfe74f23b3dee608ab2afe41
    Virtualization: kvm
  Operating System: Ubuntu 18.04.2 LTS
            Kernel: Linux 4.15.0-122-generic
      Architecture: x86-64

(2) Disable system swap: run the command swapoff -a.
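Note that swapoff -a only lasts until the next reboot; for a permanent change the swap entry in /etc/fstab must be commented out as well. A minimal sketch of that edit, applied here to a throwaway copy of the file (the path /tmp/fstab.demo and its contents are made up for illustration):

```shell
# swapoff -a disables swap immediately but not across reboots; the fstab
# entry must also be commented out. Demonstrated on a scratch copy rather
# than the real /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
# comment out every line that mounts swap
sed -i -e '/ swap /s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

On a real node the same sed expression would be run against /etc/fstab itself, after swapoff -a.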

(3) Install Docker CE

apt-get update
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get -y update
apt-get -y install docker-ce

(4) Install the kubelet, kubeadm, and kubectl tools

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

(The upstream install guide also suggests running apt-mark hold kubelet kubeadm kubectl afterwards, so the packages are not upgraded behind the cluster's back.)

3. Node1 Installation

Repeat steps (1)–(4) of the master installation (setting the hostname to node1).

4. Create the Cluster

(1) Run kubeadm init on the master node

kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16

On success, the output looks like this:

root@k8s:~# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503605 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e16km1.69phwhcdjaulf060
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


Alternatively, if you are the root user, you can run:


  export KUBECONFIG=/etc/kubernetes/admin.conf


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 \\
  --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c

When kubeadm init finishes, it prints the command for joining the cluster; copy it straight to each node you want to add.
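The long sha256 value in the join command is the hash of the cluster CA's public key, and the kubeadm documentation describes how to recompute it from /etc/kubernetes/pki/ca.crt if the init output is lost. A sketch of that derivation, using a throwaway self-signed certificate under /tmp in place of the real CA file:

```shell
# Generate a stand-in CA certificate (on a real master you would use
# /etc/kubernetes/pki/ca.crt instead of /tmp/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout /tmp/ca.key -out /tmp/ca.crt -days 1 2>/dev/null
# --discovery-token-ca-cert-hash is the sha256 of the CA's
# DER-encoded public key
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```

The last line of output is the hex digest that goes after sha256: in the join command.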

The steps above assume everything goes smoothly; in practice you may hit the following error:

Unfortunately, an error has occurred:
      timed out waiting for the condition


    This error is likely caused by:
      - The kubelet is not running
      - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)


  If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'


  Additionally, a control plane component may have crashed or exited when started by the container runtime.
  To troubleshoot, list all containers using your preferred container runtimes CLI.


  Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

The cause:

A cgroup driver mismatch: Docker's Cgroup Driver is cgroupfs, while kubelet expects systemd.

root@k8s:/# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)


Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 7
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-122-generic
 Operating System: Ubuntu 18.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.852GiB
 Name: master
 ID: TNT3:2XIE:OCQQ:EZXX:DOVR:HJ7G:ASDH:XYFE:VZSO:YA5R:O2TU:IUVO
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

The fix: add the following to /etc/docker/daemon.json (create the file if it does not exist):

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Then restart docker:

systemctl restart docker
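If /etc/docker/daemon.json already exists, pasting the snippet over it would discard any existing settings. A sketch of merging the key in instead, run here against a scratch file (/tmp/daemon.json and the mirror URL are made up; on a real node the path is /etc/docker/daemon.json):

```shell
# A pre-existing config with an unrelated setting
cat > /tmp/daemon.json <<'EOF'
{ "registry-mirrors": ["https://mirror.example.com"] }
EOF
# Merge in the systemd cgroup driver without discarding other keys
python3 - <<'PY'
import json
path = "/tmp/daemon.json"
with open(path) as f:
    cfg = json.load(f)
cfg["exec-opts"] = ["native.cgroupdriver=systemd"]
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PY
cat /tmp/daemon.json
```

After editing the real file, systemctl restart docker applies the change as in the step above.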

Running kubeadm init again then fails with:

[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
  [ERROR Port-6443]: Port 6443 is in use
  [ERROR Port-10259]: Port 10259 is in use
  [ERROR Port-10257]: Port 10257 is in use
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
  [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
  [ERROR Port-10250]: Port 10250 is in use
  [ERROR Port-2379]: Port 2379 is in use
  [ERROR Port-2380]: Port 2380 is in use
  [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

The workaround used here was to find and kill the leftover processes by hand (kubeadm reset is the supported way to tear down a previous kubeadm init run and clean up its state):

root@k8s:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      3289/kube-controlle 
tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      3311/kube-scheduler 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      601/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      948/sshd            
tcp        0      0 0.0.0.0:39451           0.0.0.0:*               LISTEN      1052/xfce4-session  
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2593/kubelet        
tcp        0      0 30.0.1.180:2379         0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 30.0.1.180:2380         0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      3300/etcd           
tcp        0      0 0.0.0.0:5901            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.1:36431         0.0.0.0:*               LISTEN      2593/kubelet        
tcp6       0      0 :::21                   :::*                    LISTEN      701/vsftpd          
tcp6       0      0 :::22                   :::*                    LISTEN      948/sshd            
tcp6       0      0 :::10250                :::*                    LISTEN      2593/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      3349/kube-apiserver 
tcp6       0      0 :::35407                :::*                    LISTEN      1052/xfce4-session  
root@k8s:~# kill -9 3349 3311 3289 2593 3300 
root@k8s:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:6001            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      601/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      948/sshd            
tcp        0      0 0.0.0.0:39451           0.0.0.0:*               LISTEN      1052/xfce4-session  
tcp        0      0 0.0.0.0:5901            0.0.0.0:*               LISTEN      998/Xtightvnc       
tcp6       0      0 :::21                   :::*                    LISTEN      701/vsftpd          
tcp6       0      0 :::22                   :::*                    LISTEN      948/sshd            
tcp6       0      0 :::35407                :::*                    LISTEN      1052/xfce4-session

After this cleanup, kubeadm init runs to completion successfully.

# At this point the master node's status is NotReady.
root@k8s:~# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   6m13s   v1.23.1


# The images kubeadm pulls
root@k8s:~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6


# coredns is Pending because the network plugin has not been installed yet
root@k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ghvmj          0/1     Pending   0          14m
kube-system   coredns-6d8c4cb4d-p45mv          0/1     Pending   0          14m
kube-system   etcd-master                      1/1     Running   0          15m
kube-system   kube-apiserver-master            1/1     Running   0          15m
kube-system   kube-controller-manager-master   1/1     Running   0          15m
kube-system   kube-proxy-xswwz                 1/1     Running   0          14m
kube-system   kube-scheduler-master            1/1     Running   0          15m


root@k8s:~# journalctl -f -u kubelet.service
-- Logs begin at Sun 2019-05-05 16:25:08 CST. --
Dec 30 20:44:59 master kubelet[6501]: E1230 16:44:59.669150    6501 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

(2) Install the flannel network plugin on the master node

Run the following to install the flannel plugin:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

Once flannel is installed, the master node becomes Ready.

flannel installation log:
root@k8s:~# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-12-30 20:47:48--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5177 (5.1K) [text/plain]
Saving to: 'kube-flannel.yml'


kube-flannel.yml                       100%[===========================================================================>]   5.06K  --.-KB/s    in 0s      


2021-12-30 20:47:49 (21.4 MB/s) - 'kube-flannel.yml' saved [5177/5177]


root@k8s:~# kubectl apply -f  kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


root@k8s:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          25m
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          25m
kube-system   etcd-master                      1/1     Running   0          25m
kube-system   kube-apiserver-master            1/1     Running   0          25m
kube-system   kube-controller-manager-master   1/1     Running   0          25m
kube-system   kube-flannel-ds-ql282            1/1     Running   0          66s
kube-system   kube-proxy-xswwz                 1/1     Running   0          25m
kube-system   kube-scheduler-master            1/1     Running   0          25m


root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   26m   v1.23.1

(3) Join the worker nodes to the cluster

Run the join command on node1:

kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c

This command appears in the master's successful init log; it can also be regenerated on the master with:

kubeadm token create --print-join-command

A successful join log:

root@k8s:~# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 20:19:34.532570   26262 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The node list now shows:

root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   61m   v1.23.1
node1    Ready    <none>                 13m   v1.23.1


root@k8s:~# kubectl label nodes node1 node-role.kubernetes.io/node=
node/node1 labeled
root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   67m   v1.23.1
node1    Ready    node                   18m   v1.23.1
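The kubectl label command above changes the ROLES column because kubectl derives it from labels of the form node-role.kubernetes.io/<role>. A small sketch of that derivation (the label list is made up for illustration):

```shell
python3 - <<'PY'
# kubectl shows a node's ROLES by collecting the <role> suffix of every
# node-role.kubernetes.io/<role> label; with none present it prints <none>.
labels = [
    "kubernetes.io/hostname=node1",    # ordinary label: ignored
    "node-role.kubernetes.io/node=",   # added by the kubectl label command
]
prefix = "node-role.kubernetes.io/"
roles = [l[len(prefix):].split("=", 1)[0] for l in labels if l.startswith(prefix)]
print(",".join(roles) or "<none>")     # -> node
PY
```

The label is purely cosmetic; it does not change scheduling behavior by itself.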

Possible issue: the cgroup-driver problem from the master setup (problem 1 above) can occur here as well; apply the same fix.

(4) Repeat the same steps on node2

root@k8s:/# kubeadm join 30.0.1.180:6443 --token e16km1.69phwhcdjaulf060 --discovery-token-ca-cert-hash sha256:2d3f77ae7598fb7709655b381af5fda8896d5a97cdf7176ff74a2aa25fca271c 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1230 23:22:10.274581   28114 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


root@k8s:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   7h2m    v1.23.1
node1    Ready    node                   6h13m   v1.23.1
node2    Ready    node                   3m13s   v1.23.1

III. Cluster Details

1. Where the Kubernetes Components Run

# Most Kubernetes components run as Pods
root@k8s:~# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6d8c4cb4d-ghvmj          1/1     Running   0          17h   10.244.0.2   master   <none>           <none>
kube-system   coredns-6d8c4cb4d-p45mv          1/1     Running   0          17h   10.244.0.3   master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-8qt6p            1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-flannel-ds-ql282            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-flannel-ds-zkt47            1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-pb9gn                 1/1     Running   0          10h   30.0.1.47    node2    <none>           <none>
kube-system   kube-proxy-xswwz                 1/1     Running   0          17h   30.0.1.180   master   <none>           <none>
kube-system   kube-proxy-zdfp5                 1/1     Running   0          16h   30.0.1.160   node1    <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          17h   30.0.1.180   master   <none>           <none>


# kubelet is installed directly on the host and does not run as a Docker container
root@k8s:~# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Thu 2021-12-30 16:23:24 CST; 17h ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 6501 (kubelet)
    Tasks: 16 (limit: 4702)
   CGroup: /system.slice/kubelet.service
           └─6501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/li


root@k8s:~# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED        STATUS        PORTS     NAMES
28bfaeeaadf1   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours             k8s_coredns_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
57a535a41123   a4ca41631cc7                                        "/coredns -conf /etc…"   17 hours ago   Up 17 hours             k8s_coredns_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
7be45271357a   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_coredns-6d8c4cb4d-p45mv_kube-system_4ce03d3c-1660-4975-8450-408515ec6a02_0
79776dc797f4   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_coredns-6d8c4cb4d-ghvmj_kube-system_1a67722e-a15f-4bf0-bbd7-e2af542d2621_0
424b5047009f   e6ea68648f0c                                        "/opt/bin/flanneld -…"   17 hours ago   Up 17 hours             k8s_kube-flannel_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
51bea3cfeef7   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 17 hours ago   Up 17 hours             k8s_POD_kube-flannel-ds-ql282_kube-system_9cb2439b-e8f4-422f-a72d-83370e75043e_0
e6149ade3a29   b46c42588d51                                        "/usr/local/bin/kube…"   18 hours ago   Up 18 hours             k8s_kube-proxy_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
3c365b2342a0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-proxy-xswwz_kube-system_12dac07f-e07e-4eff-becc-7b40a92f3adb_0
b60a3b02f427   25f8c7f3da61                                        "etcd --advertise-cl…"   18 hours ago   Up 18 hours             k8s_etcd_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
abd1e3377560   b6d7abedde39                                        "kube-apiserver --ad…"   18 hours ago   Up 18 hours             k8s_kube-apiserver_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
df5e2a226999   f51846a4fd28                                        "kube-controller-man…"   18 hours ago   Up 18 hours             k8s_kube-controller-manager_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0
b45d17ab969f   71d575efe628                                        "kube-scheduler --au…"   18 hours ago   Up 18 hours             k8s_kube-scheduler_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
3cf0d75ad0f0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-apiserver-master_kube-system_df535ce9e2ccfb931f8e46a9b80a6218_0
6b447aa2fd93   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_etcd-master_kube-system_5d83471f981b1644e30c11cc642c68f7_0
f7f9a3cd677f   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-scheduler-master_kube-system_77a51208064a0e9b17209ee62638dfcd_0
20e0b291d166   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 18 hours ago   Up 18 hours             k8s_POD_kube-controller-manager-master_kube-system_85ff8159d8c894c53981716f8927f187_0

2. Network Segments: each Kubernetes node is allocated one subnet from the cluster's pod network.

(1) Master node

root@k8s:~# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true

(2) Node1

root@k8s:~# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true

(3) Node2

root@k8s:/# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
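The three FLANNEL_SUBNET values above are consecutive /24s carved out of the 10.244.0.0/16 that was passed to kubeadm init as --pod-network-cidr. A sketch of that carving with python3's ipaddress module:

```shell
python3 - <<'PY'
import ipaddress
# the value given to kubeadm init --pod-network-cidr
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
# flannel hands each node one /24 out of this range
for subnet in list(pod_cidr.subnets(new_prefix=24))[:3]:
    print(subnet)   # 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24
PY
```

This is why pods on master get 10.244.0.x addresses while pods on node1 and node2 get 10.244.1.x and 10.244.2.x.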

3. Kubernetes Node Processes

(1) Master node

root@k8s:~# ps -el | grep kube
4 S     0  6224  6152  0  80   0 - 188636 futex_ ?       00:05:00 kube-scheduler
4 S     0  6275  6196  1  80   0 - 206354 ep_pol ?       00:23:02 kube-controller
4 S     0  6287  6181  5  80   0 - 278080 futex_ ?       01:19:40 kube-apiserver
4 S     0  6501     1  3  80   0 - 487736 futex_ ?       00:46:38 kubelet
4 S     0  6846  6818  0  80   0 - 187044 futex_ ?       00:00:26 kube-proxy

(2) Worker nodes

# node1
root@k8s:~# ps -el | grep kube
4 S     0 22869 22845  0  80   0 - 187172 futex_ ?       00:00:23 kube-proxy
4 S     0 26395     1  2  80   0 - 505977 futex_ ?       00:28:10 kubelet
# node2
root@k8s:/# ps -el | grep kube
4 S     0 28227     1  1  80   0 - 487480 futex_ ?       00:17:26 kubelet
4 S     0 28724 28696  0  80   0 - 187044 futex_ ?       00:00:17 kube-proxy