Container Orchestration with Kubernetes (K8s)

K8s Overview

Advantages of container deployment: easy to deploy, no dependence on the underlying environment, and upgrades are done by swapping images

  • Kubernetes is essentially a container orchestration tool, developed in Go (golang)

Master (control-plane) node: kube-apiserver (the API/request entry point), kube-scheduler (scheduler), kube-controller-manager (controller manager), etcd (distributed key-value store)
Worker node: kubelet (node agent that makes sure containers run inside Pods), kube-proxy (network proxy providing a unified entry point for a group of containers)

In Kubernetes, the core component responsible for managing the container lifecycle is the kubelet.

K8s Installation and Deployment

1. Install from source packages

2. Deploy a cluster with kubeadm

Creating a cluster with kubeadm | Kubernetes

CentOS 7.9
## https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/
--------------------- Configure the base environment on ALL hosts (master shown here as the example)
[root@node1 ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> overlay
> br_netfilter
> EOF
[root@master ~]# 
[root@master ~]# sudo modprobe overlay
[root@master ~]# sudo modprobe br_netfilter
## Set the required sysctl parameters; they persist across reboots
[root@master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables  = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.ipv4.ip_forward                 = 1
> EOF
## Apply the sysctl parameters without rebooting
[root@master ~]# sudo sysctl --system
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                155432  1 br_netfilter
[root@master ~]# lsmod | grep overlay
overlay                91659  0 
## Check the OS version
[root@master ~]# cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)
## Add the container runtime repo (on CentOS, use the Aliyun mirror)
[root@master ~]# yum install -y yum-utils
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  docker-ce.repo  epel.repo
## Install the container runtime
[root@master ~]# yum install  containerd.io -y
[root@master ~]# containerd config default > /etc/containerd/config.toml
[root@master ~]# vim /etc/containerd/config.toml
# Comment out the default sandbox image and point it at the Aliyun mirror
#    sandbox_image = "registry.k8s.io/pause:3.6"
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Find this line and change it to true
    SystemdCgroup = true
[root@master ~]# systemctl enable containerd --now
## The service should show as running
[root@master ~]# systemctl status containerd 
[root@master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
> EOF
## Check the base environment
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        237M        6.0G        492K        1.1G        6.9G
Swap:            0B          0B          0B
[root@master ~]# cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Fri Jun 28 04:16:23 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c8b5b2da-5565-4dc1-b002-2a8b07573e22 /                       ext4    defaults        1 1
[root@master ~]# netstat -tunlp |grep 6443
[root@master ~]# getenforce 
Disabled
## Install kubeadm, kubelet and kubectl
[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@node1 ~]# systemctl enable kubelet --now
--------------------------- The following is executed on the master only
## Run the initialization (--apiserver-advertise-address is the master's IP address)
[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.88.1 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=172.10.0.0/12 \
> --pod-network-cidr=10.10.0.0/16 \
> --ignore-preflight-errors=all 
I0722 15:36:54.413254   12545 version.go:256] remote version is much newer: v1.33.3; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.15
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0722 15:37:11.464233   12545 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [172.0.0.1 192.168.88.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.88.1 127.0.0.1 ::1]
"/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.1:6443 --token 91kaxu.trpl8qwjaumnc910 \
	--discovery-token-ca-cert-hash sha256:fdd6b2c0f3e0ec81b3d792c34d925b3c688147d7a87b0993de050460f19adec5 
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
----------------------------- On the node machines, join them to the master
[root@node1 ~]# kubeadm join 192.168.88.1:6443 --token 91kaxu.trpl8qwjaumnc910 \
> --discovery-token-ca-cert-hash sha256:fdd6b2c0f3e0ec81b3d792c34d925b3c688147d7a87b0993de050460f19adec5
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node1" could not be reached
	[WARNING Hostname]: hostname "node1": lookup node1 on 100.100.2.138:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]# 
----------------------------- Check the status on the master node
[root@master ~]# kubectl get node 
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   96s   v1.28.15
node1    NotReady   <none>          22s   v1.28.15
node2    NotReady   <none>          7s    v1.28.15
## Upload the network plugin manifest (Calico)
[root@master ~]# ls
20250621calico.yaml
## Move the network plugin manifest into place
[root@master ~]# mkdir k8s/calico -p 
[root@master ~]# cd k8s/calico/
[root@master calico]# mv /root/20250621calico.yaml   .
[root@master calico]# ls
20250621calico.yaml
[root@master calico]# kubectl create -f 20250621calico.yaml 
## Check the node status
[root@master ~]# kubectl get node 
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   5m46s   v1.28.15
node1    Ready    <none>          4m32s   v1.28.15
node2    Ready    <none>          4m17s   v1.28.15
## Check the pods in the system namespace
[root@master calico]# watch kubectl get pod -n kube-system
... output as shown below:
Every 2.0s: kubectl get pod -n kube-system                                                                                  Tue Jul 22 22:18:26 2025

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fcd5cd66f-gcv2q   1/1     Running   0          6h36m
calico-node-bbqnz                          1/1     Running   0          6h36m
calico-node-ls7gm                          1/1     Running   0          6h36m
calico-node-n6fz5                          1/1     Running   0          6h36m
coredns-66f779496c-jnc4h                   1/1     Running   0          6h40m
coredns-66f779496c-x79tt                   1/1     Running   0          6h40m
etcd-master                                1/1     Running   0          6h40m
kube-apiserver-master                      1/1     Running   0          6h40m
kube-controller-manager-master             1/1     Running   0          6h40m
kube-proxy-6jpfs                           1/1     Running   0          6h39m
kube-proxy-6mxx6                           1/1     Running   0          6h40m
kube-proxy-cn26w                           1/1     Running   0          6h39m
kube-scheduler-master                      1/1     Running   0          6h40m
## Describe a pod for detailed information, which helps with troubleshooting
[root@master ~]# kubectl describe pod kube-proxy-vfdmh -n kube-system
[root@master ~]# # When all nodes are Ready, the cluster is installed correctly!
[root@master ~]# 
[root@master ~]# # End
 Notes:
## Initialize the cluster (**run on the master node only**)
[root@master containerd]# kubeadm init \
--apiserver-advertise-address=10.38.102.71 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.26.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
## Note: --apiserver-advertise-address=10.38.102.71 should be replaced with your own master address
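The same settings can also be expressed as a kubeadm configuration file and passed with kubeadm init --config. A minimal sketch under that assumption; the file name kubeadm-config.yaml is illustrative and the values simply mirror the flags above:

# kubeadm-config.yaml - illustrative config-file equivalent of the flags above
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.38.102.71            # replace with your own master address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16

Running kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=all should then be equivalent to the long command line, and the file can be kept in version control.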
## Seeing this line means the initialization succeeded
Your Kubernetes control-plane has initialized successfully!

[root@master containerd]# kubectl get nodes
E0702 02:26:32.034057    8125 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
## Declare the environment variable as the error suggests
[root@master containerd]# export KUBECONFIG=/etc/kubernetes/admin.conf
## Make it persistent by adding it to /etc/profile
[root@master containerd]# vim /etc/profile
[root@master containerd]# tail -1 /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
## Check the nodes
[root@master containerd]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m35s   v1.26.3
## After a successful init, the kubelet service runs by default
[root@master containerd]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2025-07-02 02:24:44 EDT; 4min 19s ago
     Docs: https://kubernetes.io/docs/
## Check the pods, limited to the system namespace
[root@master containerd]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5bbd96d687-hpnxw         0/1     Pending   0          5m24s
coredns-5bbd96d687-rfq5n         0/1     Pending   0          5m24s
etcd-master                      1/1     Running   0          5m38s
kube-apiserver-master            1/1     Running   0          5m38s
kube-controller-manager-master   1/1     Running   0          5m38s
kube-proxy-dhsn5                 1/1     Running   0          5m24s
kube-scheduler-master            1/1     Running   0          5m38s
## Install a network plugin (calico or flannel)
[root@master containerd]# wget http://manongbiji.oss-cn-beijing.aliyuncs.com/ittailkshow/k8s/download/calico.yaml
# A successful download looks like this...
HTTP request sent, awaiting response... 200 OK
Length: 239997 (234K) [text/yaml]
Saving to: ‘calico.yaml’
100%[===============================================================================================>] 239,997     --.-K/s   in 0.06s

2025-07-02 02:36:42 (4.03 MB/s) - ‘calico.yaml’ saved [239997/239997]

[root@master containerd]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# A successful download looks like this...
HTTP request sent, awaiting response... 200 OK
Length: 4415 (4.3K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[===============================================================================================>] 4,415        235B/s   in 19s

2025-07-02 02:38:53 (235 B/s) - ‘kube-flannel.yml’ saved [4415/4415]

[root@master containerd]# ls
calico.yaml  config.toml  config.toml.bak  kube-flannel.yml
[root@master containerd]# mv *.yml /root
[root@master containerd]# mv *.yaml /root
[root@master containerd]# ll
total 12
-rw-r--r--. 1 root root 7074 Jul  2 02:18 config.toml
-rw-r--r--. 1 root root  886 Jun  5  2024 config.toml.bak
## Apply the manifests
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl apply -f kube-flannel.yml
## Check the pods in the system namespace
[root@master ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS              RESTARTS   AGE
calico-kube-controllers-6bd6b69df9-gpt7z   0/1     ContainerCreating   0          80s
calico-node-5cnrq                          0/1     Init:2/3            0          81s
calico-typha-77fc8866f5-v764n              0/1     Pending             0          80s
coredns-5bbd96d687-hpnxw                   0/1     ContainerCreating   0          17m
coredns-5bbd96d687-rfq5n                   0/1     ContainerCreating   0          17m
etcd-master                                1/1     Running             0          17m
kube-apiserver-master                      1/1     Running             0          17m
kube-controller-manager-master             1/1     Running             0          17m
kube-proxy-dhsn5                           1/1     Running             0          17m
kube-scheduler-master                      1/1     Running             0          17m
## Check the nodes
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   18m   v1.26.3
 Cluster Management Commands
## View node information
[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   106m   v1.26.3
node1    Ready    <none>          53m    v1.26.3
node2    Ready    <none>          49m    v1.26.3
## View detailed node information
[root@master ~]# kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master   Ready    control-plane   108m   v1.26.3   10.38.102.71   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.33
node1    Ready    <none>          55m    v1.26.3   10.38.102.72   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.33
node2    Ready    <none>          51m    v1.26.3   10.38.102.73   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.33
## Help for kubectl get
[root@master ~]# kubectl get -h
## Help for kubectl describe
[root@master ~]# kubectl describe -h
## Describe a specific node in detail
[root@master ~]# kubectl describe node master
## View all pods in the system namespace
[root@master ~]# kubectl get pod -n kube-system
## View pods running in all namespaces
[root@master ~]# kubectl get pod -A 
## View detailed info for pods running in all namespaces
[root@master ~]# kubectl get pod -A -o wide

Core Cluster Concepts

  • Pod: the smallest unit of scheduling and management (one Pod holds one or more containers)
  • Service: records Pod information, keeps track of the Pods it fronts, load-balances across them, and exposes a stable entry point; users access Pods through the Service (Pod IPs change, so a Service provides a stable interface for a group of Pods; see the sketch after this list)
  • Label: key/value tags attached to K8s resource objects
  • Label selector: selects resources by label (this is how a Service picks its Pods)
  • Replication controller: keeps the number of Pod replicas at the user's desired value (controls the Pod count)
  • Replication controller manager: a management component that watches the various controllers
  • Scheduler: receives requests via the api-server and decides which K8s node a Pod runs on (controls where a Pod runs)
  • DNS: resolves resource names inside the cluster so resources can be reached by name (in-cluster name resolution)
  • Namespace: a very important K8s resource, used to isolate resources between multiple environments or tenants (most common resource objects live inside a namespace)
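A minimal sketch of how a Service uses a label selector to front a group of Pods; the names (web) and the image tag are illustrative, not taken from the notes above:

# web-demo.yaml - illustrative Deployment plus Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                 # every Pod replica carries this label
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # the Service picks Pods by this label and load-balances across them
  ports:
  - port: 80
    targetPort: 80

Clients inside the cluster reach the Pods through the stable Service name and ClusterIP, so it does not matter that individual Pod IPs change.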

Introduction to Resource Objects

 

Stateless service: all instances are equivalent to one another
Stateful service: instances are not interchangeable (there are primary/replica roles) and data is persisted

-------------------------------------- Core concept: namespaces
## List all current namespaces
# System pods live in the system namespace; if no namespace is specified, the default namespace is used
[root@master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   100m
kube-flannel      Active   83m
kube-node-lease   Active   100m
kube-public       Active   100m
kube-system       Active   100m
# Short form: list all namespaces
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   101m
kube-flannel      Active   84m
kube-node-lease   Active   101m
kube-public       Active   101m
kube-system       Active   101m
## Create a namespace named wll (it can also be created from a YAML file; a sketch follows below)
[root@master ~]# kubectl create ns wll
namespace/wll created
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   102m
kube-flannel      Active   85m
kube-node-lease   Active   102m
kube-public       Active   102m
kube-system       Active   102m
wll               Active   2s
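As noted above, the same namespace can also be created declaratively from a manifest; a minimal sketch (the file name is illustrative):

# ns-wll.yaml - declarative equivalent of `kubectl create ns wll`
apiVersion: v1
kind: Namespace
metadata:
  name: wll

Applying it with kubectl apply -f ns-wll.yaml creates the same namespace, and other resources can then set metadata.namespace: wll to live inside it.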
## Delete the namespace named wll
[root@master ~]# kubectl delete ns wll
namespace "wll" deleted
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   103m
kube-flannel      Active   86m
kube-node-lease   Active   103m
kube-public       Active   103m
kube-system       Active   103m

------------------------------------ Core concept: labels
A label is a key/value pair bound to a K8s resource object; labels can be defined along multiple dimensions. Within a single resource object, keys must be unique and cannot repeat.
## View node label information
[root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES           AGE    VERSION   LABELS
master   Ready    control-plane   112m   v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    <none>          59m    v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          55m    v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
## Add a label to node2 (the output confirms node2 was labeled)
[root@master ~]# kubectl label node node2 env=test
node/node2 labeled
## View the labels of a specific node
[root@master ~]# kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   63m   v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux
## Show only the specified label as a column
[root@master ~]# kubectl get nodes -L env
NAME     STATUS   ROLES           AGE    VERSION   ENV
master   Ready    control-plane   121m   v1.26.3
node1    Ready    <none>          68m    v1.26.3
node2    Ready    <none>          64m    v1.26.3   test
## Find nodes that carry a given label
[root@master ~]# kubectl get nodes -l env=test
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    <none>   66m   v1.26.3
## Change the value of a label
# --overwrite=true allows overwriting the existing value
[root@master ~]# kubectl label node node2 env=dev --overwrite=true
node/node2 not labeled
## Check the node's labels again
[root@master ~]# kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   70m   v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux
## Delete a label (remove the key by appending -)
[root@master ~]# kubectl label node node2 env-
node/node2 unlabeled
[root@master ~]# kubectl get nodes node2 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node2   Ready    <none>   72m   v1.26.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux

## There are two main kinds of label selectors:
Equality-based: =, !=
Set-based: key in (value1, value2, ...)

## Define some labels
[root@master ~]# kubectl label node node2 bussiness=game
node/node2 labeled
[root@master ~]# kubectl label node node1 bussiness=ad
node/node1 labeled
## Select nodes with a set-based selector
[root@master ~]# kubectl get node -l "bussiness in (game,ad)"
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   80m   v1.26.3
node2   Ready    <none>   76m   v1.26.3
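Node labels like these are typically consumed through a nodeSelector in a Pod spec. A minimal sketch that reuses the bussiness=game label defined above; the Pod name and image are illustrative:

# pod-on-game-node.yaml - illustrative use of the node label defined above
apiVersion: v1
kind: Pod
metadata:
  name: game-web
spec:
  nodeSelector:
    bussiness: game              # schedule only onto nodes labeled bussiness=game (node2 here)
  containers:
  - name: nginx
    image: nginx:1.20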
--------------------------------------- Resource objects: annotations
Annotations are handy when upgrading: recording one at upgrade time makes it easy to see what changed when you roll back.

 Pod Basics

Pod classification

The Pod YAML file
# apiVersion - API version
# kind - resource type
# metadata - metadata
## name - the object's name; namespace - which namespace it belongs to (default if not specified)
# namespace - namespace
## labels - user-defined labels
## name - custom label name; annotations - list of annotations
# spec - the detailed (desired) specification
## containers - list of containers
### name - container name; image - image name; imagePullPolicy - image pull policy (Always: always pull; Never: never pull; IfNotPresent: prefer the local image and pull only if it is missing); command - container start command (plus args and working directory); volumeMounts - volumes mounted inside the container
#### env - environment variables
#### resources - resource constraints; by default resources are allocated as needed
##### limits - upper bound on resources
##### requests - requested lower bound on resources
#### livenessProbe - health check
Initial probe delay: wait for a while before the first check (initialDelaySeconds)
## restartPolicy - define the restart policy
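A sketch that pulls these fields together into one Pod manifest; the probe settings and resource numbers are illustrative, not taken from the notes above:

# pod-full.yaml - illustrative Pod covering the fields described above
apiVersion: v1
kind: Pod
metadata:
  name: web-full
  namespace: default
  labels:
    app: web-full
spec:
  restartPolicy: Always            # restart policy
  containers:
  - name: nginx
    image: nginx:1.20
    imagePullPolicy: IfNotPresent  # prefer the local image, pull only if missing
    ports:
    - containerPort: 80
    resources:
      requests:                    # lower bound reserved by the scheduler
        cpu: 100m
        memory: 128Mi
      limits:                      # upper bound the container may use
        cpu: 500m
        memory: 256Mi
    livenessProbe:                 # health check
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10      # delay before the first probe
      periodSeconds: 5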
## View the first level of the YAML schema (look at FIELDS:)
[root@master ~]#  kubectl explain pod
## View the config items under a given attribute (look at FIELDS:)
[root@master ~]#  kubectl explain pod.metadata
## List the supported API versions
[root@master ~]# kubectl api-versions
## Look up the resource's API details
[root@master ~]#  kubectl api-resources | grep pod
pods                              po           v1                                     true         Pod
## Create a pod
[root@master tmp]# vim pod1.yml
[root@master tmp]# kubectl apply -f pod1.yml
pod/nginx created
[root@master tmp]# cat pod1.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: ng01
spec:
  containers:
  - name: nginx
    image: nginx:1.20
    ports:
    - name: webport
      containerPort: 80
## View pods in the default namespace
[root@master tmp]# kubectl get pod
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          70s
[root@master tmp]# kubectl get pod -n default
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          109s
## View detailed information about the nginx pod
[root@master tmp]# kubectl describe pod nginx
## Delete the pod
[root@master tmp]# kubectl delete pod nginx
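As mentioned in the concepts above, a single Pod can hold more than one container. A minimal sketch of a two-container Pod; the sidecar image and command are illustrative:

# pod-two-containers.yaml - illustrative multi-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: nginx                    # main container
    image: nginx:1.20
    ports:
    - containerPort: 80
  - name: log-sidecar              # second container in the same Pod
    image: busybox:1.36
    command: ["sh", "-c", "while true; do echo alive; sleep 30; done"]

The two containers share the Pod's network namespace, so the sidecar could reach nginx at localhost:80.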

 

 The Deployment Controller Resource

Use a Deployment resource controller to create Pod replicas
[root@master ~]# vim deploy.yaml
[root@master tmp]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:                 ## note: this spec is at the same level as template.metadata; keep the indentation consistent
      containers:
      - name: nginx
        image: registry.cn-shanghai.aliyuncs.com/image_lqkhn/nginx:1.25.1-alpine
        ports:
        - containerPort: 80
## Check the pods
[root@master ~]# kubectl get pod
No resources found in default namespace.
# Create the pods via the Deployment
[root@master tmp]# kubectl apply -f deploy.yaml 
deployment.apps/nginx-deployment created
[root@master tmp]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5/5     5            5           11s
[root@master tmp]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74bd6d8c48-4wlkz   1/1     Running   0          15s
nginx-deployment-74bd6d8c48-6zdmq   1/1     Running   0          15s
nginx-deployment-74bd6d8c48-mlsq4   1/1     Running   0          15s
nginx-deployment-74bd6d8c48-vcmnk   1/1     Running   0          15s
nginx-deployment-74bd6d8c48-xvn82   1/1     Running   0          15s
[root@master tmp]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-74bd6d8c48   5         5         5       20s
## Deleting a Deployment automatically cascades to all associated ReplicaSets and Pods
## Check the resources
[root@master tmp]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/5     5            0           13h
[root@master tmp]# kubectl get pod
NAME                               READY   STATUS             RESTARTS   AGE
nginx-deployment-584f64656-44p4s   0/1     ImagePullBackOff   0          13h
nginx-deployment-584f64656-kh4bj   0/1     ImagePullBackOff   0          13h
nginx-deployment-584f64656-nrb9m   0/1     ImagePullBackOff   0          13h
nginx-deployment-584f64656-zc4xx   0/1     ErrImagePull       0          13h
nginx-deployment-584f64656-zdqx2   0/1     ImagePullBackOff   0          13h
[root@master tmp]# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
nginx-deployment-584f64656   5         5         0       13h
## Delete the controller resource
[root@master tmp]# kubectl delete deployment nginx-deployment
deployment.apps "nginx-deployment" deleted
## Check
[root@master tmp]# kubectl get deployments,pods,replicasets
No resources found in default namespace.
Horizontal Scaling
A Deployment builds in rolling upgrades, replica management and more; it includes and uses a ReplicaSet (RS) under the hood.
1. Scaling the Pod replicas of a Deployment horizontally
## Method 1: edit the resource directly
[root@master tmp]# kubectl edit deploy nginx-deployment
#/...
#spec:
#  progressDeadlineSeconds: 600
#  replicas: 3   ## change the replica count to 3 directly, then save and quit
#.../
deployment.apps/nginx-deployment edited
## Check the pods: the change takes effect immediately
[root@master tmp]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74bd6d8c48-4wlkz   1/1     Running   0          28m
nginx-deployment-74bd6d8c48-mlsq4   1/1     Running   0          28m
nginx-deployment-74bd6d8c48-vcmnk   1/1     Running   0          28m
## Method 2: modify the deploy.yaml file directly
The change only takes effect after the file is re-applied.
## Method 3: from the command line
[root@master tmp]# kubectl scale --replicas=2 deploy/nginx-deployment
deployment.apps/nginx-deployment scaled
[root@master tmp]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74bd6d8c48-mlsq4   1/1     Running   0          30m
nginx-deployment-74bd6d8c48-vcmnk   1/1     Running   0          30m
 Updating a Deployment
  • The update process
  1. Before the update: the Deployment manages an RS whose name starts with "74bd6d", with five Pod replicas
  2. After the update: a new RS whose name starts with "5b55b" is created, also with five Pod replicas
  3. During the update: the Pods under the old RS are killed off
An update is only triggered when the Pod template's labels or the container image change
## Method 1: modify the YAML file directly
## Update the image version to 1.27.1
[root@master tmp]# vim deploy.yaml 
[root@master tmp]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-shanghai.aliyuncs.com/aliyun_lqkhn/nginx:1.27.1
        ports:
        - containerPort: 80
## Watch the update process, i.e. the rolling-update strategy (the 5b55b and 74bd6 prefixes identify the two ReplicaSets)
[root@master tmp]# kubectl apply -f deploy.yaml 
deployment.apps/nginx-deployment configured
[root@master tmp]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-5b55b95f79-9vjl8   0/1     ContainerCreating   0          9s
nginx-deployment-5b55b95f79-b9wqx   0/1     ContainerCreating   0          9s
nginx-deployment-5b55b95f79-jfkhj   0/1     ContainerCreating   0          9s
nginx-deployment-74bd6d8c48-5kzhx   1/1     Running             0          9s
nginx-deployment-74bd6d8c48-6g4rd   1/1     Running             0          9s
nginx-deployment-74bd6d8c48-mlsq4   1/1     Running             0          35m
nginx-deployment-74bd6d8c48-vcmnk   1/1     Running             0          35m
## Now the running Pods are the new version
[root@master tmp]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b55b95f79-77c9r   1/1     Running   0          43s
nginx-deployment-5b55b95f79-9vjl8   1/1     Running   0          55s
nginx-deployment-5b55b95f79-b9wqx   1/1     Running   0          55s
nginx-deployment-5b55b95f79-jfkhj   1/1     Running   0          55s
nginx-deployment-5b55b95f79-pgx6n   1/1     Running   0          43s
## Describe one of the pods; the output includes the image version
[root@master tmp]# kubectl describe pod nginx-deployment-5b55b95f79-pgx6n
...
Containers:
  nginx:
    Container ID:   containerd://707406779de1a7a08f11c9fa22123d662ff88caf2b9ceea9f4b57a7d619b84e5
    Image:          registry.cn-shanghai.aliyuncs.com/aliyun_lqkhn/nginx:1.27.1
## Describe the Deployment to observe the update process
[root@master tmp]# kubectl describe deployment
...
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  50m                deployment-controller  Scaled up replica set nginx-deployment-74bd6d8c48 to 5
  Normal  ScalingReplicaSet  24m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 3 from 5
  Normal  ScalingReplicaSet  15m                deployment-controller  Scaled up replica set nginx-deployment-74bd6d8c48 to 5 from 2
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 2
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 4 from 5   ## old pods: 5 -> 4
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 3 from 2    ## new pods: 2 -> 3
  Normal  ScalingReplicaSet  14m (x2 over 19m)  deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 2 from 3
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 3 from 4
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 4 from 3
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 5 from 4
  Normal  ScalingReplicaSet  14m (x2 over 14m)  deployment-controller  (combined from similar events): Scaled down replica set nginx-deployment-74bd6d8c48 to 0 from 1
## Describe the Deployment (summary)
[root@master tmp]# kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Wed, 09 Jul 2025 09:03:18 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge    
## This means at least 75% of the desired Pods stay available during the update, and the total may exceed the desired count by up to 25%
e.g. with a desired count of 8 the Pod count stays between 6 and 10 during the rollout; with the desired count of 5 above, maxUnavailable (25%, rounded down) is 1 and maxSurge (25%, rounded up) is 2, so the count stays between 4 and 7
## The rollout pace can be tuned via the RollingUpdateStrategy (see the sketch below)
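A sketch of where those knobs live in the Deployment spec; the values shown are just the defaults written out explicitly:

# deploy-strategy.yaml - illustrative Deployment with an explicit rolling-update strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%          # at most 25% of desired Pods may be unavailable (rounded down)
      maxSurge: 25%                # at most 25% extra Pods above the desired count (rounded up)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20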
 Rolling Back a Deployment
## View the revision history
[root@master tmp]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
## View the details of a specific revision
[root@master tmp]# kubectl rollout history deployment nginx-deployment --revision=1
deployment.apps/nginx-deployment with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=74bd6d8c48
  Containers:
   ngin
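CHANGE-CAUSE shows <none> above because no change cause was recorded. One common way to record it is the kubernetes.io/change-cause annotation on the Deployment; a minimal sketch, with an illustrative message:

# deploy-with-change-cause.yaml - illustrative; only the annotation differs from a normal Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "upgrade nginx image to 1.27.1"   # surfaces in `kubectl rollout history`
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-shanghai.aliyuncs.com/aliyun_lqkhn/nginx:1.27.1

With that annotation set before each change, the rollout history lists a readable cause for every revision, which makes choosing the revision to roll back to much easier.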
