Installing a Kubernetes Cluster on Ubuntu 24.04 with sealos

1. System Preparation

(1) Install the OpenSSH server
sudo apt install openssh-server
sudo systemctl start ssh
sudo systemctl enable ssh

(2) Allow SSH through the firewall
sudo ufw allow ssh

(3) Enable direct root login
vim /etc/ssh/sshd_config
Change "#PermitRootLogin prohibit-password" to "PermitRootLogin yes", then restart the SSH service:
systemctl daemon-reload
systemctl restart sshd
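sealos drives every node over SSH as root, so it is worth confirming from master01 that root login now works on each machine (the node IPs below are the ones used later in this walkthrough):

ssh root@192.168.1.102 hostname
ssh root@192.168.1.103 hostname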

2. Install the sealos Tool

(1) Install sealos on master01
echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos

root@master01:~# echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos
deb [trusted=yes] https://apt.fury.io/labring/ /
Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble InRelease
Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-updates InRelease
Hit:4 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-backports InRelease
Ign:5 https://apt.fury.io/labring  InRelease
Ign:6 https://apt.fury.io/labring  Release
Ign:7 https://apt.fury.io/labring  Packages
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Get:7 https://apt.fury.io/labring  Packages
Fetched 7,953 B in 7s (1,202 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
294 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  sealos
0 upgraded, 1 newly installed, 0 to remove and 294 not upgraded.
Need to get 31.5 MB of archives.
After this operation, 94.2 MB of additional disk space will be used.
Get:1 https://apt.fury.io/labring  sealos 5.0.1 [31.5 MB]
Fetched 31.5 MB in 20s (1,546 kB/s)
Selecting previously unselected package sealos.
(Reading database ... 152993 files and directories currently installed.)
Preparing to unpack .../sealos_5.0.1_amd64.deb ...
Unpacking sealos (5.0.1) ...
Setting up sealos (5.0.1) ...
root@master01:~#
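As a quick sanity check, the sealos CLI should now report its version:

sealos version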

3. Install Kubernetes with sealos

(1) Install Kubernetes v1.29.9 with sealos, using Cilium as the network plugin:

sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'

root@master01:~# sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'
2025-08-10T12:17:40 info Start to create a new cluster: master [192.168.1.98], worker [192.168.1.102 192.168.1.103], registry 192.168.1.98
2025-08-10T12:17:40 info Executing pipeline Check in CreateProcessor.
2025-08-10T12:17:40 info checker:hostname [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:40 info checker:timeSync [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info checker:containerd [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info Executing pipeline PreProcess in CreateProcessor.
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9...
Getting image source signatures
Copying blob a90669518f1a done
Copying blob 45c9d75a9656 done
Copying blob 2fbba8062b0b done
Copying blob fdc3a198d6ba done
Copying config bca192f355 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4...
Getting image source signatures
Copying blob 7f5c52c74e5b done
Copying config 3376f68220 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4...
Getting image source signatures
Copying blob 7ca2ee4eb38c done
Copying config 71aa52ad0a done
Writing manifest to image destination
Storing signatures
2025-08-10T12:19:52 info Executing pipeline RunConfig in CreateProcessor.
2025-08-10T12:19:52 info Executing pipeline MountRootfs in CreateProcessor.
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T12:20:15 info Executing pipeline MirrorRegistry in CreateProcessor.
2025-08-10T12:20:15 info trying default http mode to sync images to hosts [192.168.1.98:22]
2025-08-10T12:20:18 info Executing pipeline Bootstrap in CreateProcessor.
 INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.103:22         INFO [2025-08-10 12:20:24] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.102:22         INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
 INFO [2025-08-10 12:20:19] >> check root,port,cri success
192.168.1.103:22         INFO [2025-08-10 12:20:25] >> check root,port,cri success
192.168.1.102:22         INFO [2025-08-10 12:20:19] >> check root,port,cri success
2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
192.168.1.103:22        2025-08-10T12:20:25 info domain sealos.hub:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
 INFO [2025-08-10 12:20:20] >> Health check registry!
 INFO [2025-08-10 12:20:20] >> registry is running
 INFO [2025-08-10 12:20:20] >> init registry success
2025-08-10T12:20:20 info domain apiserver.cluster.local:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:20 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.103:22        2025-08-10T12:20:26 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.102:22        2025-08-10T12:20:21 info domain lvscare.node.ip:192.168.1.102 append success
192.168.1.103:22        2025-08-10T12:20:27 info domain lvscare.node.ip:192.168.1.103 append success
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
 INFO [2025-08-10 12:20:23] >> Health check containerd!
 INFO [2025-08-10 12:20:23] >> containerd is running
 INFO [2025-08-10 12:20:23] >> init containerd success
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> Health check containerd!
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> containerd is running
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> init containerd success
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
 INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
 INFO [2025-08-10 12:20:24] >> image-cri-shim is running
 INFO [2025-08-10 12:20:24] >> init shim success
127.0.0.1 localhost
::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> Health check containerd!
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> containerd is running
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> init containerd success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> image-cri-shim is running
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> init shim success
192.168.1.102:22        127.0.0.1 localhost
192.168.1.102:22        ::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> Health check image-cri-shim!
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> image-cri-shim is running
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> init shim success
192.168.1.103:22        127.0.0.1 localhost
192.168.1.103:22        ::1     ip6-localhost ip6-loopback
Firewall stopped and disabled on system startup
* Applying /usr/lib/sysctl.d/10-apparmor.conf ...
* Applying /etc/sysctl.d/10-bufferbloat.conf ...
* Applying /etc/sysctl.d/10-console-messages.conf ...
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
* Applying /etc/sysctl.d/10-map-count.conf ...
* Applying /etc/sysctl.d/10-network-security.conf ...
* Applying /etc/sysctl.d/10-ptrace.conf ...
* Applying /etc/sysctl.d/10-zeropage.conf ...
* Applying /usr/lib/sysctl.d/30-tracker.conf ...
* Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
kernel.apparmor_restrict_unprivileged_userns = 1
net.core.default_qdisc = fq_codel
kernel.printk = 4 4 1 7
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
kernel.kptr_restrict = 1
kernel.sysrq = 176
vm.max_map_count = 1048576
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
kernel.yama.ptrace_scope = 1
vm.mmap_min_addr = 65536
fs.inotify.max_user_watches = 65536
kernel.unprivileged_userns_clone = 1
kernel.pid_max = 4194304
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
 INFO [2025-08-10 12:20:25] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.102:22        Firewall stopped and disabled on system startup
Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.102:22        * Applying /etc/sysctl.conf ...
192.168.1.102:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.102:22        net.core.default_qdisc = fq_codel
192.168.1.102:22        kernel.printk = 4 4 1 7
192.168.1.102:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.102:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.102:22        kernel.kptr_restrict = 1
192.168.1.102:22        kernel.sysrq = 176
192.168.1.102:22        vm.max_map_count = 1048576
192.168.1.102:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.102:22        kernel.yama.ptrace_scope = 1
192.168.1.102:22        vm.mmap_min_addr = 65536
192.168.1.102:22        fs.inotify.max_user_watches = 65536
192.168.1.102:22        kernel.unprivileged_userns_clone = 1
192.168.1.102:22        kernel.pid_max = 4194304
192.168.1.102:22        fs.protected_fifos = 1
192.168.1.102:22        fs.protected_hardlinks = 1
192.168.1.102:22        fs.protected_regular = 2
192.168.1.102:22        fs.protected_symlinks = 1
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        Firewall stopped and disabled on system startup
192.168.1.102:22         INFO [2025-08-10 12:20:26] >> pull pause image sealos.hub:5000/pause:3.9
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.103:22        * Applying /etc/sysctl.conf ...
192.168.1.103:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.103:22        net.core.default_qdisc = fq_codel
192.168.1.103:22        kernel.printk = 4 4 1 7
192.168.1.103:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.103:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.103:22        kernel.kptr_restrict = 1
192.168.1.103:22        kernel.sysrq = 176
192.168.1.103:22        vm.max_map_count = 1048576
192.168.1.103:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.103:22        kernel.yama.ptrace_scope = 1
192.168.1.103:22        vm.mmap_min_addr = 65536
192.168.1.103:22        fs.inotify.max_user_watches = 65536
192.168.1.103:22        kernel.unprivileged_userns_clone = 1
192.168.1.103:22        kernel.pid_max = 4194304
192.168.1.103:22        fs.protected_fifos = 1
192.168.1.103:22        fs.protected_hardlinks = 1
192.168.1.103:22        fs.protected_regular = 2
192.168.1.103:22        fs.protected_symlinks = 1
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22         INFO [2025-08-10 12:20:32] >> pull pause image sealos.hub:5000/pause:3.9
 INFO [2025-08-10 12:20:26] >> init kubelet success
 INFO [2025-08-10 12:20:26] >> init rootfs success
192.168.1.102:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init kubelet success
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init rootfs success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init kubelet success
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init rootfs success
2025-08-10T12:20:28 info Executing pipeline Init in CreateProcessor.
2025-08-10T12:20:28 info Copying kubeadm config to master0
2025-08-10T12:20:28 info start to generate cert and kubeConfig...
2025-08-10T12:20:28 info start to generate and copy certs to masters...
2025-08-10T12:20:28 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost master01:master01] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98]}
2025-08-10T12:20:28 info Etcd altnames : {map[localhost:localhost master01:master01] map[127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98 ::1:::1]}, commonName : master01
2025-08-10T12:20:30 info start to copy etc pki files to masters
2025-08-10T12:20:30 info start to create kubeconfig...
2025-08-10T12:20:30 info start to copy kubeconfig files to masters
2025-08-10T12:20:30 info start to copy static files to masters
2025-08-10T12:20:30 info start to init master0...
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.9
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.9
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.9
[config/images] Pulled registry.k8s.io/kube-proxy:v1.29.9
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.15-0
W0810 12:20:39.357353    8594 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.29.9
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0810 12:20:39.455337    8594 checks.go:835] detected that the sandbox image "sealos.hub:5000/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0810 12:20:40.239475    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0810 12:20:40.383648    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001602 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736 \
        --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736
2025-08-10T12:20:46 info Executing pipeline Join in CreateProcessor.
2025-08-10T12:20:46 info [192.168.1.102:22 192.168.1.103:22] will be added as worker
2025-08-10T12:20:46 info start to get kubernetes token...
2025-08-10T12:20:46 info fetch certSANs from kubeadm configmap
2025-08-10T12:20:46 info start to join 192.168.1.103:22 as worker
2025-08-10T12:20:46 info start to copy kubeadm join config to node: 192.168.1.103:22
2025-08-10T12:20:46 info start to join 192.168.1.102:22 as worker
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.103:22 (1/1, 643 it/s)
2025-08-10T12:20:47 info start to copy kubeadm join config to node: 192.168.1.102:22
192.168.1.103:22        2025-08-10T12:20:53 info Trying to add route
192.168.1.103:22        2025-08-10T12:20:53 info success to set route.(host:10.103.97.2, gateway:192.168.1.103)
2025-08-10T12:20:47 info start join node: 192.168.1.103:22
192.168.1.103:22        [preflight] Running pre-flight checks
192.168.1.103:22                [WARNING FileExisting-socat]: socat not found in system path
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.102:22 (1/1, 728 it/s)
192.168.1.102:22        2025-08-10T12:20:48 info Trying to add route
192.168.1.102:22        2025-08-10T12:20:48 info success to set route.(host:10.103.97.2, gateway:192.168.1.102)
2025-08-10T12:20:48 info start join node: 192.168.1.102:22
192.168.1.102:22        [preflight] Running pre-flight checks
192.168.1.102:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.102:22        [preflight] Reading configuration from the cluster...
192.168.1.102:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.102:22        W0810 12:21:00.215357    9534 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.102:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.102:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.102:22        [kubelet-start] Starting the kubelet
192.168.1.102:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.102:22
192.168.1.102:22        This node has joined the cluster:
192.168.1.102:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.102:22        * The Kubelet was informed of the new secure connection details.
192.168.1.102:22
192.168.1.102:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.102:22
2025-08-10T12:21:02 info succeeded in joining 192.168.1.102:22 as worker
192.168.1.103:22        [preflight] Reading configuration from the cluster...
192.168.1.103:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.103:22        W0810 12:21:11.756483    6695 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.103:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.103:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.103:22        [kubelet-start] Starting the kubelet
192.168.1.103:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.103:22
192.168.1.103:22        This node has joined the cluster:
192.168.1.103:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.103:22        * The Kubelet was informed of the new secure connection details.
192.168.1.103:22
192.168.1.103:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.103:22
2025-08-10T12:21:07 info succeeded in joining 192.168.1.103:22 as worker
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.103:22 master: [192.168.1.98:6443]
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.102:22 master: [192.168.1.98:6443]
192.168.1.103:22        2025-08-10T12:21:14 info generator lvscare static pod is success
192.168.1.102:22        2025-08-10T12:21:08 info generator lvscare static pod is success
2025-08-10T12:21:08 info Executing pipeline RunGuest in CreateProcessor.
ℹ️  Using Cilium version 1.13.4
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
2025-08-10T12:21:09 info succeeded in creating a new cluster, enjoy it!
2025-08-10T12:21:09 info
[sealos ASCII art banner]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~#
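sealos should leave a working admin kubeconfig on master01 (the same credentials kubeadm writes to /etc/kubernetes/admin.conf, as the output above notes). If kubectl cannot reach the cluster, point it at that file explicitly:

export KUBECONFIG=/etc/kubernetes/admin.conf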

4. Check the Kubernetes Cluster Status

(1) List the images in use
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#

(2) Check node status
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   3m42s   v1.29.9
node1      Ready    <none>          3m24s   v1.29.9
node2      Ready    <none>          3m19s   v1.29.9
root@master01:~#

(3) Check pod status
root@master01:~# kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-2gdqf                       1/1     Running   0          3m29s
kube-system   cilium-operator-6946ccbcc5-9w275   1/1     Running   0          3m29s
kube-system   cilium-pmhgr                       1/1     Running   0          3m29s
kube-system   cilium-wnp9r                       1/1     Running   0          3m29s
kube-system   coredns-76f75df574-nf7bd           1/1     Running   0          3m39s
kube-system   coredns-76f75df574-s89vx           1/1     Running   0          3m39s
kube-system   etcd-master01                      1/1     Running   0          3m52s
kube-system   kube-apiserver-master01            1/1     Running   0          3m54s
kube-system   kube-controller-manager-master01   1/1     Running   0          3m53s
kube-system   kube-proxy-6mlkb                   1/1     Running   0          3m39s
kube-system   kube-proxy-7jx96                   1/1     Running   0          3m32s
kube-system   kube-proxy-9k92l                   1/1     Running   0          3m37s
kube-system   kube-scheduler-master01            1/1     Running   0          3m52s
kube-system   kube-sealos-lvscare-node1          1/1     Running   0          3m17s
kube-system   kube-sealos-lvscare-node2          1/1     Running   0          3m12s
root@master01:~#

(4) Check certificate validity
root@master01:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0810 12:26:42.829869   12731 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver                  Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver-etcd-client      Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
apiserver-kubelet-client   Jul 17, 2125 04:20 UTC   99y             ca                      no
controller-manager.conf    Jul 17, 2125 04:20 UTC   99y             ca                      no
etcd-healthcheck-client    Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-peer                  Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-server                Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
front-proxy-client         Jul 17, 2125 04:20 UTC   99y             front-proxy-ca          no
scheduler.conf             Jul 17, 2125 04:20 UTC   99y             ca                      no
super-admin.conf           Aug 10, 2026 04:20 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 17, 2125 04:20 UTC   99y             no
etcd-ca                 Jul 17, 2125 04:20 UTC   99y             no
front-proxy-ca          Jul 17, 2125 04:20 UTC   99y             no
root@master01:~#
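The long lifetimes above come from sealos issuing 99-year certificates at install time. If a certificate ever does need renewing (super-admin.conf, for instance, is only valid for one year here), kubeadm can reissue the kubeadm-managed certificates; restart the control-plane static pods afterwards so they pick up the new files:

kubeadm certs renew all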

5. Add a Worker Node Online (192.168.1.104)

root@master01:~# sealos add --nodes 192.168.1.104
2025-08-10T14:55:12 info start to scale this cluster
2025-08-10T14:55:12 info Executing pipeline JoinCheck in ScaleProcessor.
2025-08-10T14:55:12 info checker:hostname [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:12 info checker:timeSync [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:13 info checker:containerd [192.168.1.104:22]
2025-08-10T14:55:13 info Executing pipeline PreProcess in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline PreProcessImage in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline RunConfig in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline MountRootfs in ScaleProcessor.
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T14:55:46 info Executing pipeline Bootstrap in ScaleProcessor
192.168.1.104:22         INFO [2025-08-10 14:55:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.104:22         INFO [2025-08-10 14:55:53] >> check root,port,cri success
192.168.1.104:22        2025-08-10T14:55:53 info domain sealos.hub:192.168.1.98 append success
192.168.1.104:22        2025-08-10T14:55:53 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.104:22        2025-08-10T14:55:54 info domain lvscare.node.ip:192.168.1.104 append success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> Health check containerd!
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> containerd is running
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> init containerd success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> Health check image-cri-shim!
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> image-cri-shim is running
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> init shim success
192.168.1.104:22        127.0.0.1 localhost
192.168.1.104:22        ::1     ip6-localhost ip6-loopback
192.168.1.104:22        Firewall stopped and disabled on system startup
192.168.1.104:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.104:22        * Applying /etc/sysctl.conf ...
192.168.1.104:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.104:22        net.core.default_qdisc = fq_codel
192.168.1.104:22        kernel.printk = 4 4 1 7
192.168.1.104:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.104:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.104:22        kernel.kptr_restrict = 1
192.168.1.104:22        kernel.sysrq = 176
192.168.1.104:22        vm.max_map_count = 1048576
192.168.1.104:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.104:22        kernel.yama.ptrace_scope = 1
192.168.1.104:22        vm.mmap_min_addr = 65536
192.168.1.104:22        fs.inotify.max_user_watches = 65536
192.168.1.104:22        kernel.unprivileged_userns_clone = 1
192.168.1.104:22        kernel.pid_max = 4194304
192.168.1.104:22        fs.protected_fifos = 1
192.168.1.104:22        fs.protected_hardlinks = 1
192.168.1.104:22        fs.protected_regular = 2
192.168.1.104:22        fs.protected_symlinks = 1
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22         INFO [2025-08-10 14:56:03] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.104:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init kubelet success
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init rootfs success
2025-08-10T14:56:00 info Executing pipeline Join in ScaleProcessor.
2025-08-10T14:56:00 info [192.168.1.104:22] will be added as worker
2025-08-10T14:56:00 info start to get kubernetes token...
2025-08-10T14:56:01 info fetch certSANs from kubeadm configmap
2025-08-10T14:56:01 info start to join 192.168.1.104:22 as worker
2025-08-10T14:56:01 info start to copy kubeadm join config to node: 192.168.1.104:22
2025-08-10T14:56:02 info run ipvs once module: 192.168.1.104:22 (1/1, 186 it/s)
192.168.1.104:22        2025-08-10T14:56:07 info Trying to add route
192.168.1.104:22        2025-08-10T14:56:07 info success to set route.(host:10.103.97.2, gateway:192.168.1.104)
2025-08-10T14:56:02 info start join node: 192.168.1.104:22
192.168.1.104:22        [preflight] Running pre-flight checks
192.168.1.104:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.104:22        [preflight] Reading configuration from the cluster...
192.168.1.104:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.104:22        W0810 14:56:08.112112    6085 common.go:200] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
192.168.1.104:22        W0810 14:56:08.112331    6085 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.104:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.104:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.104:22        [kubelet-start] Starting the kubelet
192.168.1.104:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.104:22
192.168.1.104:22        This node has joined the cluster:
192.168.1.104:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.104:22        * The Kubelet was informed of the new secure connection details.
192.168.1.104:22
192.168.1.104:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.104:22
2025-08-10T14:56:05 info succeeded in joining 192.168.1.104:22 as worker
2025-08-10T14:56:05 info start to sync lvscare static pod to node: 192.168.1.104:22 master: [192.168.1.98:6443]
192.168.1.104:22        2025-08-10T14:56:11 info generator lvscare static pod is success
2025-08-10T14:56:06 info Executing pipeline RunGuest in ScaleProcessor.
2025-08-10T14:56:07 info succeeded in scaling this cluster
2025-08-10T14:56:07 info
[sealos ASCII art banner]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   156m   v1.29.9
node03     Ready    <none>          86s    v1.29.9
node1      Ready    <none>          156m   v1.29.9
node2      Ready    <none>          156m   v1.29.9
root@master01:~#
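Removing a node is the symmetrical operation. A minimal sketch, assuming node03 (192.168.1.104) should be taken back out of the cluster:

sealos delete --nodes 192.168.1.104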

6. Install Sealos Cloud for a Graphical Kubernetes PaaS

(1) Download the sealos-cloud image and upload it to master01
Note: pulling this image directly from the Aliyun registry fails with a permission error.
docker pull docker.io/labring/sealos-cloud:latest

(2) Save the image to a tarball
docker save -o sealos-cloud.tar docker.io/labring/sealos-cloud:latest

(3) Upload sealos-cloud.tar to the /home/test directory and import it with sealos load
root@master01:~# sealos load -i /home/test/sealos-cloud.tar
Getting image source signatures
Copying blob b63eb4a8e470 done
Copying config 8f15d6df44 done
Writing manifest to image destination
Storing signatures
Loaded image: docker.io/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                         latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~# sealos tag docker.io/labring/sealos-cloud:latest registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                               TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes     v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                           latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud   latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium         v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm           v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#
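The retag above makes the local image name match the registry path used by the other labring images. From here sealos-cloud can be launched like the earlier images; a hedged sketch, where the cloudDomain value is an assumption and must be replaced with a domain that resolves to the cluster (a nip.io-style address works for labs), per the sealos self-hosting documentation:

sealos run registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest \
  --env cloudDomain="192.168.1.98.nip.io"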

七、Install KubeBlocks

(1)Install the volume snapshot CRDs
First check whether the snapshot CRDs already exist, then create them from the external-snapshotter v8.2.0 release:
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
root@master01:~# kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
NAME                                            CREATED AT
volumesnapshotclasses.snapshot.storage.k8s.io   2025-08-10T07:31:06Z
root@master01:~# kubectl get crd volumesnapshots.snapshot.storage.k8s.io
NAME                                      CREATED AT
volumesnapshots.snapshot.storage.k8s.io   2025-08-10T07:31:07Z
root@master01:~# kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
NAME                                             CREATED AT
volumesnapshotcontents.snapshot.storage.k8s.io   2025-08-10T07:31:08Z
root@master01:~#
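With the CRDs registered, a VolumeSnapshotClass can be created for the cluster's CSI driver at any point. A minimal sketch; the driver name below is a placeholder and must be replaced with the driver your StorageClass actually uses:

kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass        # hypothetical name
driver: example.csi.k8s.io   # assumption: substitute your real CSI driver
deletionPolicy: Delete
EOF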
(2)Deploy the snapshot controller
root@master01:~# helm repo add piraeus-charts https://piraeus.io/helm-charts/
"piraeus-charts" has been added to your repositories
root@master01:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "piraeus-charts" chart repository
Update Complete. ⎈Happy Helming!⎈
root@master01:~# helm install snapshot-controller piraeus-charts/snapshot-controller -n kb-system --create-namespace
NAME: snapshot-controller
LAST DEPLOYED: Sun Aug 10 15:35:25 2025
NAMESPACE: kb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Volume Snapshot Controller installed.

If you already have volume snapshots deployed using CRDs before v1, you should
verify that the existing snapshots are upgradable to v1 CRDs. The snapshot controller (>= v3.0.0)
will label any invalid snapshots it can find. Use the following commands to find any invalid snapshots:

kubectl get volumesnapshots --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces
kubectl get volumesnapshotcontents --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces

If the above commands return any items, you need to remove them before upgrading to the newer v1 CRDs.
root@master01:~#
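A quick check that the controller came up (not part of the original transcript):

# The chart deploys into kb-system; the snapshot-controller pod should reach Running
kubectl get pods -n kb-system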


(3)Install the KubeBlocks CRDs
Create the KubeBlocks v1.0.0 CRDs (the URL below goes through the ghfast.top GitHub proxy; the plain github.com URL works as well if it is reachable):

root@master01:~# kubectl create -f https://ghfast.top/https://github.com/apecloud/kubeblocks/releases/download/v1.0.0/kubeblocks_crds.yaml
customresourcedefinition.apiextensions.k8s.io/clusterdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/clusters.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/components.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentversions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configconstraints.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configurations.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/servicedescriptors.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/shardingdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/sidecardefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/actionsets.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicies.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicytemplates.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuprepos.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backups.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backupschedules.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/restores.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/storageproviders.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/nodecountscalers.experimental.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/addons.extensions.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsdefinitions.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsrequests.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentparameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/paramconfigrenderers.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parametersdefinitions.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/reconciliationtraces.trace.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/instancesets.workloads.kubeblocks.io created
root@master01:~#
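The CRDs alone install no controller; the KubeBlocks operator itself still has to be deployed. A minimal sketch using Helm, assuming the official apecloud chart repository and a chart version matching the v1.0.0 CRDs created above (check the KubeBlocks docs for current values):

helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update
# Install into the same kb-system namespace used for the snapshot controller
helm install kubeblocks kubeblocks/kubeblocks -n kb-system --version 1.0.0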
