Installing K8s on a Mac with Vagrant


Table of Contents

    • 📋 1. Environment Preparation
      • 1.1 System Requirements
      • 1.2 Software List
    • 🚀 2. Installation Steps
      • 2.1 Install Parallels Desktop
      • 2.2 Configure a Network Proxy (Optional)
      • 2.3 Install Homebrew
      • 2.4 Prepare the Project Directory
      • 2.5 Install Vagrant and Plugins
      • 2.6 Set Up the Python Environment
        • 2.6.1 Install Python Tooling
        • 2.6.2 Configure the Shell Environment
        • 2.6.3 Verify the Python Environment
        • 2.6.4 Install pyenv
        • 2.6.5 Upgrade the Python Tools
        • 2.6.6 Create a Python Virtual Environment
      • 2.7 Configure Kubespray
        • 2.7.1 Core Configuration Files
          • 2.7.1.1 Configure the Cluster config.rb
          • 2.7.1.2 Configure containerd.yml (Optional)
    • 🔧 3. Deploy the Cluster
      • 3.1 Start the VMs and Deploy K8s
      • 3.2 Retry If the Deployment Fails
    • 🎯 4. Configure kubectl Access
      • 4.1 Install the kubectl Client
      • 4.2 Configure Cluster Access
    • 📦 5. Install Helm (Optional)
    • 🧹 6. Clean Up
      • 6.1 Destroy the VMs
      • 6.2 Exit the Python Virtual Environment
    • 🛠️ 7. Troubleshooting
      • 7.1 Common Issues
      • 7.2 Useful Debugging Commands
    • 📝 8. Summary

This guide walks you through building a complete Kubernetes test cluster on macOS using Kubespray, Vagrant, and Parallels Desktop.

📋 1. Environment Preparation

1.1 System Requirements

  • macOS (Apple Silicon or Intel)
  • At least 16 GB of RAM
  • 50 GB or more of free disk space
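
A quick way to confirm the host meets these requirements is to convert the reported memory size to gigabytes. A minimal sketch; on macOS the physical RAM in bytes comes from the `hw.memsize` sysctl key:

```shell
# bytes_to_gb: convert a byte count to whole gigabytes (integer division)
bytes_to_gb() {
  echo $(( $1 / 1024 / 1024 / 1024 ))
}

# On macOS, feed it the physical memory size:  bytes_to_gb "$(sysctl -n hw.memsize)"
bytes_to_gb 17179869184   # a 16 GB machine

# Check free disk space under your home directory
df -h ~
```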

1.2 Software List

  • Parallels Desktop (commercial edition)
  • Homebrew
  • Vagrant + the vagrant-parallels plugin
  • Python 3.12+ with a virtual environment
  • Git

🚀 2. Installation Steps

2.1 Install Parallels Desktop

💡 Tip: a commercial license is required; second-hand marketplaces such as Xianyu are one place to buy one

After installation, make sure Parallels Desktop runs normally.

2.2 Configure a Network Proxy (Optional)

If your network requires a proxy, add a proxy configuration to your shell profile:

vim ~/.zshrc

Add the following:

# Network proxy configuration
proxy_url="http://127.0.0.1:7890"  # change to your proxy address
export no_proxy="10.0.0.0/8,192.168.16.0/20,localhost,127.0.0.0/8,registry.ocp.local,.svc,.svc.cluster-27,.coding.net,.tencentyun.com,.myqcloud.com"

# Proxy control functions
enable_proxy() {
  export http_proxy="${proxy_url}"
  export https_proxy="${proxy_url}"
  git config --global http.proxy "${proxy_url}"
  git config --global https.proxy "${proxy_url}"
  echo "✅ Proxy enabled: ${proxy_url}"
}

disable_proxy() {
  unset http_proxy
  unset https_proxy
  git config --global --unset http.proxy
  git config --global --unset https.proxy
  echo "❌ Proxy disabled"
}

# Disable the proxy by default
disable_proxy

Apply the configuration:

source ~/.zshrc

# Enable the proxy if needed
enable_proxy
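
The toggle functions above can be sanity-checked in any shell. A stripped-down sketch without the git settings; the proxy address is a placeholder:

```shell
proxy_url="http://127.0.0.1:7890"   # placeholder proxy address

# Export the proxy variables for subsequent commands
enable_proxy() {
  export http_proxy="${proxy_url}"
  export https_proxy="${proxy_url}"
}

# Remove them again
disable_proxy() {
  unset http_proxy https_proxy
}

enable_proxy
echo "${http_proxy}"          # http://127.0.0.1:7890
disable_proxy
echo "${http_proxy:-unset}"   # unset
```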

2.3 Install Homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Update Homebrew
brew update

2.4 Prepare the Project Directory

# Create the project directory
mkdir -p ~/Projects/k8s-testing
cd ~/Projects/k8s-testing

# Clone the Kubespray project
git clone https://github.com/upmio/kubespray-upm.git
cd kubespray-upm

2.5 Install Vagrant and Plugins

# Install Vagrant
brew tap hashicorp/tap
brew install hashicorp/tap/hashicorp-vagrant

# Verify the installation
vagrant --version

# Install the Parallels plugin
vagrant plugin install vagrant-parallels

# List installed plugins
vagrant plugin list

2.6 Set Up the Python Environment

2.6.1 Install Python Tooling
brew install python
2.6.2 Configure the Shell Environment
vim ~/.zshrc

Add the following:

# Python environment configuration
alias python=python3
alias pip=pip3

Apply the configuration:

source ~/.zshrc
2.6.3 Verify the Python Environment
python --version
pip --version

2.6.4 Install pyenv

Install the build dependencies:

brew install openssl readline sqlite3 xz zlib

Install pyenv:

curl https://pyenv.run | bash
vim ~/.zshrc

Add the following:

export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

Apply the configuration:

source ~/.zshrc

2.6.5 Upgrade the Python Tools

python -m pip install --upgrade pip
pyenv update

2.6.6 Create a Python Virtual Environment

# Install Python 3.12.11
pyenv install 3.12.11

# Create a dedicated virtual environment
pyenv virtualenv 3.12.11 kubespray-3.12.11-env

# Set the default Python environment for the project
pyenv local kubespray-3.12.11-env

# Install the project dependencies
pip install -r requirements.txt

2.7 Configure Kubespray

2.7.1 Core Configuration Files
# Copy the Vagrantfile
cp vagrant_setup_scripts/Vagrantfile ./Vagrantfile

# Create the vagrant configuration directory
mkdir -p vagrant
2.7.1.1 Configure the Cluster config.rb
vim vagrant/config.rb
# Vagrant configuration file for Kubespray
# Kubespray Vagrant Configuration Sample
# This file allows you to customize various settings for your Vagrant environment
# Copy this file to vagrant/config.rb and modify the values according to your needs

# =============================================================================
# PROXY CONFIGURATION
# =============================================================================
# Configure proxy settings for the cluster if you're behind a corporate firewall
# Leave empty or comment out if no proxy is needed

# HTTP proxy URL - used for HTTP traffic
# Example: "http://proxy.company.com:8080"
# $http_proxy = ""
$http_proxy = "http://10.211.55.2:7890"

# HTTPS proxy URL - used for HTTPS traffic
# Example: "https://proxy.company.com:8080"
# $https_proxy = ""
$https_proxy = "http://10.211.55.2:7890"

# No proxy list - comma-separated list of hosts/domains that should bypass proxy
# Common entries: localhost, 127.0.0.1, local domains, cluster subnets
# Example: "localhost,127.0.0.1,.local,.company.com,10.0.0.0/8,192.168.0.0/16"
# $no_proxy = ""
$no_proxy = "localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,::1,.demo.com"

# Additional no proxy entries - will be added to the default no_proxy list
# Use this to add extra domains without overriding the defaults
# Example: ".internal,.corp,.k8s.local"
# $additional_no_proxy = ""
$additional_no_proxy = "localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,::1,.demo.com"

# =============================================================================
# ANSIBLE CONFIGURATION
# =============================================================================
# Ansible verbosity level for debugging (uncomment to enable)
# Options: "v" (verbose), "vv" (more verbose), "vvv" (debug), "vvvv" (connection debug)
# $ansible_verbosity = "vvv"

# =============================================================================
# VIRTUAL MACHINE CONFIGURATION
# =============================================================================
# Prefix for VM instance names (will be followed by node number)
$instance_name_prefix = "k8s"

# Default CPU and memory settings for worker nodes
$vm_cpus = 8                    # Number of CPU cores per worker node
$vm_memory = 16384              # Memory in MB per worker node (16GB)

# Master/Control plane node resources
$kube_master_vm_cpus = 4        # CPU cores for Kubernetes master nodes
$kube_master_vm_memory = 4096   # Memory in MB for Kubernetes master nodes (4GB)

# UPM Control plane node resources (if using UPM)
$upm_control_plane_vm_cpus = 12      # CPU cores for UPM control plane
$upm_control_plane_vm_memory = 24576 # Memory in MB for UPM control plane (24GB)

# =============================================================================
# STORAGE CONFIGURATION
# =============================================================================
# Enable additional disks for worker nodes (useful for storage testing)
$kube_node_instances_with_disks = true

# Size of additional disks in GB (200GB in this example)
$kube_node_instances_with_disks_size = "200G"

# Number of additional disks per node
$kube_node_instances_with_disks_number = 1

# Directory to store additional disk files
$kube_node_instances_with_disk_dir = ENV['HOME'] + "/kubespray_vm_disk/upm_disks"

# Suffix for disk file names
$kube_node_instances_with_disk_suffix = "upm"

# VolumeGroup configuration for additional disks
# Name of the VolumeGroup to create for additional disks
$kube_node_instances_volume_group = "local_vg_dev"

# Enable automatic VolumeGroup creation for additional disks
$kube_node_instances_create_vg = true

# =============================================================================
# CLUSTER TOPOLOGY
# =============================================================================
# Total number of nodes in the cluster (masters + workers)
$num_instances = 5

# Number of etcd instances (should be odd number: 1, 3, 5, etc.)
$etcd_instances = 1

# Number of Kubernetes master/control plane instances
$kube_master_instances = 1

# Number of UPM control instances (if using UPM)
$upm_ctl_instances = 1

# =============================================================================
# SYSTEM CONFIGURATION
# =============================================================================
# Vagrant Provider Configuration
# Specify the Vagrant provider to use for virtual machines
# If not set, Vagrant will auto-detect available providers in this order:
# 1. Command line --provider argument (highest priority)
# 2. VAGRANT_DEFAULT_PROVIDER environment variable
# 3. Auto-detection of installed providers (parallels > virtualbox > libvirt)
#
# Supported options: "virtualbox", "libvirt", "parallels"
#
# Provider recommendations:
# - virtualbox: Best for development and testing (free, cross-platform)
# - libvirt: Good for Linux production environments (KVM-based)
# - parallels: Good for macOS users with Parallels Desktop
#
# Leave commented for auto-detection, or uncomment and set to force a specific provider
# $provider = "virtualbox"

# Timezone for all VMs
$time_zone = "Asia/Shanghai"

# NTP server configuration
$ntp_enabled = "True"
$ntp_manage_config = "True"

# Operating system for VMs
# Supported options: "ubuntu2004", "ubuntu2204", "centos7", "centos8", "rockylinux8", "rockylinux9", etc.
$os = "rockylinux9"

# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
# Network type: "nat" or "bridge"
#
# nat: Auto-detect provider network and assign IPs (recommended)
#   - Automatically detects provider default network (usually 192.168.x.0/24)
#   - Uses NAT networking for VM internet access
#   - VMs can communicate with each other and host
#   - Simpler setup, no bridge configuration required
#   - Recommended for development and testing
#
# bridge: Use bridge network with manual IP configuration
#   - Requires manual bridge interface setup on host
#   - VMs get IPs from same subnet as host network
#   - Direct network access, VMs appear as separate devices on network
#   - More complex setup, requires bridge configuration
#   - Recommended for production-like environments
$vm_network = "nat"

# Starting IP for the 4th octet (VMs will get IPs starting from this number)
# Used in both nat (with auto-detected subnet) and bridge modes
$subnet_split4 = 100

# The following network settings are only used when $vm_network = "bridge"
# For nat, subnet/gateway/netmask are auto-detected from provider

# Network subnet (first 3 octets) - bridge only
# $subnet = "10.37.129"

# Network configuration - bridge only
# $netmask = "255.255.255.0"      # Subnet mask
# $gateway = "10.37.129.1"        # Default gateway
# $dns_server = "8.8.8.8"         # DNS server
$dns_server = "10.211.55.2"  # (Optional) If you use a private DNS, run a DNS server on macOS; otherwise pods may fail after a VM restart.

# Bridge network interface (required when using "bridge")
# Example: On Linux, libvirt bridge interface name: br0
# $bridge_nic = "br0"
# Example: On Linux, virtualbox bridge interface name: virbr0
# $bridge_nic = "virbr0"

# =============================================================================
# KUBERNETES CONFIGURATION
# =============================================================================
# Container Network Interface (CNI) plugin
# Options: "calico", "flannel", "weave", "cilium", "kube-ovn", etc.
$network_plugin = "calico"

# Cert-Manager Configuration
$cert_manager_enabled = "True"             # Enable cert-manager

# Local Path Provisioner Configuration
$local_path_provisioner_enabled = "False"    # Enable local path provisioner
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"  # Local path root

# Ansible inventory directory
$inventory = "inventory/sample"

# Shared folders between host and VMs (empty by default)
$shared_folders = {}

# Kubernetes version to install
$kube_version = "1.33.3"
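
It is worth adding up what the sample topology above asks of the host. Assuming the non-master, non-UPM nodes are workers, 1 master (4 GB) + 1 UPM control-plane node (24 GB) + 3 workers (16 GB each) far exceeds the 16 GB minimum from section 1.1, so scale these values down on a laptop:

```shell
# Memory demanded by the sample config.rb values above
masters=1; upm=1; total=5
workers=$(( total - masters - upm ))                             # 3
total_mb=$(( masters * 4096 + upm * 24576 + workers * 16384 ))
echo "workers=${workers} total_mem=$(( total_mb / 1024 ))GB"     # workers=3 total_mem=76GB
```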
2.7.1.2 Configure containerd.yml (Optional)
vim inventory/sample/group_vars/all/containerd.yml
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options

# containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
# containerd_oom_score: 0

# containerd_default_runtime: "runc"
# containerd_snapshotter: "native"

# containerd_runc_runtime:
#   name: runc
#   type: "io.containerd.runc.v2"
#   engine: ""
#   root: ""

# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
#   - name: kata
#     type: "io.containerd.kata.v2"
#     engine: ""
#     root: ""

# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

# Containerd debug socket location: unix or tcp format
# containerd_debug_address: ""

# Containerd log level
# containerd_debug_level: "info"

# Containerd logs format, supported values: text, json
# containerd_debug_format: ""

# Containerd debug socket UID
# containerd_debug_uid: 0

# Containerd debug socket GID
# containerd_debug_gid: 0

# containerd_metrics_address: ""
# containerd_metrics_grpc_histogram: false

# Registries defined within containerd.
containerd_registries_mirrors:
  - prefix: quay.io
    mirrors:
      - host: https://quay.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: http://harbor.demo.com
        capabilities: ["pull", "resolve"]
        skip_verify: true
  - prefix: docker.io
    mirrors:
      - host: http://harbor.demo.com
        capabilities: ["pull", "resolve"]
        skip_verify: true
      - host: https://dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: ghcr.io
    mirrors:
      - host: https://ghcr.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://ghcr.dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: registry.k8s.io
    mirrors:
      - host: https://k8s.mirror.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://k8s.dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false

# containerd_max_container_log_line_size: -1

🔧 3. Deploy the Cluster

3.1 Start the VMs and Deploy K8s

⚠️ Note: this takes roughly 10-15 minutes, depending on network conditions and hardware performance

vagrant up --no-parallel

3.2 Retry If the Deployment Fails

vagrant provision --provision-with ansible

🎯 4. Configure kubectl Access

4.1 Install the kubectl Client

# Download kubectl (Apple Silicon build)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"

# Make it executable and move it into your PATH
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl

# Verify the installation
kubectl version --client
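
The download URL above hardcodes the arm64 build; on an Intel Mac the path segment is amd64. A small sketch that picks the right segment from `uname -m`:

```shell
# Map the machine architecture to the kubectl download path segment
arch=$(uname -m)
case "$arch" in
  x86_64)        arch=amd64 ;;
  arm64|aarch64) arch=arm64 ;;
esac
echo "$arch"   # amd64 or arm64
# Then substitute it into the download URL:
#   .../bin/darwin/${arch}/kubectl
```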

4.2 Configure Cluster Access

# Copy the kubeconfig file
mkdir -p ~/.kube
cp inventory/sample/artifacts/admin.conf ~/.kube/config

# Verify the cluster connection
kubectl get nodes
kubectl get pods --all-namespaces

📦 5. Install Helm (Optional)

# Download the install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

# Run the installer
chmod 700 get_helm.sh
./get_helm.sh

# Verify the installation
helm version

🧹 6. Clean Up

6.1 Destroy the VMs

vagrant destroy -f

6.2 Exit the Python Virtual Environment

pyenv deactivate
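
If you enabled the additional worker disks in config.rb, their backing files live under `$kube_node_instances_with_disk_dir` and are not removed by `vagrant destroy`; with the sample value from section 2.7.1.1 they can be cleaned up like this:

```shell
# Remove the extra-disk files created by Vagrant (path from the sample config.rb)
rm -rf "$HOME/kubespray_vm_disk/upm_disks"
```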

🛠️ 7. Troubleshooting

7.1 Common Issues

1. Vagrant fails to start

  • Check that Parallels Desktop is running properly
  • Confirm that system resources (memory, disk space) are sufficient
  • Check the network connection

2. Python dependency installation fails

  • Confirm that the correct virtual environment is active
  • Try upgrading pip: pip install --upgrade pip
  • Check your network proxy settings

3. kubectl cannot connect to the cluster

  • Confirm that the kubeconfig file path is correct
  • Check the VM network state: vagrant status
  • Verify SSH connectivity: vagrant ssh

4. Network problems

  • In mainland China, configuring a proxy is recommended
  • Alternatively, try domestic mirror sources
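
For the pip case specifically, one way to use a domestic mirror is a `pip.conf`. The Tsinghua TUNA index shown here is just one example; any PyPI mirror works:

```ini
# ~/.pip/pip.conf — route pip through a PyPI mirror (example: Tsinghua TUNA)
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
```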

7.2 Useful Debugging Commands

# Check Vagrant status
vagrant status

# Inspect VM logs
vagrant ssh -c "sudo journalctl -u kubelet"

# Reload the Vagrant configuration
vagrant reload

# Check cluster status
kubectl cluster-info
kubectl get componentstatuses   # note: deprecated since Kubernetes v1.19

📝 8. Summary

With the steps above you should now have a working Kubespray-based Kubernetes test cluster. This environment is well suited for:

  • Learning core Kubernetes concepts
  • Testing application deployments
  • Validating cluster configuration
  • Developing cloud-native applications

💡 Tip: back up important configuration files and project code regularly to avoid data loss from accidental operations.


Related resources:

  • Kubespray official documentation
  • Vagrant official documentation
  • Kubernetes official documentation
  • Helm official documentation

