(AI) Server (Hardware) Architecture

Overview via the Atlas 800T

Just found a good product demo from Huawei for its Atlas 800T, here: 计算产品3D展示 ("Computing Product 3D Showcase")

First, turn off all modules in the demo; then we can toggle them back on one by one and see how this server is organized.

Core

This is an AI server built around the 910B, which the demo labels as the

"NPU Module"

Now click "NPU carrier board" and "NPU heat sink", and we get the foundation of the server.

Next, show "Mainboard", and see that it is mounted on top of the 8 NPUs, with 4 CPUs as its central controllers.

Click on "DIMM": these are cheaper off-chip memory extensions, perhaps useful for RAG; they are not fast enough for high-performance inference, but could be useful for some hierarchical-caching hacks.
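
The hierarchical-caching idea can be sketched as a two-tier key-value store: a small "fast" tier standing in for on-chip HBM and a large "slow" tier standing in for the DIMMs. The tier names and capacities below are purely illustrative, not Atlas specifics:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy two-tier cache: hot entries live in a small 'fast' tier
    (think HBM); evicted entries spill to a big 'slow' tier (think DIMM)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # LRU order: oldest first
        self.slow = {}              # unbounded spill tier
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_capacity:
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val      # demote to the DIMM tier

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)
            return self.fast[key]
        if key in self.slow:                  # promote back on access
            value = self.slow.pop(key)
            self.put(key, value)
            return value
        return None

cache = TwoTierCache(fast_capacity=2)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
print(sorted(cache.fast))  # ['b', 'c'] -- two hottest keys
print(sorted(cache.slow))  # ['a']      -- demoted key
```

A real offload scheme would of course move tensors over the memory bus rather than Python objects, but the promotion/demotion logic is the same shape.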

Now click to include "CPU heat sink" and we have the core components of this server.

Storage

Add "Driver" and "Driver Backplane"; zoom in on the drives and check out the red dot for the spec, which reads "• 8x2.5 SATA+2x2.5 NVMe • 4x2.5 SATA+6x2.5 NVMe" — making clear these are the hard-drive bays (the demo's "Driver" label evidently means "drive").

(Gemini)

SATA and NVMe are both interface standards for connecting storage devices (like SSDs) to a computer, but they differ in performance and technology. SATA is an older, slower standard, while NVMe is a newer, faster standard designed for modern SSDs. 

Here's a more detailed breakdown:

SATA (Serial ATA):

  • Interface: Uses the SATA bus for communication with the computer. 
  • Performance: While faster than older HDD technologies, it's limited by the SATA bus's bandwidth. 
  • Typical Speeds: SATA-based SSDs can achieve read/write speeds up to around 550 MB/s. 
  • Form Factors: Can be found in both 2.5-inch and M.2 form factors. 

NVMe (Non-Volatile Memory Express):

  • Interface: Uses the PCIe bus for communication, allowing for much higher bandwidth and faster speeds. 
  • Performance: Significantly faster than SATA, with speeds reaching up to 7000 MB/s or even higher with newer PCIe 4.0 and 5.0 standards. 
  • Key Advantages: NVMe offers lower latency (faster response times) and higher throughput (more data transfer per unit of time) than SATA. 
  • Form Factors: Most commonly found in the M.2 form factor, but can also be found in other form factors (according to Corsair). 

In essence:

  • If you need the fastest possible storage performance for demanding tasks like gaming, video editing, or running resource-intensive applications, NVMe is the better choice. 
  • If you're looking for a more affordable option and don't need top-tier performance, SATA SSDs can still offer a good speed boost compared to traditional hard drives. 
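
To make the gap concrete, a back-of-envelope on the speeds quoted above (550 MB/s for SATA vs ~7000 MB/s for NVMe), using an assumed 60 GB model checkpoint as the workload:

```python
def read_seconds(size_gb, throughput_mb_s):
    """Idealized sequential-read time, ignoring latency and filesystem
    overhead; decimal units (1 GB = 1000 MB)."""
    return size_gb * 1000 / throughput_mb_s

checkpoint_gb = 60  # illustrative checkpoint size, not an Atlas figure
for name, speed in [("SATA SSD", 550), ("NVMe PCIe 4.0", 7000)]:
    print(f"{name}: {read_seconds(checkpoint_gb, speed):.0f} s")
# SATA SSD: 109 s
# NVMe PCIe 4.0: 9 s
```

Roughly 109 s vs 9 s for the same file — a ~12x gap before latency even enters the picture.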

Scaling-Out

If you check out the 910B's spec, you will realize that a single server "box" with 8 of them is not enough to handle cutting-edge models; you would likely need to buy or rent multiple such boxes and have them work together, either to scale out your operations or to accelerate your research.
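
A quick sanity check on why one box isn't enough. Assuming 64 GB of HBM per card (an illustrative figure) and the common ~16 bytes-per-parameter rule of thumb for mixed-precision training:

```python
def training_footprint_gb(params_billion, bytes_per_param=16):
    """Rough rule of thumb: ~16 bytes/param for mixed-precision training
    (fp16 weights + fp16 grads + fp32 Adam moments); inference needs far less."""
    return params_billion * bytes_per_param

hbm_per_card_gb = 64            # assumed per-card HBM, for illustration only
cards_per_box = 8
box_gb = hbm_per_card_gb * cards_per_box   # 512 GB per box

for p in (7, 70, 175):
    need = training_footprint_gb(p)
    boxes = -(-need // box_gb)  # ceiling division
    print(f"{p}B params: ~{need} GB -> needs >= {boxes} box(es), weights/optimizer only")
```

Activations, KV caches, and communication buffers only make this worse, hence the multi-box clusters discussed next.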

Networking/NIC

FlexIO

In principle, the servers could simply talk to each other over the network and call it a day. Indeed, the "FlexIO card" appears to be, according to Gemini's research, a NIC providing exactly this basic ability.

Given the context of a high-performance server like the Huawei Atlas 800T, and based on the search results, a "FlexIO card" is a type of network interface card (NIC) designed for flexible and high-speed network connectivity.

Here's a breakdown of what it likely does:

  1. High-Speed Network Connectivity: The search results specifically mention "FlexIO card with four 25GE optical ports" and support for "25GE or 10GE optical port rate," and even "400 Gbit/s NIC interconnection." This strongly indicates its role in providing very fast network connections. In an AI training server like the Atlas 800T, massive amounts of data need to be moved between servers, storage, and other resources, so high-bandwidth networking is crucial.

    ==> GE: Gigabit Ethernet

    ==> I cannot confirm the 400 Gbit/s number in the context of AI-server interconnect; in fact, the inference configuration offers up to 56G InfiniBand over optical fiber (https://support.huawei.com/enterprise/en/doc/EDOC1100115124/8eef1969/flexio-cards), and as we shall see in the next section, the "parameter plane interface card", which is likely dedicated to AI communication workloads, only offers up to 200 Gbit/s per port with 2 ports in total (https://support.huawei.cn/enterprise/en/doc/EDOC1100349804/91ac4707/parameter-plane-interface-card)
  2. Flexible I/O (Input/Output): The "Flex" in FlexIO likely refers to its ability to offer flexible configuration options for network interfaces. This could mean:

    • Multiple Port Speeds: Supporting different speeds like 10GE, 25GE, or even higher, allowing users to choose the appropriate bandwidth for their needs.
    • Different Connector Types: While the search results mention optical ports, a "FlexIO" concept could imply the ability to swap modules for different physical connectors (e.g., copper or optical fiber).

    • Configurable Bandwidth: Some server documentation refers to configurable link bandwidth (e.g., "The default link bandwidth of FlexIO card 1 is x8. The link bandwidth can be extended to x16 using cables."). This suggests the card can adapt its PCIe lane allocation or internal configuration for different performance needs.

  3. OCP 3.0 Standard Support: Several results indicate "FlexIO card slots support two OCP 3.0 NICs." OCP (Open Compute Project) 3.0 is an industry standard for network interface cards, particularly for servers in data centers. It aims to standardize form factors, interfaces, and power delivery for network adapters, promoting interoperability and flexibility. This means a FlexIO card is likely built to this standard, allowing for easy hot-swapping and upgrades.

  4. Specialized for Server Environments: Unlike general-purpose NICs, FlexIO cards are designed with server-specific features, such as:

    • PXE (Preboot Execution Environment) support: For network booting.

    • Indicators: For connection status and data transmission (e.g., steady green for connected, blinking yellow for data transmission).

    • Integration with server management: Allowing monitoring and configuration through the server's iBMC (Intelligent Baseboard Management Controller).

In essence, a "FlexIO card" in the Atlas 800T is a high-performance, flexible network adapter that provides the essential high-bandwidth connectivity needed for AI training and other data-intensive workloads in a server environment, often adhering to the OCP 3.0 standard.
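
To put the quoted port rates in perspective, here is the ideal wire time for moving a 10 GiB buffer (an arbitrary illustrative size) at the line rates mentioned above, ignoring all protocol overhead:

```python
def transfer_seconds(size_gib, link_gbit_s):
    """Ideal wire time: GiB -> bits, divided by the link rate in Gbit/s."""
    bits = size_gib * 8 * 2**30
    return bits / (link_gbit_s * 1e9)

buffer_gib = 10  # illustrative gradient/parameter buffer
for name, rate in [("10GE", 10), ("25GE", 25), ("200G param-plane port", 200)]:
    print(f"{name}: {transfer_seconds(buffer_gib, rate):.2f} s")
# roughly 8.6 s, 3.4 s, and 0.43 s respectively
```

Real throughput will be lower (encoding and protocol overhead), but the ordering of the tiers is the point.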

Parameter Plane Interface Card

confirm source: https://support.huawei.com/enterprise/en/doc/EDOC1100370559/639da386/parameter-plane-interface-card

https://support.huawei.cn/enterprise/en/doc/EDOC1100349804/91ac4707/parameter-plane-interface-card

In the context of a high-performance server, especially one designed for AI training like the Atlas 800T, the term "Parameter plane interface card" would refer to a specialized network interface card (NIC) that handles the communication related to the parameters of an AI model.

Let's break down what that means:

  • Distributed AI Training: Training very large AI models (like large language models with billions of parameters) often requires many servers (and their GPUs/NPUs) to work together in a "distributed" fashion. This means the model's parameters, or updates to them, need to be shared and synchronized across all these connected computing nodes.
  • The "Parameter Plane": This refers to the network infrastructure and communication patterns specifically dedicated to the exchange of these model parameters and their gradients (information about how to adjust the parameters). This is distinct from the "data plane" (which handles the input data for training) or the "control plane" (which handles management and control signals).

  • Why a Dedicated Interface Card?

    • Extremely High Bandwidth: AI models can have billions of parameters. During training, these parameters or their updates (gradients) need to be exchanged frequently and quickly between the various GPUs/NPUs working on the model. This requires immense network bandwidth, often in the hundreds of Gigabits per second (e.g., 400GE or even higher with InfiniBand or specialized interconnects like NVLink over Ethernet).
    • Low Latency: The speed at which parameters are exchanged directly impacts the efficiency and convergence time of distributed training. High latency can bottleneck the training process. A dedicated card can optimize for low-latency communication specific to parameter synchronization.

    • Offloading CPU: Just like a RAID card offloads storage tasks, a "Parameter plane interface card" would likely have specialized hardware to handle the specific protocols and operations involved in parameter exchange (e.g., collective communication operations like All-Reduce), freeing up the main CPUs and NPUs for the actual AI computations.

    • Specialized Protocols: It might support or accelerate protocols like RoCE (RDMA over Converged Ethernet) or InfiniBand, which are commonly used in high-performance computing and AI clusters for their low-latency, high-throughput capabilities.
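
The All-Reduce operation mentioned above is worth making concrete. In the classic ring algorithm, each node sends and receives roughly 2(N-1)/N times the buffer size regardless of cluster size — which is why parameter-plane bandwidth dominates training throughput. A toy in-memory simulation (no real networking, purely illustrative):

```python
def ring_allreduce(node_chunks):
    """Simulate ring all-reduce (sum): n nodes, each holding n numeric chunks.
    Returns the final per-node chunk lists and the total number of chunk-sends."""
    n = len(node_chunks)
    data = [list(node) for node in node_chunks]
    sends = 0
    # reduce-scatter: after n-1 steps, node i holds the full sum of chunk (i+1) % n
    for step in range(n - 1):
        for i in range(n):
            c = (i - step) % n
            data[(i + 1) % n][c] += data[i][c]
            sends += 1
    # all-gather: circulate the reduced chunks around the ring for n-1 more steps
    for step in range(n - 1):
        for i in range(n):
            c = (i + 1 - step) % n
            data[(i + 1) % n][c] = data[i][c]
            sends += 1
    return data, sends

nodes = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
result, sends = ring_allreduce(nodes)
print(result[0])  # [111, 222, 333] -- every node ends with the element-wise sum
print(sends)      # 12 = 2 * n * (n-1) chunk-sends in total
```

Each node sends 2(n-1) chunks of size (buffer/n), i.e. 2(n-1)/n of the buffer — nearly 2x the buffer per node for large n, on every synchronization step.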

Storage Extension: RAID

A "RAID card," also known as a RAID controller card or disk array controller, is a hardware component (typically a PCIe expansion card in a server like the Atlas 800T) that manages multiple hard drives or solid-state drives (SSDs) and presents them to the operating system as a single, logical storage unit.

Its primary purpose is to implement RAID (Redundant Array of Independent Disks) levels, which are data storage virtualization technologies designed to improve either data redundancy/protection, performance, or both.

Here's a breakdown of what RAID cards do and why they're used:

Key Functions of a RAID Card:

  1. RAID Level Implementation: The core function is to handle the complex calculations and data distribution required for various RAID levels. Instead of the server's main CPU doing this, the RAID card has its own dedicated processor (often called a "RAID-on-Chip" or ROC) and memory (cache) to perform these tasks, offloading the workload from the main system.
  2. Data Redundancy/Protection: This is one of the most crucial benefits. By distributing or mirroring data across multiple drives, a RAID card can protect against data loss in the event of one or more drive failures. Different RAID levels offer different levels of redundancy.
  3. Performance Enhancement: RAID can significantly improve read and write speeds by striping data across multiple drives, allowing for parallel data access.
  4. Storage Capacity Management: It aggregates multiple physical drives into a single logical volume, simplifying storage management for the operating system.
  5. Hot-Swapping and Rebuilding: Many RAID cards support hot-swapping, allowing a failed drive to be replaced while the server is still running. They then manage the automatic rebuilding of the data onto the new drive using the redundancy information.
  6. Cache Management: Most hardware RAID cards include onboard cache memory (DRAM). This cache temporarily stores data being written or read, significantly improving performance. Some also have battery backup units (BBUs) or supercapacitors to protect data in the cache during a power failure.

Why are RAID cards important in servers like the Atlas 800T?

  • Mission-Critical Data: Servers often handle critical applications and vast amounts of data (especially in AI/ML workloads). Data loss is unacceptable. RAID cards provide the necessary redundancy to ensure data availability and minimize downtime in case of a drive failure.
  • High Performance Requirements: AI training, large databases, and virtualization platforms demand very high I/O (input/output) performance. By striping data across multiple drives, RAID cards can dramatically increase the speed at which data can be read from and written to storage.
  • Scalability: Servers need to be able to scale their storage capacity. RAID cards facilitate this by allowing the addition of more drives into an existing array, managed as a single unit.

  • Offloading CPU: Performing RAID calculations in software consumes CPU resources. A hardware RAID card dedicates its own processor to these tasks, freeing up the server's main CPUs to focus on their primary computational roles (e.g., running AI models).

Common RAID Levels (managed by RAID cards):

  • RAID 0 (Striping): Spreads data across multiple drives. Excellent performance (read/write) and full capacity utilization, but no redundancy. If one drive fails, all data is lost. (Used where speed is paramount and data loss is acceptable or handled by other backup methods).
  • RAID 1 (Mirroring): Duplicates data on two or more drives. Provides excellent redundancy (can lose all but one drive and still function) but reduces usable capacity by half (or more, depending on the number of mirrors). Good read performance, write performance similar to a single drive.
  • RAID 5 (Striping with Parity): Spreads data and parity information across three or more drives. Can withstand the failure of one drive. Offers a good balance of performance, capacity utilization, and redundancy.
  • RAID 6 (Striping with Double Parity): Similar to RAID 5 but includes two independent parity blocks, allowing it to withstand the failure of two drives simultaneously. Requires at least four drives. Offers even greater data protection than RAID 5, at the cost of slightly more storage overhead and potentially slower writes.

  • RAID 10 (RAID 1+0 - Striping of Mirrors): Combines RAID 1 and RAID 0. Data is mirrored in pairs, and then those mirrored pairs are striped. Requires at least four drives. Offers both high performance and excellent redundancy (can lose multiple drives, as long as they are not the mirrored pair).

In essence, a RAID card is a specialized piece of hardware that acts as a sophisticated traffic controller and data protector for a server's storage drives, ensuring both performance and reliability.
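
The "parity" in RAID 5/6 above is, at its core, just XOR: the parity block is the XOR of the data blocks in the stripe, so any single lost block can be recomputed from the survivors. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (RAID 5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# one stripe across a 4-drive array: 3 data blocks + 1 parity block
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# simulate losing drive 1 and rebuilding its block from the survivors
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # b'BBBB'
```

Hardware RAID cards do exactly this (plus the second, more complex parity for RAID 6) on dedicated silicon, at drive speed, which is the offloading argument above.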

==> ?? Judging by its functional description, it is rather strange that the RAID card is placed at the opposite end from the drive array. Obviously it needs the "riser module" below for interconnect interfacing, but still, the routing to the local drives looks rather abysmal.

====> My guess: mainly mechanical, since, as we shall see later, the fans take up most of the space on the drive side. And because we don't demand high bandwidth or low latency from the SSDs, the longer routing doesn't really introduce power-consumption issues.

Compute Extension: Riser

In a server like the Atlas 800T, a "riser module" (often called a "riser card" or "PCIe riser") is a crucial component that allows for the installation of additional expansion cards, typically PCIe (Peripheral Component Interconnect Express) cards, in a different orientation than directly on the motherboard.

==> cornerstone ref: https://en.wikipedia.org/wiki/Riser_card

Here's a breakdown of its purpose and why it's vital in servers like the Atlas 800T:

  1. Space Optimization (Especially in Rack-Mount Servers):

    • Servers, particularly those designed for rack mounting (like the 4U form factor of the Atlas 800T), have strict height limitations.
    • Directly mounting many PCIe cards (such as GPUs, network cards, or storage controllers) vertically onto the motherboard would make the server too tall to fit into standard rack units.

    • Riser modules provide slots that are offset, often horizontally or at an angle, allowing these expansion cards to lie parallel to the motherboard or in a way that fits within the server's compact chassis. This is particularly important for accommodating multiple large AI accelerator cards.

  2. Increased Expansion Capabilities:

    • Motherboards have a limited number of direct PCIe slots. Riser modules effectively extend the available slots.
    • In the Atlas 800T, which is designed for intensive AI training, you'd likely want to install multiple high-performance AI processors (NPUs), high-speed network interface cards (NICs), or specialized storage controllers. Riser modules enable this high density of expansion cards.

    • The Huawei documentation for related servers (like the TaiShan 200 series, which shares architectural similarities with the Atlas 800T) explicitly mentions that riser modules provide specific PCIe slots (e.g., "Riser module 1 provides the following PCIe slots: One standard FHFL standard PCIe 4.0 slot. One standard FHHL standard PCIe 4.0 slot."). This confirms their role in offering additional PCIe connectivity.
  3. Flexibility and Modularity:

    • Different server configurations might require different types and numbers of expansion cards. Riser modules often come in various configurations (e.g., with different numbers or types of PCIe slots, or supporting full-height, half-length, or full-length cards).
    • This modularity allows server manufacturers to offer customizable server builds to meet diverse customer needs without redesigning the entire motherboard.

  4. Airflow Management:

    • By orienting expansion cards differently, riser modules can also contribute to better airflow within the server chassis. This is critical for cooling high-power components and maintaining optimal operating temperatures, especially when many heat-generating components are installed.

In summary, the "riser module" in the Atlas 800T server is essentially an adapter board that allows the server to accommodate more PCIe expansion cards than would be possible by mounting them directly on the motherboard, while also optimizing space and airflow within the constrained server chassis.

"4U" Form Factor

The "4U" in a server's form factor refers to its height when mounted in a standard server rack.

Here's a breakdown:

  • "U" stands for Rack Unit. This is a standardized unit of measurement used for the height of equipment designed to be installed in a server rack.
  • One Rack Unit (1U) is equal to 1.75 inches (or 44.45 millimeters) in height.
  • Therefore, a 4U server is 4 x 1.75 inches = 7 inches (or 177.8 mm) tall.
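
The arithmetic above is trivial to script; two helpers following the 1U = 1.75 in = 44.45 mm convention:

```python
def rack_units_to_mm(u):
    """Convert rack units to millimetres (1U = 1.75 in = 44.45 mm)."""
    return u * 44.45

def servers_per_rack(rack_u, server_u):
    """How many servers of a given height fit in a rack, plus leftover U."""
    return divmod(rack_u, server_u)

print(rack_units_to_mm(4))      # 177.8 (mm) -- a 4U chassis
print(servers_per_rack(42, 4))  # (10, 2): ten 4U boxes, 2U spare
```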

Why is the "U" form factor important?

  1. Standardization: It ensures that servers and other networking equipment from different manufacturers can fit perfectly into universal server racks, which typically have a standard width of 19 inches (though 23-inch racks also exist, 19-inch is most common for IT).
  2. Space Optimization: Data centers and server rooms are designed to maximize the use of vertical space. Knowing the "U" rating allows administrators to plan how many servers and other devices can fit into a given rack. For example, a common full-size server rack is 42U tall, meaning it can hold 42 1U servers, or ten 4U servers with some space left over, etc.
  3. Component Capacity:

    • Larger "U" (like 4U): A larger form factor like 4U means the server chassis is taller, providing more physical space inside. This extra space is crucial for:

      • More Processors/Memory: Accommodating multiple CPUs and a greater number of RAM modules.
      • More Expansion Slots (PCIe): Allowing for more and larger expansion cards like GPUs, AI accelerators (which the Atlas 800T uses heavily), high-speed network cards, and RAID controllers. This is a primary reason why high-performance AI servers are often 4U or even larger.
      • More Storage Drives: Room for a larger number of hard drives or SSDs. ==> this one is a bit fishy, likely from non-AI sources; more SSDs don't contribute that much to LLM inference, though they may provide cheap local storage for training.
      • Better Cooling: Larger servers generally have more volume for airflow and can accommodate more robust cooling solutions (larger fans, more heatsink surface area, or even liquid cooling components), which is essential for high-power components.
      • More Powerful Power Supplies: More space can allow for multiple, higher-wattage power supply units (PSUs) for redundancy and increased power delivery.

    • Smaller "U" (like 1U or 2U): These are more compact and typically used for applications where space is at a premium and extreme computational density or a large number of expansion cards isn't the primary requirement.

In summary, a 4U server is a relatively large server designed to be rack-mounted, offering significant internal space for high-performance components, extensive expansion capabilities, and robust cooling, making it well-suited for demanding workloads like AI training (as seen with the Atlas 800T).

Power

Supply

Add "PSU"

A PSU in a server stands for Power Supply Unit. It's an absolutely critical component, as it's responsible for converting the alternating current (AC) electricity from the wall outlet into direct current (DC) power that all the internal components of the server (motherboard, CPU, GPUs, RAM, drives, etc.) can use.

Here's a breakdown of its importance and common features in servers:

What a Server PSU Does:

  1. AC to DC Conversion: The primary function is to convert the high-voltage AC from the power grid into the various low-voltage DC currents required by different server components (e.g., +12V, +5V, +3.3V).
  2. Voltage Regulation: It ensures that the DC output voltages are stable and within the tight tolerances required for sensitive electronic components, preventing damage and ensuring reliable operation.
  3. Power Distribution: It provides dedicated cables and connectors to deliver power to every component in the server.
  4. Protection Features: High-quality PSUs include various protection mechanisms against issues like over-voltage, under-voltage, over-current, short circuits, and over-temperature.
  5. Cooling: PSUs typically have their own fans to cool their internal components, and often contribute to the overall airflow within the server chassis.

Notable Aspects of PSUs in Servers (and likely the Atlas 800T):

  1. High Wattage: Servers, especially high-performance ones like the Atlas 800T (which likely houses multiple powerful AI accelerators like the Ascend 910), consume a tremendous amount of power. Therefore, their PSUs need to have very high wattage ratings (e.g., 2000W, 3000W, or even higher). ==> 2600W in our case

  2. Redundancy (N+1 or N+N): This is a key feature for server PSUs and is almost certainly present in the Atlas 800T. ==> 2 + 2

    • N+1 Redundancy: Means there are "N" power supplies required to run the server, plus one additional spare. If one PSU fails, the others can seamlessly take over the full load without interrupting server operation.
    • Hot-Swappable: Redundant PSUs are typically hot-swappable, meaning they can be removed and replaced while the server is still running, without downtime. This is crucial for mission-critical applications where continuous operation is paramount.
    • The Atlas 800T is confirmed to have hot-swappable components, and typically this includes PSUs in redundant configurations.
  3. High Efficiency (80 PLUS Certification): Server PSUs are designed for very high energy efficiency, often carrying 80 PLUS Gold, Platinum, or Titanium certifications. ==> the naming sounds ridiculous, but it's real; here is a source: 80 Plus Ruby PSU Efficiency Rating – CSB IAS Academy
    • This certification indicates how much of the AC power drawn from the wall is actually converted into usable DC power, with the remainder lost as heat.
    • For example, an 80 PLUS Titanium PSU can achieve up to 94% efficiency at 50% load.
    • High efficiency is vital in data centers to reduce electricity costs, minimize heat generation (which in turn reduces cooling costs), and contribute to a lower environmental footprint.
  4. Hot-Swappable Design: As mentioned for redundancy, the ability to hot-swap a failed PSU is fundamental in a server environment like the Atlas 800T, ensuring maximum uptime.
  5. PMBus (Power Management Bus) Support: Many modern server PSUs, including those in high-end systems, support PMBus. This allows the server's management controller (like Huawei's iBMC) to monitor various PSU parameters (voltage, current, temperature, fan speed, efficiency) and even control certain aspects of the PSU remotely. This facilitates proactive maintenance and troubleshooting.
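
Plugging in the numbers from this section (2600 W units in a 2+2 configuration; the efficiency percentages are illustrative ballpark figures for the certification tiers, not measured values for this server):

```python
def wall_draw_watts(dc_load_w, efficiency):
    """AC power drawn from the wall to deliver a given DC load."""
    return dc_load_w / efficiency

dc_load = 5200  # e.g. two 2600 W PSUs carrying the load in a 2+2 setup
for name, eff in [("80 PLUS Gold (~92%)", 0.92), ("Titanium (~94%)", 0.94)]:
    ac = wall_draw_watts(dc_load, eff)
    print(f"{name}: {ac:.0f} W from the wall, {ac - dc_load:.0f} W lost as heat")
# 80 PLUS Gold (~92%): 5652 W from the wall, 452 W lost as heat
# Titanium (~94%): 5532 W from the wall, 332 W lost as heat
```

A couple of efficiency points translate to ~120 W less waste heat per box, which the cooling system (next section) no longer has to remove.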

Heat Dissipation

We already toggled on the heat sinks for the cores; now check out the macro components: "fan", "mainboard air duct" and "copper bar".

The purposes of "fan" and "air duct" are obvious, while their design principles are really quite beyond my caring, so let's focus on "copper bar":

In high-performance servers like the Atlas 800T, a copper bar module serves two primary, critical functions: power distribution and heat dissipation.

Here's why they are essential:

  1. Efficient Power Distribution:

    • High Current Capacity: Modern AI training servers like the Atlas 800T contain powerful components, especially AI processors (like the Ascend 910), that consume significant amounts of power. Copper busbars (which is what a "copper bar module" typically refers to in this context) are excellent electrical conductors, allowing for the efficient transmission of high currents with minimal power loss compared to traditional cabling.
    • Reduced Resistance: Copper has very low electrical resistivity, meaning less energy is wasted as heat during power transmission. This is crucial for energy efficiency in data centers, which have massive power demands.
    • Compact Design: Busbars can be designed to be flat and compact, allowing for denser power delivery within the limited space of a server chassis, especially compared to bundles of thick wires. This helps with the overall server design and airflow.
    • Reliability: Busbar systems offer a more reliable and organized power delivery system, reducing wiring errors and simplifying connections.
  2. Effective Heat Dissipation:

    • High Thermal Conductivity: Copper is one of the best thermal conductors. The high power consumption of components like CPUs and NPUs generates a lot of heat. Copper bars can act as a pathway to efficiently transfer this heat away from the hot components.
    • Integration with Cooling Systems: While the Atlas 800T can use both air and liquid cooling, copper is a fundamental material in heat dissipation regardless of the cooling method. It forms the base of heatsinks, and can also be part of more advanced liquid cooling solutions. The copper bar module might be directly involved in drawing heat from specific components to a larger cooling system.

    • Maintaining Optimal Temperatures: By efficiently conducting and dissipating heat, copper bars help maintain the optimal operating temperatures for sensitive electronic components. This prevents overheating, which can lead to performance throttling, instability, and reduced lifespan of the hardware.

In essence, the copper bar module in the Atlas 800T server is a testament to the need for highly efficient and robust power delivery and thermal management solutions in high-density, high-performance computing environments, particularly those designed for AI workloads.

==> although I listed it under Heat Dissipation, judging by its geometry and positioning, it's most likely only for better power distribution.

Mechanical

It goes without saying: the "support beam", "chassis" and "chassis cover". For a (mechanically) highly stable and controlled environment like a data center, this part is perhaps the least crucial. Just note that each server "box" is referred to as a "chassis" and gets stacked into a "rack"; and I have it on good authority that the chassis is pretty darn heavy, so take care when handling one :)
