FastVLM-0.5B Model Analysis

Model Overview

FastVLM (Fast Vision-Language Model) is an efficient vision-language model presented by Apple at CVPR 2025 and optimized for mobile devices (iPhone, iPad, Mac). Its core innovation is the newly designed FastViTHD hybrid vision encoder, which addresses the classic pain points of traditional VLMs on high-resolution images, namely high encoding latency and redundant visual tokens, delivering gains in both speed and accuracy.

FastViTHD adopts a hybrid CNN + Transformer architecture, combining local feature extraction with global modeling, and cuts computational complexity through three key designs:

  1. Dynamic resolution allocation: compute is distributed according to feature-map entropy, giving key regions of the image (text, objects) high resolution and background regions low resolution, reducing computation by 47% on ImageNet-1K;
  2. Hierarchical token compression: the 1536 visual tokens of a conventional VLM are compressed to 576 (a 62.5% reduction), greatly lightening the language model's load (see the toy sketch after this list);
  3. Lightweight convolutional embedding: a lightweight convolutional stem (only 0.3% additional parameters) replaces the standard ViT patch embedding and extracts local features faster.
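
FastViTHD's actual implementation ships with the model's remote code; the sketch below is only a toy illustration of what hierarchical token compression means in practice, pooling a token grid before it reaches the language model. The shapes and channel widths here are made up for the example:

import torch
import torch.nn.functional as F

def pool_tokens(tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
    # Toy hierarchical compression: 2x2 average-pool a (B, H*W, C) token grid,
    # cutting the token count 4x before the tokens are handed to the LLM.
    b, _, c = tokens.shape
    grid = tokens.transpose(1, 2).reshape(b, c, h, w)  # (B, C, H, W)
    grid = F.avg_pool2d(grid, kernel_size=2)           # (B, C, H/2, W/2)
    return grid.flatten(2).transpose(1, 2)             # (B, H*W/4, C)

x = torch.randn(1, 48 * 32, 256)     # 1536 toy tokens, toy channel width
print(pool_tokens(x, 48, 32).shape)  # torch.Size([1, 384, 256])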

Model Performance

FastVLM stands out for its time-to-first-token (TTFT) and small footprint:

  • Speed
    • The smallest variant (FastVLM-0.5B) achieves a TTFT 85× faster than LLaVA-OneVision-0.5B, and the larger Qwen2-7B-based variant is 7.9× faster than Cambrian-1-8B
    • On 1152×1152 high-resolution images, overall accuracy is on par with competing models while the vision encoder is 3.4× smaller
  • Hardware fit
    • Matrix operations are tuned for Apple's A18 chip and M2/M4 processors, with CoreML integration; an iPad Pro (M2) sustains 60 FPS continuous dialogue
    • Dynamic INT8 quantization cuts memory use by 40% while keeping 98% accuracy; the 0.5B model's app occupies only 1.8 GB of memory (a toy quantization sketch follows this list)
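
The deployed models go through Core ML, but dynamic INT8 quantization as a technique is easy to try in plain PyTorch. A minimal sketch on a stand-in MLP block (sized like this model's 896 -> 4864 -> 896 feed-forward), not Apple's actual deployment path:

import torch
import torch.nn as nn

# Stand-in for one transformer MLP block, sized like FastVLM-0.5B's.
toy = nn.Sequential(nn.Linear(896, 4864), nn.GELU(), nn.Linear(4864, 896))

# Dynamic quantization: int8 weights, activation scales computed on the fly.
quantized = torch.ao.quantization.quantize_dynamic(toy, {nn.Linear}, dtype=torch.qint8)
print(quantized)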

Loading the Model

import torch
from PIL import Image
from modelscope import AutoTokenizer, AutoModelForCausalLM

MID = "apple/FastVLM-0.5B"
IMAGE_TOKEN_INDEX = -200  # what the model code looks for

# Load
tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MID,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
    trust_remote_code=True,
)
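
A quick sanity check after loading; the count below covers the Qwen2 decoder plus the vision tower, so expect somewhat more than 0.5B (the exact figure depends on the checkpoint):

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters, dtype = {next(model.parameters()).dtype}")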

Model Configuration

tok
Qwen2TokenizerFast(
    name_or_path='/home/six/.cache/modelscope/hub/models/apple/FastVLM-0___5B',
    vocab_size=151643, model_max_length=8192, is_fast=True,
    padding_side='right', truncation_side='right',
    special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>',
                    'additional_special_tokens': ['<|im_start|>', '<|im_end|>']},
    clean_up_tokenization_spaces=False,
    added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    }
)
model.config
LlavaConfig {
  "architectures": ["LlavaQwen2ForCausalLM"],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "llava_qwen.LlavaConfig",
    "AutoModelForCausalLM": "llava_qwen.LlavaQwen2ForCausalLM"
  },
  "bos_token_id": 151643,
  "dtype": "float16",
  "eos_token_id": 151645,
  "freeze_mm_mlp_adapter": false,
  "hidden_act": "silu",
  "hidden_size": 896,
  "image_aspect_ratio": "pad",
  "image_grid_pinpoints": null,
  "initializer_range": 0.02,
  "intermediate_size": 4864,
  "layer_types": ["full_attention", "full_attention", "full_attention", "full_attention",
                  "full_attention", "full_attention", "full_attention", "full_attention",
                  "full_attention", "full_attention", "full_attention", "full_attention",
                  "full_attention", "full_attention", "full_attention", "full_attention",
                  "full_attention", "full_attention", "full_attention", "full_attention",
                  "full_attention", "full_attention", "full_attention", "full_attention"],
  "max_position_embeddings": 32768,
  "max_window_layers": 24,
  "mm_hidden_size": 3072,
  "mm_patch_merge_type": "flat",
  "mm_projector_lr": null,
  "mm_projector_type": "mlp2x_gelu",
  "mm_use_im_patch_token": false,
  "mm_use_im_start_end": false,
  "mm_vision_select_feature": "patch",
  "mm_vision_select_layer": -2,
  "mm_vision_tower": "mobileclip_l_1024",
  "model_type": "llava_qwen2",
  "num_attention_heads": 14,
  "num_hidden_layers": 24,
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "tokenizer_model_max_length": 8192,
  "tokenizer_padding_side": "right",
  "transformers_version": "4.56.0",
  "tune_mm_mlp_adapter": false,
  "unfreeze_mm_vision_tower": true,
  "use_cache": true,
  "use_mm_proj": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
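
A few fields in this dump describe how the pieces are wired together, and are worth reading back explicitly (values as printed above):

print(model.config.mm_vision_tower)    # 'mobileclip_l_1024': the FastViTHD (MobileCLIP) vision backbone
print(model.config.mm_hidden_size)     # 3072: vision feature width entering the projector
print(model.config.mm_projector_type)  # 'mlp2x_gelu': a two-layer MLP adapter
print(model.config.hidden_size)        # 896: the Qwen2 decoder width the projector maps into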

Model Architecture

model
LlavaQwen2ForCausalLM(
  (model): LlavaQwen2Model(
    (embed_tokens): Embedding(151936, 896)
    (layers): ModuleList(
      (0-23): 24 x Qwen2DecoderLayer(
        (self_attn): Qwen2Attention(
          (q_proj): Linear(in_features=896, out_features=896, bias=True)
          (k_proj): Linear(in_features=896, out_features=128, bias=True)
          (v_proj): Linear(in_features=896, out_features=128, bias=True)
          (o_proj): Linear(in_features=896, out_features=896, bias=False)
        )
        (mlp): Qwen2MLP(
          (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
          (up_proj): Linear(in_features=896, out_features=4864, bias=False)
          (down_proj): Linear(in_features=4864, out_features=896, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
        (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
      )
    )
    (norm): Qwen2RMSNorm((896,), eps=1e-06)
    (rotary_emb): Qwen2RotaryEmbedding()
    (vision_tower): MobileCLIPVisionTower(
      (vision_tower): MCi(
        (model): FastViT(
          (patch_embed): Sequential(
            (0): MobileOneBlock(
              (se): Identity()
              (activation): GELU(approximate='none')
              (reparam_conv): Conv2d(3, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
            )
            (1): MobileOneBlock(
              (se): Identity()
              (activation): GELU(approximate='none')
              (reparam_conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96)
            )
            (2): MobileOneBlock(
              (se): Identity()
              (activation): GELU(approximate='none')
              (reparam_conv): Conv2d(96, 96, kernel_size=(1, 1), stride=(1, 1))
            )
          )
          (network): ModuleList(
            (0): Sequential(
              (0-1): RepMixerBlock(
                (token_mixer): RepMixer(
                  (reparam_conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=96)
                )
                (convffn): ConvFFN(
                  (conv): Sequential(
                    (conv): Conv2d(96, 96, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=96, bias=False)
                    (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  )
                  (fc1): Conv2d(96, 384, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (1): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv(
                  (activation): GELU(approximate='none')
                  (se): Identity()
                  (lkb_reparam): Conv2d(96, 192, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=96)
                )
                (1): MobileOneBlock(
                  (se): Identity()
                  (activation): GELU(approximate='none')
                  (reparam_conv): Conv2d(192, 192, kernel_size=(1, 1), stride=(1, 1))
                )
              )
            )
            (2): Sequential(
              (0-11): RepMixerBlock(
                (token_mixer): RepMixer(
                  (reparam_conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192)
                )
                (convffn): ConvFFN(
                  (conv): Sequential(
                    (conv): Conv2d(192, 192, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=192, bias=False)
                    (bn): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  )
                  (fc1): Conv2d(192, 768, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (3): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv(
                  (activation): GELU(approximate='none')
                  (se): Identity()
                  (lkb_reparam): Conv2d(192, 384, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=192)
                )
                (1): MobileOneBlock(
                  (se): Identity()
                  (activation): GELU(approximate='none')
                  (reparam_conv): Conv2d(384, 384, kernel_size=(1, 1), stride=(1, 1))
                )
              )
            )
            (4): Sequential(
              (0-23): RepMixerBlock(
                (token_mixer): RepMixer(
                  (reparam_conv): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384)
                )
                (convffn): ConvFFN(
                  (conv): Sequential(
                    (conv): Conv2d(384, 384, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=384, bias=False)
                    (bn): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  )
                  (fc1): Conv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(1536, 384, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (5): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv(
                  (activation): GELU(approximate='none')
                  (se): Identity()
                  (lkb_reparam): Conv2d(384, 768, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=384)
                )
                (1): MobileOneBlock(
                  (se): Identity()
                  (activation): GELU(approximate='none')
                  (reparam_conv): Conv2d(768, 768, kernel_size=(1, 1), stride=(1, 1))
                )
              )
            )
            (6): RepCPE(
              (reparam_conv): Conv2d(768, 768, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=768)
            )
            (7): Sequential(
              (0-3): AttentionBlock(
                (norm): LayerNormChannel()
                (token_mixer): MHSA(
                  (qkv): Linear(in_features=768, out_features=2304, bias=False)
                  (attn_drop): Dropout(p=0.0, inplace=False)
                  (proj): Linear(in_features=768, out_features=768, bias=True)
                  (proj_drop): Dropout(p=0.0, inplace=False)
                )
                (convffn): ConvFFN(
                  (conv): Sequential(
                    (conv): Conv2d(768, 768, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=768, bias=False)
                    (bn): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  )
                  (fc1): Conv2d(768, 3072, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(3072, 768, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (8): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv(
                  (activation): GELU(approximate='none')
                  (se): Identity()
                  (lkb_reparam): Conv2d(768, 1536, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=768)
                )
                (1): MobileOneBlock(
                  (se): Identity()
                  (activation): GELU(approximate='none')
                  (reparam_conv): Conv2d(1536, 1536, kernel_size=(1, 1), stride=(1, 1))
                )
              )
            )
            (9): RepCPE(
              (reparam_conv): Conv2d(1536, 1536, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1536)
            )
            (10): Sequential(
              (0-1): AttentionBlock(
                (norm): LayerNormChannel()
                (token_mixer): MHSA(
                  (qkv): Linear(in_features=1536, out_features=4608, bias=False)
                  (attn_drop): Dropout(p=0.0, inplace=False)
                  (proj): Linear(in_features=1536, out_features=1536, bias=True)
                  (proj_drop): Dropout(p=0.0, inplace=False)
                )
                (convffn): ConvFFN(
                  (conv): Sequential(
                    (conv): Conv2d(1536, 1536, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1536, bias=False)
                    (bn): BatchNorm2d(1536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                  )
                  (fc1): Conv2d(1536, 6144, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(6144, 1536, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
          )
          (conv_exp): MobileOneBlock(
            (se): SEBlock(
              (reduce): Conv2d(3072, 192, kernel_size=(1, 1), stride=(1, 1))
              (expand): Conv2d(192, 3072, kernel_size=(1, 1), stride=(1, 1))
            )
            (activation): GELU(approximate='none')
            (reparam_conv): Conv2d(1536, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1536)
          )
          (head): GlobalPool2D()
        )
      )
    )
    (mm_projector): Sequential(
      (0): Linear(in_features=3072, out_features=896, bias=True)
      (1): GELU(approximate='none')
      (2): Linear(in_features=896, out_features=896, bias=True)
    )
  )
  (lm_head): Linear(in_features=896, out_features=151936, bias=False)
)
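
Two details of this dump are easy to miss. First, the k_proj/v_proj width of 128 comes from grouped-query attention: 14 query heads but only 2 key/value heads, each 64-dimensional. Second, mm_projector is the mlp2x_gelu adapter named in the config, bridging 3072-dim vision features into the 896-dim decoder. The arithmetic, from the config values shown earlier:

cfg = model.config
head_dim = cfg.hidden_size // cfg.num_attention_heads  # 896 // 14 = 64
print(cfg.num_attention_heads * head_dim)              # 896: q_proj out_features
print(cfg.num_key_value_heads * head_dim)              # 128: k_proj / v_proj out_features (GQA)
print(cfg.mm_hidden_size, "->", cfg.hidden_size)       # 3072 -> 896: the mm_projector mapping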

Model Inference


# Build chat -> render to string (not tokens) so we can place <image> exactly
messages = [
    {"role": "user", "content": "<image>\nDescribe this image in detail."}
]
rendered = tok.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
pre, post = rendered.split("<image>", 1)

# Tokenize the text *around* the image token (no extra specials!)
pre_ids  = tok(pre,  return_tensors="pt", add_special_tokens=False).input_ids
post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids

# Splice in the IMAGE token id (-200) at the placeholder position
img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
attention_mask = torch.ones_like(input_ids, device=model.device)

# Preprocess image via the model's own processor
img = Image.open("image.png").convert("RGB")
px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
px = px.to(model.device, dtype=model.dtype)

# Generate
with torch.no_grad():
    out = model.generate(
        inputs=input_ids,
        attention_mask=attention_mask,
        images=px,
        max_new_tokens=1024,
    )

print(tok.decode(out[0], skip_special_tokens=True))
### Image Description

The image is a photograph of handwritten notes. It is formatted in a columnar, portrait mode. The notes are written in a somewhat cursive and formal style with regular spacing between lines. The content of the notes is not followed by a specific topic or question, but rather appears to be a detailed narrative or reflection.

#### Breakdown of the Content:

1. **Title or Notation**:
   - The first line reads "Remind the both part of realistic history and interpret...". The precise context or terms suggest it might be a summary or introduction to a theoretical discussion, possibly related to historical real-world interpretations or an analytical piece on it.
2. **Paragraph Structure**:
   - The text proceeds sequentially down the page, which looks like a detailed narrative or argument. Each paragraph begins with a header, followed by an initial statement or heading.
3. **Content Analysis**:
   - **First Paragraph:**
     - There appears to be an initial statement emphasizing the comparison between realism, perhaps discussing historical periods such as "the dark" and the "internet buying and Internet buying of things". Parts of the heading might indicate a topic related to real-world analysis or comparison.
   - **Second Paragraph:**
     - The language becomes more descriptive, discussing the growth of "internet buying and Internet buying of things". Timeframes, statistical data, and percentages hint at a trend or progression being discussed, which indicates it could be a case study or comparative study.
   - **Third Paragraph:**
     - This part of the document mentions "four years," suggesting it is about a four-year period of observation or change within the context it refers to.
   - **Final Paragraph:**
     - It concludes with a concise conclusion or observation, indicating that the results of the previous analysis provided are valid or noteworthy.

### Knowledge Integration:

1. **Historical Realism**: Historically, realism is a philosophical approach that posits that we have all knowledge and the nature of reality. This perspective often frames history as an objective recounting of past events without subjective interpretation. Reputations and perceptions have naturally developed over time, often evolving in different ways due to various influences.
2. **Internet Buying of Things**: The term "internet buying of things" suggests a reference to purchasing trends using computer systems, which are pivotal in today's digital economy. The reference to "2019" could be indicating a specific year's perspective, possibly within a historical context for analysis.

### Chain of Thought:

Given the structured format and the reference to "four years," it is plausible that the notes might be part of an analytical and reflective discussion, perhaps comparing old historical realist perspectives of the same historical period with contemporary digital trends, such as internet buying practices.

This comprehensive description should enable a pure text model to effectively parse and answer questions related to the content or structure of the handwritten notes captured in the image.

---

### Analysis

The handwritten notes appear to be an analytical and reflective piece addressing historical realist interpretations and predictions in the context of online buying behaviors. The notes discuss the comparative development of historical realist views about historical periods and their evolution over time. They reference significant dates and percentages, likely from 2019. The notes conclude by noting that there is a direct comparison with current trends, specifically regarding "internet buying" as noted in the 2019 context. The narrative suggests a methodical approach, reflective of a theoretical or analytical examination of past and present trends, possibly using historical realist techniques to contextualize contemporary practices.

The text you provided can be directly converted into a markdown table for better clarity and readability:

| Column | Content |
|--------|---------|
| 1      | Remind the both part of realistic history and interpret... |
| 2      | A comparison between historical periods such as "the dark" and the "internet buying and Internet buying of things". |
| 3      | Timeframes of statistical data showing 2019. |
| 4      | An example of a year with an increase of 24.5%. |
| 5      | An increase of 233 with the year 2023 and a trend of 4303 of years. |
| 6      | An additional detail suggesting the possibility of an observer's friends change. |
| 7      | Likely a conclusion that the results of previous analysis of realist are valid. |

The markdown format simplifies the content and makes it formatted for further reading and
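
For interactive use it is nicer to stream tokens as they are generated instead of waiting for the full decode. A variation on the generate call above, assuming the model's remote code forwards standard generation kwargs (untested sketch):

from transformers import TextStreamer

streamer = TextStreamer(tok, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(
        inputs=input_ids,
        attention_mask=attention_mask,
        images=px,
        max_new_tokens=1024,
        streamer=streamer,  # tokens are printed to stdout as they are produced
    )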
