1 Overview
1.1 Summary
Step1X-Edit: a unified image-editing model that performs strongly across a wide range of real-world user instructions.
Step1X-Edit achieves performance comparable to closed-source models such as GPT-4o and Gemini2 Flash. More specifically, a multimodal LLM is used to process the reference image and the user's editing instruction. The extracted latent embedding is combined with a diffusion image decoder to produce the target image. To train the model, a data-generation pipeline was built to produce a high-quality dataset. For evaluation, GEdit-Bench was developed, a novel benchmark grounded in real-world user instructions. Experimental results on GEdit-Bench show that Step1X-Edit substantially outperforms existing open-source baselines and approaches the performance of leading proprietary models, a significant contribution to the field of image editing. More technical details are available at https://arxiv.org/abs/2504.17761.
Model link: https://modelers.cn/models/StepFun/Step1X-Edit-npu
2 Environment Setup
2.1 Obtain the CANN Packages & Prepare the Environment
Supported versions:
Package | Version |
---|---|
CANN | 8.0.0 |
PTA | 6.0.0 |
HDK | 24.1.0 |
PyTorch | 2.3.1 |
Python | 3.11 |
2.2 PyTorch & CANN Installation
- PyTorch & Ascend Extension for PyTorch installation guide: https://www.hiascend.com/document/detail/zh/Pytorch/600/configandinstg/instg/insg_0001.html
The following commands install PyTorch 2.3.1 and PTA plugin 6.0.0 for Python 3.11 on an AArch64 system with CANN 8.0.0:
```bash
# Download the PyTorch wheel
wget https://download.pytorch.org/whl/cpu/torch-2.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
# Download the torch_npu plugin wheel
wget https://gitee.com/ascend/pytorch/releases/download/v6.0.0-pytorch2.3.1/torch_npu-2.3.1.post4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
# Install
pip3 install torch-2.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip3 install torch_npu-2.3.1.post4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
```
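A quick version check of the wheel (a minimal sketch; note that torch_npu itself will only import after the CANN packages below are installed and set_env.sh has been sourced):

```python
import platform
import torch

# Both should match the wheels downloaded above: 2.3.1 on aarch64.
print(torch.__version__)
print(platform.machine())
```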
- Package download: Atlas 800I A2
- CANN package installation
The following .run packages from the CANN distribution need to be installed:
```bash
# Make the packages executable. {version} is the package version, {arch} the CPU
# architecture, and {soc} the Ascend AI processor variant.
chmod +x ./Ascend-cann-toolkit_{version}_linux-{arch}.run
chmod +x ./Ascend-cann-kernels-{soc}_{version}_linux.run
# Verify the consistency and integrity of the packages
./Ascend-cann-toolkit_{version}_linux-{arch}.run --check
./Ascend-cann-kernels-{soc}_{version}_linux.run --check
# Install
./Ascend-cann-toolkit_{version}_linux-{arch}.run --install
./Ascend-cann-kernels-{soc}_{version}_linux.run --install
# Set the environment variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
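With set_env.sh sourced, the whole stack can be smoke-tested from Python; a minimal sketch, assuming a default installation under /usr/local/Ascend:

```python
import torch
import torch_npu  # needs the CANN toolkit and kernel packages installed above

# True only if the driver, CANN runtime, and torch_npu plugin all load.
print(torch.npu.is_available())

# Run a tiny computation on the first NPU to confirm the device works end to end.
x = torch.randn(2, 3).npu()
print((x @ x.T).cpu())
```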
2.3 Dependency Installation
Because Triton's inductor backend is not yet fully supported on NPU, comment out the liger_kernel entry in requirements.txt:

```
liger_kernel -> # liger_kernel
```

Then install the dependencies:

```bash
pip install -r requirements.txt
```

Note: the NPU has its own flash_attn operator implementation, so the flash_attn package does not need to be installed.
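As a quick post-install check (a sketch; it assumes transformers is among the project's dependencies, which the FAQ tracebacks below confirm), verify that the two GPU-only packages stay absent while the rest of the environment imports cleanly:

```python
import importlib.util

# liger_kernel was commented out of requirements.txt and must stay absent.
assert importlib.util.find_spec("liger_kernel") is None
# flash_attn is unnecessary: the NPU provides its own fused-attention operators.
assert importlib.util.find_spec("flash_attn") is None

import transformers  # a core dependency that must still import without them
print(transformers.__version__)
```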
3 Model Download
- Hugging Face

Model | Link |
---|---|
Step1X-Edit | 🤗huggingface |

- Modelers community

Model | Link |
---|---|
Step1X-Edit | modelers |
4 Run Inference
- Get the Step1X-Edit source code:

```bash
git clone https://github.com/stepfun-ai/Step1X-Edit.git
```

- In scripts/run_examples.sh, set the model_path parameter to the directory the model was downloaded to.
- Run inference with:

```bash
bash scripts/run_examples.sh
```

On success, two directories are created under the current directory, output_cn and output_en, corresponding to the two prompt sets (Chinese and English) under the examples directory. The example prompts are:
Prompt (Chinese): 给这个女生的脖子上戴一个带有红宝石的吊坠 (put a ruby pendant around this girl's neck)
Prompt (English): Change the outerwear to be made of top-grain calfskin.
5 FAQ
5.1 Problem 1: rms_norm
```
Traceback (most recent call last):
  File "/home/Step1X-Edit/Step1X-Edit/inference.py", line 23, in <module>
    from modules.model_edit import Step1XParams, Step1XEdit
  File "/home/Step1X-Edit/Step1X-Edit/modules/model_edit.py", line 8, in <module>
    from .connector_edit import Qwen2Connector
  File "/home/Step1X-Edit/Step1X-Edit/modules/connector_edit.py", line 8, in <module>
    from .layers import MLP, TextProjection, TimestepEmbedder, apply_gate, attention
  File "/home/Step1X-Edit/Step1X-Edit/modules/layers.py", line 27, in <module>
    from liger_kernel.ops.rms_norm import LigerRMSNormFunction
ModuleNotFoundError: No module named 'liger_kernel'
```
Solution:
Replace liger_kernel with the NPU version of the operator, npu_rms_norm.
Reference: "RmsNorm & RmsNormGrad" under fused-operator replacement (NPU-affinity optimization, performance tuning) in the Ascend Extension for PyTorch 6.0.0 documentation on the Ascend community site.
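A minimal sketch of the replacement in modules/layers.py (only the operator call is shown; the surrounding module structure is assumed). torch_npu.npu_rms_norm returns a tuple whose first element is the normalized output:

```python
import torch
import torch_npu

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Drop-in stand-in for LigerRMSNormFunction on NPU: npu_rms_norm returns
    # (normalized_output, reciprocal_standard_deviation); keep element 0.
    return torch_npu.npu_rms_norm(x, weight, epsilon=eps)[0]
```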
5.2 Problem 2: liger_kernel
File "/usr/local/lib/python3.11/site-packages/transformers/configuration_utils.py", line 594, in get_config_dictconfig_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/usr/local/lib/python3.11/site-packages/transformers/configuration_utils.py", line 653, in _get_config_dictresolved_config_file = cached_file(^^^^^^^^^^^^File "/usr/local/lib/python3.11/site-packages/transformers/utils/hub.py", line 385, in cached_fileraise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Qwen/Qwen2.5-VL-7B-Instruct is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'
ModuleNotFoundError: No module named 'liger_kernel'
Solution:
liger_kernel is a Triton-based module that is not currently implemented on NPU; remove it and use the NPU's aclNN operators directly. The accompanying OSError means Qwen/Qwen2.5-VL-7B-Instruct could not be fetched from huggingface.co; point the loader at a local copy of the model or enable offline mode, as sketched below.
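A sketch of loading from disk with offline mode enabled (local_qwen_path is a placeholder for wherever Qwen2.5-VL-7B-Instruct was downloaded):

```python
import os

# Prevent transformers from trying to reach huggingface.co at all.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoConfig

# Placeholder path: point this at the local copy of Qwen/Qwen2.5-VL-7B-Instruct.
local_qwen_path = "/home/Step1X-Edit/weight/Qwen2.5-VL-7B-Instruct"
print(AutoConfig.from_pretrained(local_qwen_path).model_type)
```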
5.3 Problem 3: flash_attn
File "/usr/local/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4179, in from_pretrainedconfig = cls._autoset_attn_implementation(^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/usr/local/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1575, in _autoset_attn_implementationcls._check_and_enable_flash_attn_2(File "/usr/local/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1710, in _check_and_enable_flash_attn_2raise ImportError(f"{preface} the package flash_attn seems to be not installed. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
[ERROR] 2025-04-25-14:53:48 (PID:7549, Device:0, RankID:-1) ERR99999 UNKNOWN application exception
+ python inference.py --input_dir ./examples --model_path /home/Step1X-Edit/weight/ --json_path ./examples/prompt_cn.json --output_dir ./output_cn --seed 1234 --size_level 1024
Solution:
flash_attn is a GPU-specific package with no build available for NPU. Change the attn_implementation argument used when the model is loaded:

```diff
 self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
     model_path,
     torch_dtype=dtype,
-    attn_implementation="flash_attention_2",
+    attn_implementation="eager",
 ).to(torch.cuda.current_device())
```

That is, switch to attn_implementation="eager".
5.4 Problem 4: torch.compile
File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/graph.py", line 1307, in compile_to_fnreturn self.compile_to_module().call^^^^^^^^^^^^^^^^^^^^^^^^File "/usr/local/lib64/python3.11/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapperr = func(*args, **kwargs)^^^^^^^^^^^^^^^^^^^^^File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/graph.py", line 1250, in compile_to_moduleself.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()^^^^^^^^^^^^^^File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/graph.py", line 1203, in codegenself.init_wrapper_code()File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/graph.py", line 1134, in init_wrapper_codewrapper_code_gen_cls is not None
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: Device npu not supportedSet TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
Solution:
Remove the torch.compile calls; the inductor backend does not currently support NPU. A possible guard is sketched below.
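Instead of deleting every call site, one option is a guard that only compiles where inductor has a backend. maybe_compile is a hypothetical helper, not part of the Step1X-Edit code base, and it assumes torch_npu has already been imported when running on NPU:

```python
import torch

def maybe_compile(module: torch.nn.Module) -> torch.nn.Module:
    # Hypothetical helper: inductor has no NPU code generator today, so run
    # eagerly on NPU and keep torch.compile for backends that support it.
    # torch.npu only exists once torch_npu has been imported elsewhere.
    if hasattr(torch, "npu") and torch.npu.is_available():
        return module
    return torch.compile(module)
```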
5.5 Problem 5: flash_attn_func
File "/usr/local/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_implreturn self._call_impl(*args, **kwargs)^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/usr/local/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_implreturn forward_call(*args, **kwargs)^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/home/Step1X-Edit/Step1X-Edit/modules/layers.py", line 557, in forwardattn = attention_after_rope(q, k, v, pe=pe)^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/home/Step1X-Edit/Step1X-Edit/modules/layers.py", line 370, in attention_after_ropex = attention(q, k, v, mode="flash")^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^File "/home/Step1X-Edit/Step1X-Edit/modules/attention.py", line 82, in attentionassert flash_attn_func is not None, "flash_attn_func未定义"^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: flash_attn_func未定义
[ERROR] 2025-04-25-15:20:18 (PID:20394, Device:0, RankID:-1) ERR99999 UNKNOWN application exception
Answer: switch the attention helper from mode="flash" to mode="torch", which goes through the aclNN FlashAttention instead; see the sketch below.
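The change is a single line at the call site shown in the traceback (modules/layers.py, attention_after_rope); a sketch:

```python
# modules/layers.py, attention_after_rope:
# "torch" routes through PyTorch's SDPA path, which torch_npu dispatches to the
# aclNN FlashAttention kernel, instead of the missing flash_attn_func.
x = attention(q, k, v, mode="torch")
```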
5.6 Problem 6: Accuracy issue
The output images are completely garbled and look like mosaic noise.
Solution:
The accuracy problem is caused by torch.nn.functional.scaled_dot_product_attention; switch to the torch_npu.npu_fusion_attention interface and adjust the call's parameters accordingly.
Reference: "FlashAttentionScore" under fused-operator replacement (NPU-affinity optimization, performance tuning) in the Ascend Extension for PyTorch 6.0.0 documentation on the Ascend community site.
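A minimal sketch of the substitution (the BNSD layout and the helper's signature are assumptions; adjust input_layout and scaling to match the model's actual q/k/v shapes):

```python
import math
import torch
import torch_npu

def npu_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  num_heads: int) -> torch.Tensor:
    # q, k, v assumed to be [batch, num_heads, seq_len, head_dim] ("BNSD").
    head_dim = q.shape[-1]
    out = torch_npu.npu_fusion_attention(
        q, k, v, num_heads,
        input_layout="BNSD",
        scale=1.0 / math.sqrt(head_dim),  # same softmax scaling SDPA applies
    )[0]  # first element of the returned tuple is the attention output
    return out
```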