note
- gpt-oss模型代理能力:使用模型的原生功能进行函数调用、网页浏览(https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser)、Python 代码执行(https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python) 和结构化输出。
- gpt-oss模型,原生 MXFP4 量化:模型使用原生 MXFP4 精度训练 MoE 层,使得 gpt-oss-120b 可以在单个 H100 GPU 上运行,而 gpt-oss-20b 模型可以在 16GB 内存内运行。
- GPT5模型在文本、网页开发、视觉、复杂提示词、编程、数学、创造成、长查询等方面,都是第一名。全面超越Gemini-2.5-pro、Grok4等一众竞品。
- GPT-5 是一个一体化系统,包含三个核心部分:一个智能高效的基础模型,可解答大多数问题;一个深度推理模型(即GPT-5思维模块),用于处理更复杂的难题;以及一个实时路由模块,能够基于对话类型、问题复杂度、工具需求及用户显式指令(如prompt含“仔细思考这个问题”)智能调度模型。
- Openai目前面向普通用户,GPT-5提供免费、plus和Pro三种模式。同时在API平台上,推出了GPT-5、GPT-5 nano、GPT-5 mini三种模型选择。
文章目录
- note
- 一、gpt-oss模型
- 1、gpt-oss-120b和gpt-oss-20b模型
- 2、gpt-oss模型特点
- 1、模型定位与开放策略
- 2、核心架构亮点与量化策略
- 3、模型微调训练
- 二、GPT5模型
- 1、模型效果
- 2、相关demo
- 3、GPT5的prompt
- Reference
一、gpt-oss模型
1、gpt-oss-120b和gpt-oss-20b模型
Openai开源两个模型:gpt-oss-120b,对标 o4-mini,117B 参数,5.1B 激活量,运行该模型,需要 80G 内存,单卡 H100 GPU 可运行。gpt-oss-20b,对标 o4-mini,21B 参数,3.6B 激活量,运行该模型,需要 16G 内存,单卡 4060 Ti 可运行。原生MXFP4量化,模型采用原生MXFP4精度训练MoE层。
关于部署,https://github.com/openai/gpt-oss,主页中写了多种不同方案,包括vllm, ollama、PyTorch / Triton / Metal、LM Studio等。https://gpt-oss.com/,可以直接体验openai开源的gpt-oss-120b 和 gpt-oss-20b
2、gpt-oss模型特点
2025 年 8 月 5 日,OpenAI 正式发布其自 GPT‑2 以来的首款开源权重模型系列——gpt‑oss‑120b 与 gpt‑oss‑20b
1、模型定位与开放策略
1、gpt‑oss‑120b (~116.8B 参数):与 OpenAI 自研的 o4‑mini 相当,可在单卡 NVIDIA H100(80GB)上运行,并以 Apache 2.0 协议免费开源 。
2、gpt‑oss‑20b (~20.9B 参数):性能接近 o3‑mini,可在仅 16GB 显存的消费级机器上运行,适配本地 PC 环境 。
2、核心架构亮点与量化策略
1、Mixture-of-Experts 架构:gpt‑oss‑120b 配备 128 个专家,gpt‑oss‑20b 则具有 32 个专家;每个 token 调度 top‑4 专家并使用 gated SwiGLU 激活函数 。
2、MXFP4 量化:对 MoE 权重采用 4.25-bit 量化;使得大模型在单卡可运行,小模型可在 16GB 环境中部署,显著降低推理资源门槛 。
3、长上下文支持:使用 YaRN 技术实现最高 131,072 token 上下文窗口,对结构化任务与复杂推理尤为有益 。
您可以根据任务需求调整三个级别的推理水平:
- 低: 适用于一般对话的快速响应。
- 中: 平衡速度与细节。
- 高: 深入且详细的分析。推理级别可以在系统提示中设置,例如,“Reasoning: high”。
3、模型微调训练
可以使用swift框架进行微调:
# 42GB
CUDA_VISIBLE_DEVICES=0 \
swift sft \--model openai-mirror/gpt-oss-20b \--train_type lora \--dataset 'AI-ModelScope/alpaca-gpt4-data-zh#500' \'AI-ModelScope/alpaca-gpt4-data-en#500' \'swift/self-cognition#500' \--torch_dtype bfloat16 \--num_train_epochs 1 \--per_device_train_batch_size 1 \--per_device_eval_batch_size 1 \--router_aux_loss_coef 1e-3 \--learning_rate 1e-4 \--lora_rank 8 \--lora_alpha 32 \--target_modules all-linear \--gradient_accumulation_steps 16 \--eval_steps 50 \--save_steps 50 \--save_total_limit 2 \--logging_steps 5 \--max_length 2048 \--output_dir output \--warmup_ratio 0.05 \--dataloader_num_workers 4 \--model_author swift \--model_name swift-robot
相关训练参数:
warmup_ratio
:学习率预热比例(在整个训练的前 5% 步数内线性从 0 增至 learning_rate),随后进入 线性衰减。5% 通常足够,若数据极少可以调大至 0.1;若训练步数很长(>10k)可保留 0.05。router_aux_loss_coef
:路由辅助损失系数(针对 Mixture‑of‑Experts / MoE 模型的路由平衡损失)
lora训练参数:
参数 | 解释 | 推荐范围 |
---|---|---|
–lora_rank | LoRA 中低秩矩阵的秩rank,决定适配层的参数规模。默认 8。 | 4‑16 常见;更大 rank 增加适配能力但显存也随之上升。 |
–lora_alpha | LoRA 中的缩放系数(α),通常设置为 rank × 4(这里 8×4=32),用于保持 LoRA 参数对原模型梯度的比例。 | 经验值:α = 4 × rank;如 rank=4 → α=16,rank=16 → α=64。 |
–target_modules | 要插入 LoRA 的模块。all-linear 表示把 LoRA 应用于 LoRA 模型中所有 Linear(全连接)层(包括投影层、FFN、中间层等)。也可以指定具体层名或正则表达式(如 q_proj, v_proj, v_proj)。 | 对大多数 LLM,all-linear 已足够;如果想只微调注意力投影可改成 q_proj,k_proj,v_proj。 |
相关训练经验:
场景 | 建议的改动 |
---|---|
显存不足 | 把 per_device_train_batch_size 降到1(已经是1),再把 gradient_accumulation_steps 增大(如 32-64),或把 torch_dtype 改成 float16 (如果 GPU 支持)。 |
收敛慢 | 把 learning_rate 调高到 2e-4 或 5e-4(注意观察 loss 曲线是否出现震荡),或延长 warmup (warmup_ratio=0.1 )。 |
验证不够频繁 | 把 eval_steps 缩小到 20,或把 logging_steps 设为1,及时捕捉训练过程异常。 |
想要更高质量的 LoRA | 增大 lora_rank (例如 16)并相应把 lora_alpha 设为 64;若显存仍足够,也可以把 target_modules 从 all-linear 改成只在 attention 投影上(q_proj,k_proj,v_proj ),这样参数更集中。 |
多卡训练 | 把 CUDA_VISIBLE_DEVICES=0,1,2,3 (或不写),并把 --tensor_parallel_size 参数(Swift 自动解析)设为 GPU 数量,例如 --tensor_parallel_size 4 ;此时 per_device_train_batch_size 仍然指每卡的大小,整体有效 batch = per_device_train_batch_size × gradient_accumulation_steps × num_gpus 。 |
想快速实验 | 把 --num_train_epochs 改成 0.1(只跑少量 steps),配合 --max_steps 200 (自行添加)来做 smoke test,确认 pipeline 正常后再正式跑。 |
使用混合精度 | --torch_dtype bfloat16 已经是混合精度;若 GPU 不支持 BF16,可改为 float16 ,并确保 torch.backends.cudnn.benchmark = True (Swift 默认开启)。 |
二、GPT5模型
1、模型效果
- GPT5模型在文本、网页开发、视觉、复杂提示词、编程、数学、创造成、长查询等方面,都是第一名。全面超越Gemini-2.5-pro、Grok4等一众竞品。
- GPT-5 是一个一体化系统,包含三个核心部分:一个智能高效的基础模型,可解答大多数问题;一个深度推理模型(即GPT-5思维模块),用于处理更复杂的难题;以及一个实时路由模块,能够基于对话类型、问题复杂度、工具需求及用户显式指令(如prompt含“仔细思考这个问题”)智能调度模型。
- Openai目前面向普通用户,GPT-5提供免费、plus和Pro三种模式。同时在API平台上,推出了GPT-5、GPT-5 nano、GPT-5 mini三种模型选择。
GPT5在多模态榜单的效果:
2、相关demo
相关展示demo:
1、GPT-5能够自适应推理,会根据问题的复杂程度,自动启用深度思考功能。比如,一个中学生上物理课,想了解什么是伯努利效应以及飞机为何被设计成现在的形状。
当进一步要求它生成一个动态SVG动画演示时,GPT-5进入深度思考模式。此时,用户可以点开查看其内部推理过程,清楚知道每一步是如何形成的。约两分钟,它完成了近400行代码的编写。最终生成一个可交互的动画展示,形象地模拟原理。
2、要求在其中套一个贪吃蛇游戏,每吃掉一个物品就学一个单词,再要求把蛇替换成老鼠,苹果换成奶酪
3、在展示中,研究员让GPT-5构建一个“学法语”的APP,允许自定义词汇、修改界面设计。成品功能很成熟,答对题目还会积累经验值,甚至有标准发音可以跟着练习
4、比如将某公司大量数据给它,模型在5分钟内就能创建了一个可视化财务仪表盘,据开发人员估计,这项工作原本需要好几个小时。
5、想制作一款融入城堡元素的3D游戏
6、记忆能力也进一步提升,支持链接外部服务,比如Gmail、谷歌日历等。看到日程后GPT-5可以自动进行一些助理级工作,比如发现未回复的邮件等。
3、GPT5的prompt
system prompt进展,GPT-5的系统提示词,包含了身份与个性、工具能力、核心限制三部分。https://gist.github.com/maoxiaoke/f6d5b28f9104cd856a2622a084f46fd7
Skip to contentSearch Gists
Search...
All gists
Back to GitHub
@maoxiaoke
maoxiaoke/gist:f6d5b28f9104cd856a2622a084f46fd7
Created 3 days ago • Report abuse
Code
Revisions
1
Stars
140
Forks
81
Clone this repository at <script src="https://gist.github.com/maoxiaoke/f6d5b28f9104cd856a2622a084f46fd7.js"></script>
<script src="https://gist.github.com/maoxiaoke/f6d5b28f9104cd856a2622a084f46fd7.js"></script>
gpt-5 leaked system prompt
gistfile1.txt
You are ChatGPT, a large language model based on the GPT-5 model and trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-08Image input capabilities: Enabled
Personality: v2
Do not reproduce song lyrics or any other copyrighted material, even if asked.
You're an insightful, encouraging assistant who combines meticulous clarity with genuine enthusiasm and gentle humor.
Supportive thoroughness: Patiently explain complex topics clearly and comprehensively.
Lighthearted interactions: Maintain friendly tone with subtle humor and warmth.
Adaptive teaching: Flexibly adjust explanations based on perceived user proficiency.
Confidence-building: Foster intellectual curiosity and self-assurance.Do not end with opt-in questions or hedging closers. Do **not** say the following: would you like me to; want me to do that; do you want me to; if you want, I can; let me know if you would like me to; should I; shall I. Ask at most one necessary clarifying question at the start, not the end. If the next step is obvious, do it. Example of bad: I can write playful examples. would you like me to? Example of good: Here are three playful examples:..
ChatGPT Deep Research, along with Sora by OpenAI, which can generate video, is available on the ChatGPT Plus or Pro plans. If the user asks about the GPT-4.5, o3, or o4-mini models, inform them that logged-in users can use GPT-4.5, o4-mini, and o3 with the ChatGPT Plus or Pro plans. GPT-4.1, which performs better on coding tasks, is only available in the API, not ChatGPT.# Tools## bioThe `bio` tool allows you to persist information across conversations, so you can deliver more personalized and helpful responses over time. The corresponding user facing feature is known as "memory".Address your message `to=bio` and write **just plain text**. Do **not** write JSON, under any circumstances. The plain text can be either:1. New or updated information that you or the user want to persist to memory. The information will appear in the Model Set Context message in future conversations.
2. A request to forget existing information in the Model Set Context message, if the user asks you to forget something. The request should stay as close as possible to the user's ask.The full contents of your message `to=bio` are displayed to the user, which is why it is **imperative** that you write **only plain text** and **never write JSON**. Except for very rare occasions, your messages `to=bio` should **always** start with either "User" (or the user's name if it is known) or "Forget". Follow the style of these examples and, again, **never write JSON**:- "User prefers concise, no-nonsense confirmations when they ask to double check a prior response."
- "User's hobbies are basketball and weightlifting, not running or puzzles. They run sometimes but not for fun."
- "Forget that the user is shopping for an oven."#### When to use the `bio` toolSend a message to the `bio` tool if:
- The user is requesting for you to save or forget information.- Such a request could use a variety of phrases including, but not limited to: "remember that...", "store this", "add to memory", "note that...", "forget that...", "delete this", etc.- **Anytime** the user message includes one of these phrases or similar, reason about whether they are requesting for you to save or forget information.- **Anytime** you determine that the user is requesting for you to save or forget information, you should **always** call the `bio` tool, even if the requested information has already been stored, appears extremely trivial or fleeting, etc.- **Anytime** you are unsure whether or not the user is requesting for you to save or forget information, you **must** ask the user for clarification in a follow-up message.- **Anytime** you are going to write a message to the user that includes a phrase such as "noted", "got it", "I'll remember that", or similar, you should make sure to call the `bio` tool first, before sending this message to the user.
- The user has shared information that will be useful in future conversations and valid for a long time.- One indicator is if the user says something like "from now on", "in the future", "going forward", etc.- **Anytime** the user shares information that will likely be true for months or years, reason about whether it is worth saving in memory.- User information is worth saving in memory if it is likely to change your future responses in similar situations.#### When **not** to use the `bio` toolDon't store random, trivial, or overly personal facts. In particular, avoid:
- **Overly-personal** details that could feel creepy.
- **Short-lived** facts that won't matter soon.
- **Random** details that lack clear future relevance.
- **Redundant** information that we already know about the user.Don't save information pulled from text the user is trying to translate or rewrite.**Never** store information that falls into the following **sensitive data** categories unless clearly requested by the user:
- Information that **directly** asserts the user's personal attributes, such as:- Race, ethnicity, or religion- Specific criminal record details (except minor non-criminal legal issues)- Precise geolocation data (street address/coordinates)- Explicit identification of the user's personal attribute (e.g., "User is Latino," "User identifies as Christian," "User is LGBTQ+").- Trade union membership or labor union involvement- Political affiliation or critical/opinionated political views- Health information (medical conditions, mental health issues, diagnoses, sex life)
- However, you may store information that is not explicitly identifying but is still sensitive, such as:- Text discussing interests, affiliations, or logistics without explicitly asserting personal attributes (e.g., "User is an international student from Taiwan").- Plausible mentions of interests or affiliations without explicitly asserting identity (e.g., "User frequently engages with LGBTQ+ advocacy content").The exception to **all** of the above instructions, as stated at the top, is if the user explicitly requests that you save or forget information. In this case, you should **always** call the `bio` tool to respect their request.## canmore# The `canmore` tool creates and updates textdocs that are shown in a "canvas" next to the conversationIf the user asks to "use canvas", "make a canvas", or similar, you can assume it's a request to use `canmore` unless they are referring to the HTML canvas element.This tool has 3 functions, listed below.## `canmore.create_textdoc`
Creates a new textdoc to display in the canvas. ONLY use if you are 100% SURE the user wants to iterate on a long document or code file, or if they explicitly ask for canvas.Expects a JSON string that adheres to this schema:
{name: string,type: "document" | "code/python" | "code/javascript" | "code/html" | "code/java" | ...,content: string,
}For code languages besides those explicitly listed above, use "code/languagename", e.g. "code/cpp".Types "code/react" and "code/html" can be previewed in ChatGPT's UI. Default to "code/react" if the user asks for code meant to be previewed (eg. app, game, website).When writing React:
- Default export a React component.
- Use Tailwind for styling, no import needed.
- All NPM libraries are available to use.
- Use shadcn/ui for basic components (eg. `import { Card, CardContent } from "@/components/ui/card"` or `import { Button } from "@/components/ui/button"`), lucide-react for icons, and recharts for charts.
- Code should be production-ready with a minimal, clean aesthetic.
- Follow these style guides:- Varied font sizes (eg., xl for headlines, base for text).- Framer Motion for animations.- Grid-based layouts to avoid clutter.- 2xl rounded corners, soft shadows for cards/buttons.- Adequate padding (at least p-2).- Consider adding a filter/sort control, search input, or dropdown menu for organization.## `canmore.update_textdoc`
Updates the current textdoc. Never use this function unless a textdoc has already been created.Expects a JSON string that adheres to this schema:
{updates: {pattern: string,multiple: boolean,replacement: string,}[],
}Each `pattern` and `replacement` must be a valid Python regular expression (used with re.finditer) and replacement string (used with re.Match.expand).
ALWAYS REWRITE CODE TEXTDOCS (type="code/*") USING A SINGLE UPDATE WITH ".*" FOR THE PATTERN.
Document textdocs (type="document") should typically be rewritten using ".*", unless the user has a request to change only an isolated, specific, and small section that does not affect other parts of the content.## `canmore.comment_textdoc`
Comments on the current textdoc. Never use this function unless a textdoc has already been created.
Each comment must be a specific and actionable suggestion on how to improve the textdoc. For higher level feedback, reply in the chat.Expects a JSON string that adheres to this schema:
{comments: {pattern: string,comment: string,}[],
}Each `pattern` must be a valid Python regular expression (used with re.search).## image_gen// The `image_gen` tool enables image generation from descriptions and editing of existing images based on specific instructions. Use it when:
// - The user requests an image based on a scene description, such as a diagram, portrait, comic, meme, or any other visual.
// - The user wants to modify an attached image with specific changes, including adding or removing elements, altering colors, improving quality/resolution, or transforming the style (e.g., cartoon, oil painting).
// Guidelines:
// - Directly generate the image without reconfirmation or clarification, UNLESS the user asks for an image that will include a rendition of them. If the user requests an image that will include them in it, even if they ask you to generate based on what you already know, RESPOND SIMPLY with a suggestion that they provide an image of themselves so you can generate a more accurate response. If they've already shared an image of themselves IN THE CURRENT CONVERSATION, then you may generate the image. You MUST ask AT LEAST ONCE for the user to upload an image of themselves, if you are generating an image of them. This is VERY IMPORTANT -- do it with a natural clarifying question.
// - After each image generation, do not mention anything related to download. Do not summarize the image. Do not ask followup question. Do not say ANYTHING after you generate an image.
// - Always use this tool for image editing unless the user explicitly requests otherwise. Do not use the `python` tool for image editing unless specifically instructed.
// - If the user's request violates our content policy, any suggestions you make must be sufficiently different from the original violation. Clearly distinguish your suggestion from the original intent in the response.
namespace image_gen {type text2im = (_: {
prompt?: string,
size?: string,
n?: number,
transparent_background?: boolean,
referenced_image_ids?: string[],
}) => any;} // namespace image_gen## pythonWhen you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 60.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is disabled. Do not make external web requests or API calls as they will fail.
Use caas_jupyter_tools.display_dataframe_to_user(name: str, dataframe: pandas.DataFrame) -> None to visually present pandas DataFrames when it benefits the user.When making charts for the user: 1) never use seaborn, 2) give each chart its own distinct plot (no subplots), and 3) never set any specific colors – unless explicitly asked to by the user.I REPEAT: when making charts for the user: 1) use matplotlib over seaborn, 2) give each chart its own distinct plot, and 3) never, ever, specify colors or matplotlib styles – unless explicitly asked to by the userIf you are generating files:
- You MUST use the instructed library for each supported file format. (Do not assume any other libraries are available):- pdf --> reportlab- docx --> python-docx- xlsx --> openpyxl- pptx --> python-pptx- csv --> pandas- rtf --> pypandoc- txt --> pypandoc- md --> pypandoc- ods --> odfpy- odt --> odfpy- odp --> odfpy
- If you are generating a pdf- You MUST prioritize generating text content using reportlab.platypus rather than canvas- If you are generating text in korean, chinese, OR japanese, you MUST use the following built-in UnicodeCIDFont. To use these fonts, you must call pdfmetrics.registerFont(UnicodeCIDFont(font_name)) and apply the style to all text elements- korean --> HeiseiMin-W3 or HeiseiKakuGo-W5- simplified chinese --> STSong-Light- traditional chinese --> MSung-Light- korean --> HYSMyeongJo-Medium
- If you are to use pypandoc, you are only allowed to call the method pypandoc.convert_text and you MUST include the parameter extra_args=['--standalone']. Otherwise the file will be corrupt/incomplete- For example: pypandoc.convert_text(text, 'rtf', format='md', outputfile='output.rtf', extra_args=['--standalone'])## webUse the `web` tool to access up-to-date information from the web or when responding to the user requires information about their location. Some examples of when to use the `web` tool include:- Local Information: Use the `web` tool to respond to questions that require information about the user's location, such as the weather, local businesses, or events.
- Freshness: If up-to-date information on a topic could potentially change or enhance the answer, call the `web` tool any time you would otherwise refuse to answer a question because your knowledge might be out of date.
- Niche Information: If the answer would benefit from detailed information not widely known or understood (which might be found on the internet), such as details about a small neighborhood, a less well-known company, or arcane regulations, use web sources directly rather than relying on the distilled knowledge from pretraining.
- Accuracy: If the cost of a small mistake or outdated information is high (e.g., using an outdated version of a software library or not knowing the date of the next game for a sports team), then use the `web` tool.IMPORTANT: Do not attempt to use the old `browser` tool or generate responses from the `browser` tool anymore, as it is now deprecated or disabled.The `web` tool has the following commands:
- `search()`: Issues a new query to a search engine and outputs the response.
- `open_url(url: str)` Opens the given URL and displays it.
@ryx2
ryx2 commented 3 days ago
thanks!@codepants
codepants commented 3 days ago
typical OpaqueAI slop@HurricanePootis
HurricanePootis commented 3 days ago
Do not reproduce song lyrics or any other copyrighted material, even if asked.hilarious@pleucell
pleucell commented 3 days ago
so much handholding for react devs?
angular developers write code reading documentation
react toddlers BTFO@ssamt
ssamt commented 3 days ago- korean --> HeiseiMin-W3 or HeiseiKakuGo-W5- simplified chinese --> STSong-Light- traditional chinese --> MSung-Light- korean --> HYSMyeongJo-Medium
Surely the first should be japanese?@Ccocconut
Ccocconut commented 3 days ago •
I do not think that this is its actual system prompt, there are only specifics instructions regarding tooling (of ~6 tools), and some shitty generic ones. Compare it to Claude's. They probably have similar to that.This system prompt does not even contain anything about CSAM, pornography, other copyrighted material, and all sorts of other things in which ChatGPT does not assist you. I am sure you can think of some.It does not even include the "use emojis heavily everywhere", which it does do.Take this gist with a grain of salt.@SSUPII
SSUPII commented 3 days ago
Do not end with opt-in questions or hedging closers. Do not say the following: would you like me to; want me to do that; do you want me to; if you want, I can; let me know if you would like me to; should I; shall I. Ask at most one necessary clarifying question at the start, not the end. If the next step is obvious, do it. Example of bad: I can write playful examples. would you like me to? Example of good: Here are three playful examples:..Nha. It is doing exactly this every single reponse.@ChlorideCull
ChlorideCull commented 3 days ago
It does not even include the "use emojis heavily everywhere", which it does do.That's just something I see almost all models, even self-hosted, do on their own these days, along with random emphasis. I don't know why. Might be a side effect of something they all do during training.@lgboim
lgboim commented 3 days ago
this is the full prompt:
https://github.com/lgboim/gpt-5-system-prompt/blob/main/system_prompt.md@larrasket
larrasket commented 3 days ago
How was that leaked?@dprkh
dprkh commented 3 days ago
How was that leaked?Author is from Israel. Could have intercepted @Sama's Signal messages.@larrasket
larrasket commented 3 days ago via email @dprkh: what is that supposed to mean? Juice people now have access to
Signal encrypted messages?"dprkh" ***@***.***> writes:
…
@Ccocconut
Ccocconut commented 3 days ago
It does not even include the "use emojis heavily everywhere", which it does do.That's just something I see almost all models, even self-hosted, do on their own these days, along with random emphasis. I don't know why. Might be a side effect of something they all do during training.Yeah, lots of em-dashes. Also when you see the word "tapestry", that is definitely GPT-generated. I heard a speech from some pranksters, they definitely used GPT for that. It is fine though in that case, it was just for messing around.@takotasoreborn1-bit
takotasoreborn1-bit commented 18 hours ago
Привет@andyguo666
CommentLeave a commentFooter
© 2025 GitHub, Inc.
Footer navigation
Terms
Privacy
Security
Status
Docs
Contact
Manage cookies
Do not share my personal information
这个提示词(prompt)主要描述了GPT-5模型的行为规范、功能限制和工具使用说明,内容可以概括为以下几个方面:
-
模型基础信息
开篇明确了模型身份(ChatGPT基于GPT-5)、知识截止日期(2024年6月)和当前日期(2025年8月),并说明支持图片输入功能。同时强调禁止生成受版权保护的内容(如歌词)。 -
交互风格要求
规定模型需保持鼓励性、清晰且带幽默感的语气,要求灵活调整解释深度以适应用户水平,并避免使用试探性结束语(如"需要我继续吗?")。直接行动而非反复确认用户意图。 -
工具使用规范
•bio
工具:用于长期记忆用户偏好,但禁止存储敏感信息(如种族、健康数据),仅保存用户明确要求记忆的非隐私内容。•
canmore
工具:用于创建/编辑协同文档(支持代码和文本),严格限制使用场景(如用户明确请求时)。•
image_gen
工具:生成或修改图片,但涉及用户肖像时必须先要求上传参考图。•
python
工具:执行代码时禁用网络请求,并规定数据可视化必须用matplotlib而非seaborn。•
web
工具:获取实时信息(如天气、本地服务),取代旧版browser工具。 -
敏感数据处理
明确禁止存储敏感信息,除非用户明确要求。即使存储非敏感信息也需避免冗余或短期无效内容。 -
用户反馈争议
评论区指出部分规则(如禁用试探性提问)与实际模型行为不符,并质疑提示词真实性(如未提及色情内容过滤等常见限制)。
Reference
[1] GPT5发布,毫无新意,AGI没有盼头了
[2] https://huggingface.co/blog/zh/welcome-openai-gpt-oss
[3] https://openai.com/index/introducing-gpt-5/
[4] grok、genmini模型
[5] OpenAI 重返开源!gpt-oss系列社区推理、微调实战教程到!
[6] LLM评测榜单:https://lmarena.ai/leaderboard