[Study Notes] LangChain Basics (II)

Previous post: [Study Notes] LangChain Basics


Table of Contents

  • 8 [LangGraph] Implementing Building Effective Agents: workflows and the Agent
    • Augmented LLM
    • Prompt Chaining
    • Parallelization
    • Routing
    • Orchestrator-Worker
    • Evaluator-optimizer (Actor-Critic)
    • Agent


8 [LangGraph] Implementing Building Effective Agents: workflows and the Agent

  • video: https://www.bilibili.com/video/BV1ZsM2zcEFa

  • code: https://github.com/chunhuizhang/llm_aigc/blob/main/tutorials/agents/langchain/advanced/building_effective_agents.ipynb

  • https://www.anthropic.com/engineering/building-effective-agents

    • https://mirror-feeling-d80.notion.site/Workflow-And-Agents-17e808527b1780d792a0d934ce62bee6
      • https://langchain-ai.github.io/langgraph/tutorials/workflows/
      • https://www.youtube.com/watch?v=aHCDrAbH_go
  • AlphaEvolve: coding agent

    • the LLM acts as the core operator of a genetic algorithm;
    • suited to environments that can be evaluated automatically; plus a large amount of scaffolding-style work;
  • https://github.com/google-gemini/gemini-fullstack-langgraph-quickstart/tree/main

    • implements deep research as a workflow orchestrated with LangGraph
      • different nodes use different Gemini models
      • pro for depth, flash for breadth
  • SOTA LLMs (Gemini 2.5 Pro, OpenAI o3) plus a carefully designed workflow (scaffolding) can solve a great many very complex problems;

    • base models really are getting more and more powerful

[Figure: workflows vs. agents]

  • workflows
    • Create a scaffolding of predefined code paths around llm calls
    • LLMs direct control flow through predefined code paths
  • Agent: remove this scaffolding (the LLM directs its own actions and responds to feedback)

from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
assert load_dotenv()
llm = ChatOpenAI(model='gpt-4o-mini')

Augmented LLM

  • Augment the LLM with schema for structured output
  • Augment the LLM with tools

[Figure: the augmented LLM]

# Schema for structured output => output schema
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    search_query: str = Field(None, description="Query that is optimized for web search.")
    justification: str = Field(None, description="Why this query is relevant to the user's request.")

# Augment the LLM with schema for structured output
structured_llm = llm.with_structured_output(SearchQuery)

# Invoke the augmented LLM
output = structured_llm.invoke("How does Calcium CT score relate to high cholesterol?")
# SearchQuery(search_query='Calcium CT score high cholesterol relationship', justification='This query targets the relationship between calcium CT scores and cholesterol levels, which may help in understanding cardiovascular risk assessment.')

Running structured_llm.invoke("今年高考新闻") ("news about this year's gaokao") returns SearchQuery(search_query='2023年高考 新闻 相关报道', justification='搜索2023年高考的相关新闻,以获取该年度高考的最新动态、政策变化及新闻事件等信息,符合用户对今年高考的关注。').
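
Since with_structured_output was given a Pydantic class, the return value is a SearchQuery instance rather than a chat message, so its fields are plain attributes:

print(output.search_query)    # 'Calcium CT score high cholesterol relationship'
print(output.justification)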

import numpy as np

# Define tools
def multiply(a: float, b: float) -> float:
    return a * b

def sigmoid(a: float) -> float:
    return 1. / (1 + np.exp(-a))

# Augment the LLM with tools
llm_with_tools = llm.bind_tools([multiply, sigmoid])

# Invoke the LLM with input that triggers the tool call
msg = llm_with_tools.invoke("What is derivative of sigmoid(5)")
msg.tool_calls
"""
[{'name': 'sigmoid',
  'args': {'a': 5},
  'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y',
  'type': 'tool_call'}]
"""

The full msg object:

AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y', 'function': {'arguments': '{"a":5}', 'name': 'sigmoid'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 194, 'total_tokens': 208, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_34a54ae93c', 'id': 'chatcmpl-BiJpUWnwANEG7bDWV9FR6mXp5vHj1', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--ac1635ef-b4cd-4526-af25-f1ba23e22ed7-0', tool_calls=[{'name': 'sigmoid', 'args': {'a': 5}, 'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y', 'type': 'tool_call'}], usage_metadata={'input_tokens': 194, 'output_tokens': 14, 'total_tokens': 208, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})
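
Note that msg only records the requested tool call; nothing has executed yet. A minimal sketch of closing the loop by hand, reusing the two functions defined above (the Agent section at the end automates exactly this pattern):

from langchain_core.messages import HumanMessage, ToolMessage

# Map tool names back to the plain Python functions defined above.
tools_by_name = {"multiply": multiply, "sigmoid": sigmoid}

tool_call = msg.tool_calls[0]
observation = tools_by_name[tool_call["name"]](**tool_call["args"])  # sigmoid(5) ≈ 0.9933

# Hand the observation back as a ToolMessage so the model can finish the answer.
followup = llm_with_tools.invoke([
    HumanMessage(content="What is derivative of sigmoid(5)"),
    msg,
    ToolMessage(content=str(observation), tool_call_id=tool_call["id"]),
])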



Prompt Chaining

[Figure: prompt chaining workflow]

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

# Nodes
def generate_joke(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

def check_punchline(state: State):
    """Gate function to check if the joke has a punchline"""
    # Simple check - does the joke contain "?" or "!"
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

def improve_joke(state: State):
    """Second LLM call to improve the joke"""
    # wordplay: puns / double meanings
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}

def polish_joke(state: State):
    """Third LLM call for final polish"""
    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}

Then build the workflow:

# Build workflow
workflow = StateGraph(State)

# Add nodes
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

# Add edges to connect nodes
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)

# Compile
chain = workflow.compile()

Visualizing with Image(chain.get_graph().draw_mermaid_png()):

[Figure: compiled prompt-chaining graph]

state = chain.invoke({"topic": "cats"})
"""
{'topic': 'cats',
 'joke': 'Why did the cat sit on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}
"""

for step in chain.stream({"topic": "dogs"}):
    print(step)
# {'generate_joke': {'joke': "Why did the dog sit in the shade? \n\nBecause he didn't want to become a hot dog!"}}
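
The punctuation check in check_punchline is a deliberately crude gate. If you want the gate itself to be an LLM judgment, here is a sketch (my variation, not part of the tutorial) reusing the structured-output pattern from earlier:

# Hypothetical LLM-based gate replacing the "?"/"!" heuristic.
class PunchlineCheck(BaseModel):
    has_punchline: bool = Field(description="Whether the joke lands a punchline.")

punchline_grader = llm.with_structured_output(PunchlineCheck)

def check_punchline_llm(state: State):
    verdict = punchline_grader.invoke(f"Does this joke have a punchline?\n\n{state['joke']}")
    return "Pass" if verdict.has_punchline else "Fail"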

Parallelization

LangGraph can run parts of a workflow concurrently.

  • LLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically. This workflow, parallelization, manifests in two key variations:
    • Sectioning: Breaking a task into independent subtasks run in parallel.
    • Voting (aggregator): Running the same task multiple times to get diverse outputs and aggregating them (a sketch of this variant appears at the end of this section).

[Figure: parallelization workflow]

Define the graph:

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str

# Nodes
def call_llm_1(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def call_llm_2(state: State):
    """Second LLM call to generate story"""
    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}

def call_llm_3(state: State):
    """Third LLM call to generate poem"""
    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}

def aggregator(state: State):
    """Combine the joke and story into a single output"""
    combined = f"Here's a story, joke, and poem about {state['topic']}!\n\n"
    combined += f"STORY:\n{state['story']}\n\n"
    combined += f"JOKE:\n{state['joke']}\n\n"
    combined += f"POEM:\n{state['poem']}"
    return {"combined_output": combined}

Build the workflow:

# Build workflow
parallel_builder = StateGraph(State)

# Add nodes
parallel_builder.add_node("call_llm_1", call_llm_1)
parallel_builder.add_node("call_llm_2", call_llm_2)
parallel_builder.add_node("call_llm_3", call_llm_3)
parallel_builder.add_node("aggregator", aggregator)# Add edges to connect nodes
parallel_builder.add_edge(START, "call_llm_1")
parallel_builder.add_edge(START, "call_llm_2")
parallel_builder.add_edge(START, "call_llm_3")
parallel_builder.add_edge("call_llm_1", "aggregator")
parallel_builder.add_edge("call_llm_2", "aggregator")
parallel_builder.add_edge("call_llm_3", "aggregator")
parallel_builder.add_edge("aggregator", END)
parallel_workflow = parallel_builder.compile()

Visualize: display(Image(parallel_workflow.get_graph().draw_mermaid_png()))

[Figure: compiled parallel workflow]
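
The workflow above is the sectioning variant: each branch performs a different subtask and writes to its own state key. Here is a minimal sketch of the voting variant mentioned earlier, assuming you want several samples of the same task; because the parallel branches now write to a single key, that key needs a reducer (operator.add), the same mechanism the orchestrator-worker section uses below:

import operator
from typing import Annotated

# Hypothetical voting workflow: three identical workers, one judge.
class VoteState(TypedDict):
    topic: str
    candidates: Annotated[list, operator.add]  # reducer merges the parallel writes
    best: str

def worker(state: VoteState):
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"candidates": [msg.content]}

def judge(state: VoteState):
    numbered = "\n\n".join(f"{i+1}. {c}" for i, c in enumerate(state["candidates"]))
    msg = llm.invoke(f"Pick the funniest of these jokes and repeat it verbatim:\n\n{numbered}")
    return {"best": msg.content}

vote_builder = StateGraph(VoteState)
vote_builder.add_node("judge", judge)
for name in ("worker_1", "worker_2", "worker_3"):
    vote_builder.add_node(name, worker)
    vote_builder.add_edge(START, name)
    vote_builder.add_edge(name, "judge")
vote_builder.add_edge("judge", END)
voting_workflow = vote_builder.compile()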


Routing

When a single model is not strong enough on its own, a router can classify each input and dispatch it to a different LLM.

在这里插入图片描述

  • Routing classifies an input and directs it to a specialized followup task. This workflow allows for separation of concerns, and building more specialized prompts. Without this workflow, optimizing for one kind of input can hurt performance on other inputs.
  • When to use this workflow: Routing works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM or a more traditional classification model/algorithm.
    • distinct models (fast non-reasoning model, powerful reasoning model); see the sketch after the examples below
from typing_extensions import Literal
from langchain_core.messages import HumanMessage, SystemMessage

# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(None, description="The next step in the routing process")

# Augment the LLM with schema for structured output
router = llm.with_structured_output(Route)

# State
class State(TypedDict):
    input: str
    decision: str
    output: str

Define the nodes:

# Nodes
def llm_call_1(state: State):
    """Write a story"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_2(state: State):
    """Write a joke"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_3(state: State):
    """Write a poem"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

Define the router:

def llm_call_router(state: State):
    """Route the input to the appropriate node"""
    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke(
        [
            SystemMessage(content="Route the input to story, joke, or poem based on the user's request."),
            HumanMessage(content=state["input"]),
        ]
    )
    return {"decision": decision.step}

# Conditional edge function to route to the appropriate node
def route_decision(state: State):
    # Return the node name you want to visit next
    if state["decision"] == "story":
        return "llm_call_1"
    elif state["decision"] == "joke":
        return "llm_call_2"
    elif state["decision"] == "poem":
        return "llm_call_3"

Build the workflow:

# Build workflow
router_builder = StateGraph(State)

# Add nodes
router_builder.add_node("llm_call_1", llm_call_1)
router_builder.add_node("llm_call_2", llm_call_2)
router_builder.add_node("llm_call_3", llm_call_3)
router_builder.add_node("llm_call_router", llm_call_router)

# Add edges to connect nodes
router_builder.add_edge(START, "llm_call_router")
router_builder.add_conditional_edges(
    "llm_call_router",
    route_decision,
    {  # Name returned by route_decision : Name of next node to visit
        "llm_call_1": "llm_call_1",
        "llm_call_2": "llm_call_2",
        "llm_call_3": "llm_call_3",
    },
)
router_builder.add_edge("llm_call_1", END)
router_builder.add_edge("llm_call_2", END)
router_builder.add_edge("llm_call_3", END)

# Compile workflow
router_workflow = router_builder.compile()

Visualize: display(Image(router_workflow.get_graph().draw_mermaid_png()))

[Figure: compiled router workflow]

Now test a case:

state = router_workflow.invoke({"input": "Write me a joke about cats"})
for step in router_workflow.stream({"input": "Write me a joke about cats"}):
    print(step)

Output:

{'llm_call_router': {'decision': 'joke'}}
{'llm_call_2': {'output': 'Why was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}}

Another example:

for step in router_workflow.stream({"input": "Write me a poem about cats"}):
    print(step)

Output:

{'llm_call_router': {'decision': 'poem'}}
{'llm_call_3': {'output': 'In sunlit corners, shadows play,  \nWhere whispers of the feline sway,  \nWith graceful poise and silent tread,  \nThe world’s a kingdom, theirs to thread.  \n\nA tapestry of fur like night,  \nWith emerald eyes that pierce the light,  \nThey leap as if on dreams they dance,  \nIn elegant arcs, a fleeting glance.  \n\nA soft purr hums, a gentle song,  \nA lullaby where hearts belong,  \nWith velvet paws on wooden floors,  \nThey weave their magic, open doors.  \n\nEach flick of tail, a tale to tell,  \nOf mischief, grace, and worlds that dwell  \nIn boxes, sunbeams, every fold,  \nAdventures vast, and secrets bold.  \n\nThey curl like commas, snug and warm,  \nIn every lap, their soft charm forms,  \nA soothing presence, quiet, wise,  \nWith knowing hearts and ageless sighs.  \n\nOh, creatures of the night and day,  \nIn your soft wisdom, we find our way,  \nWith tender gazes, you understand,  \nThe joys and sorrows of this land.  \n\nSo here’s to cats, our quaintest friends,  \nWith every whisker, affection lends,  \nIn their elusive, gentle grace,  \nWe find a home, a purr-fect place.'}}
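
All three llm_call_* nodes above call the same llm, so the routing here only changes which prompt runs. The "distinct models" point from the list above is where routing earns its keep; here is a sketch under the assumption that you have two models available (the model names are placeholders, substitute your own):

# Hypothetical: a cheap fast model for easy requests, a reasoning model for hard ones.
fast_llm = ChatOpenAI(model="gpt-4o-mini")
strong_llm = ChatOpenAI(model="o3-mini")  # assumed model name

def llm_call_easy(state: State):
    return {"output": fast_llm.invoke(state["input"]).content}

def llm_call_hard(state: State):
    return {"output": strong_llm.invoke(state["input"]).content}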

Orchestrator-Worker

在这里插入图片描述

  • an orchestrator breaks down a task and delegates each sub-task to workers.
    • In the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.
    • When to use this workflow: This workflow is well-suited for complex tasks where you can’t predict the subtasks needed (in coding, for example, the number of files that need to be changed and the nature of the change in each file likely depend on the task). Whereas it’s topographically similar, the key difference from parallelization is its flexibility—subtasks aren’t pre-defined, but determined by the orchestrator based on the specific input.
from typing import Annotated, List
import operator

# Schema for structured output to use in planning
class Section(BaseModel):
    name: str = Field(description="Name for this section of the report.")
    description: str = Field(description="Brief overview of the main topics and concepts to be covered in this section.")

class Sections(BaseModel):
    sections: List[Section] = Field(description="Sections of the report.")

# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)

from langgraph.constants import Send

# Graph state
class State(TypedDict):
    topic: str  # Report topic
    sections: list[Section]  # List of report sections
    completed_sections: Annotated[list, operator.add]  # All workers write to this key in parallel
    final_report: str  # Final report

# Worker state
class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]

# Nodes
def orchestrator(state: State):
    """Orchestrator that generates a plan for the report"""
    # Generate queries
    report_sections = planner.invoke(
        [
            SystemMessage(content="Generate a plan for the report."),
            HumanMessage(content=f"Here is the report topic: {state['topic']}"),
        ]
    )
    return {"sections": report_sections.sections}

def llm_call(state: WorkerState):
    """Worker writes a section of the report"""
    # Generate section
    section = llm.invoke(
        [
            SystemMessage(content="Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting."),
            HumanMessage(content=f"Here is the section name: {state['section'].name} and description: {state['section'].description}"),
        ]
    )
    # Write the updated section to completed sections
    return {"completed_sections": [section.content]}

def synthesizer(state: State):
    """Synthesize full report from sections"""
    # List of completed sections
    completed_sections = state["completed_sections"]
    # Format completed section to str to use as context for final sections
    completed_report_sections = "\n\n---\n\n".join(completed_sections)
    return {"final_report": completed_report_sections}

# Conditional edge function to create llm_call workers that each write a section of the report
def assign_workers(state: State):
    """Assign a worker to each section in the plan"""
    # Kick off section writing in parallel via the Send() API: each Send carries
    # one worker's private WorkerState, and the operator.add reducer on
    # completed_sections merges the parallel results back into the parent state.
    return [Send("llm_call", {"section": s}) for s in state["sections"]]

Finally, build the workflow:

# Build workflow
orchestrator_worker_builder = StateGraph(State)

# Add the nodes
orchestrator_worker_builder.add_node("orchestrator", orchestrator)
orchestrator_worker_builder.add_node("llm_call", llm_call)
orchestrator_worker_builder.add_node("synthesizer", synthesizer)

# Add edges to connect nodes
orchestrator_worker_builder.add_edge(START, "orchestrator")
orchestrator_worker_builder.add_conditional_edges(
    "orchestrator", assign_workers, ["llm_call"]
)
orchestrator_worker_builder.add_edge("llm_call", "synthesizer")
orchestrator_worker_builder.add_edge("synthesizer", END)

# Compile the workflow
orchestrator_worker = orchestrator_worker_builder.compile()

# Show the workflow
display(Image(orchestrator_worker.get_graph().draw_mermaid_png()))

在这里插入图片描述

Likewise, two examples (rendering the report and streaming are left commented out):

state = orchestrator_worker.invoke({"topic": "Create a report on LLM scaling laws"})

from IPython.display import Markdown
# Markdown(state["final_report"])

# for step in orchestrator_worker.stream({"topic": "Create a report on LLM scaling laws"}):
#     print(step)

Evaluator-optimizer (Actor-Critic)

[Figure: evaluator-optimizer workflow]

  • In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop.
  • When to use this workflow: This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. The two signs of good fit are, first, that LLM responses can be demonstrably improved when a human articulates their feedback; and second, that the LLM can provide such feedback. This is analogous to the iterative writing process a human writer might go through when producing a polished document.

Define the graph:

# Graph state
class State(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str

# Schema for structured output to use in evaluation
class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(description="Decide if the joke is funny or not.")
    feedback: str = Field(description="If the joke is not funny, provide feedback on how to improve it.")

# Augment the LLM with schema for structured output
evaluator = llm.with_structured_output(Feedback)

# Nodes
def llm_call_generator(state: State):
    """LLM generates a joke"""
    if state.get("feedback"):
        msg = llm.invoke(f"Write a joke about {state['topic']} but take into account the feedback: {state['feedback']}")
    else:
        msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def llm_call_evaluator(state: State):
    """LLM evaluates the joke"""
    grade = evaluator.invoke(f"Grade the joke {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}

# Conditional edge function to route back to joke generator or end based upon feedback from the evaluator
def route_joke(state: State):
    """Route back to joke generator or end based upon feedback from the evaluator"""
    if state["funny_or_not"] == "funny":
        return "Accepted"
    elif state["funny_or_not"] == "not funny":
        return "Rejected + Feedback"

Build the workflow:

# Build workflow
optimizer_builder = StateGraph(State)

# Add the nodes
optimizer_builder.add_node("llm_call_generator", llm_call_generator)
optimizer_builder.add_node("llm_call_evaluator", llm_call_evaluator)

# Add edges to connect nodes
optimizer_builder.add_edge(START, "llm_call_generator")
optimizer_builder.add_edge("llm_call_generator", "llm_call_evaluator")
optimizer_builder.add_conditional_edges(
    "llm_call_evaluator",
    route_joke,
    {  # Name returned by route_joke : Name of next node to visit
        "Accepted": END,
        "Rejected + Feedback": "llm_call_generator",
    },
)

# Compile the workflow
optimizer_workflow = optimizer_builder.compile()
display(Image(optimizer_workflow.get_graph().draw_mermaid_png()))

[Figure: compiled evaluator-optimizer workflow]

An example:

for step in optimizer_workflow.stream({"topic": "Cats"}):
    print(step)
# {'llm_call_generator': {'joke': 'Why was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}}
# {'llm_call_evaluator': {'funny_or_not': 'funny', 'feedback': ''}}
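
One practical caveat: if the evaluator never returns "funny", this graph loops indefinitely. A minimal sketch (my addition, not part of the tutorial) of bounding the loop with an attempt counter carried in state:

# Hypothetical bounded variant: give up and accept after three attempts.
class BoundedState(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str
    attempts: int

def llm_call_generator_bounded(state: BoundedState):
    out = llm_call_generator(state)  # reuse the generator defined above
    return {**out, "attempts": state.get("attempts", 0) + 1}

def route_joke_bounded(state: BoundedState):
    if state["funny_or_not"] == "funny" or state["attempts"] >= 3:
        return "Accepted"
    return "Rejected + Feedback"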

Agent

  • The environment receives an Action and returns feedback

[Figure: the agent loop]

import numpy as np
from langchain_core.tools import tool

# Define tools
@tool
def multiply(a: float, b: float) -> float:
    """Multiply a and b.

    Args:
        a: first float
        b: second float
    """
    return a * b

@tool
def add(a: float, b: float) -> float:
    """Adds a and b.

    Args:
        a: first float
        b: second float
    """
    return a + b

@tool
def subtract(a: float, b: float) -> float:
    """Subtract b from a.

    Args:
        a: first float
        b: second float
    """
    return a - b

@tool
def divide(a: float, b: float) -> float:
    """Divide a by b.

    Args:
        a: first float
        b: second float
    """
    return a / b

@tool
def sigmoid(a: float) -> float:
    """sigmoid(a)

    Args:
        a: first float
    """
    return 1. / (1 + np.exp(-a))

# Augment the LLM with tools
tools = [add, subtract, multiply, divide, sigmoid]
tools_by_name = {tool.name: tool for tool in tools}
llm_with_tools = llm.bind_tools(tools)

from langgraph.graph import MessagesState
from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage

# Nodes
def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            llm_with_tools.invoke(
                [SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs.")]
                + state["messages"]
            )
        ]
    }

def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

# Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["Action", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "Action"
    # Otherwise, we stop (reply to the user)
    return END

# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("environment", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    {  # Name returned by should_continue : Name of next node to visit
        "Action": "environment",
        END: END,
    },
)
agent_builder.add_edge("environment", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

在这里插入图片描述

An example:

# Invoke
messages = [HumanMessage(content="calculate derivative of sigmoid(5)")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()

The output:

================================ Human Message =================================

calculate derivative of sigmoid(5)
================================== Ai Message ==================================
Tool Calls:
  sigmoid (call_yhe0L1h0iirYx86Oc4tGbHHm)
 Call ID: call_yhe0L1h0iirYx86Oc4tGbHHm
  Args:
    a: 5
================================= Tool Message =================================

0.9933071490757153
================================== Ai Message ==================================
Tool Calls:
  multiply (call_eepwJz1ggN5uMOU3hCNOir1V)
 Call ID: call_eepwJz1ggN5uMOU3hCNOir1V
  Args:
    a: 0.9933071490757153
    b: 0.006692850924284857
================================= Tool Message =================================

0.006648056670790157
================================== Ai Message ==================================

The derivative of the sigmoid function at \( \text{sigmoid}(5) \) is approximately \( 0.00665 \).
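
A quick sanity check on the agent's arithmetic, which applied the identity σ'(x) = σ(x)(1 − σ(x)):

import numpy as np

s = 1.0 / (1.0 + np.exp(-5))
print(s * (1 - s))  # ≈ 0.006648, matching the agent's final answer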
