Open-Source Python Application Development (5): Object Detection with Python OpenCV

 A recent project needed a visual-automation tool, and Python ended up being the software of choice, so this was a good opportunity for some systematic study. Since the learning window was short and development had to be fast, I am recording the key steps here so they don't get forgotten.

 Links:

Open-Source Python Application Development (1): Installing Python, pip, PyAutoGUI, and Python OpenCV - CSDN Blog

Open-Source Python Application Development (2): Tool Automation with PyAutoGUI and OpenCV Visual Recognition - CSDN Blog

Open-Source Python Application Development (3): Introduction to Python Syntax - CSDN Blog

Open-Source Python Application Development (4): Comprehensive Use of Python Files and the System - CSDN Blog

 Recommended links:

Open-Source ArkTS HarmonyOS App Development (1): Project File Analysis - CSDN Blog

Open-Source ArkTS HarmonyOS App Development (2): Building and Using a .har Library - CSDN Blog

Open-Source ArkTS HarmonyOS App Development (3): Introduction to ArkTS - CSDN Blog

Open-Source ArkTS HarmonyOS App Development (4): Layout and Common Controls - CSDN Blog

Open-Source ArkTS HarmonyOS App Development (5): Control Composition and Complex Controls - CSDN Blog

 Recommended links:

Open-Source Java Android App Development (1): Setting Up the Development Environment - CSDN Blog

Open-Source Java Android App Development (2): Project File Structure - CSDN Blog

Open-Source Java Android App Development (3): GUI Layout and Common Components - CSDN Blog

Open-Source Java Android App Development (4): Important GUI Components - CSDN Blog

Open-Source Java Android App Development (5): File and Database Storage - CSDN Blog

Open-Source Java Android App Development (6): Using Multimedia - CSDN Blog

Open-Source Java Android App Development (7): Communication with TCP and HTTP - CSDN Blog

Open-Source Java Android App Development (8): Communication with MQTT and BLE - CSDN Blog

Open-Source Java Android App Development (9): Background Threads and Services - CSDN Blog

Open-Source Java Android App Development (10): The Broadcast Mechanism - CSDN Blog

Open-Source Java Android App Development (11): Debugging and Release - CSDN Blog

Open-Source Java Android App Development (12): Packaging a .aar Library - CSDN Blog

Recommended links:

Open-Source C# .NET MVC Development (1): Setting Up the Web Project - CSDN Blog

Open-Source C# .NET MVC Development (2): Rapid Website Setup - CSDN Blog

Open-Source C# .NET MVC Development (3): Intranet and Internet Access (VS publishing, IIS site configuration, Oray NAT traversal) - CSDN Blog

Open-Source C# .NET MVC Development (4): Project Structure, Page Submission, and Display - CSDN Blog

Open-Source C# .NET MVC Development (5): Common Code for Rapid Development - CSDN Blog

This installment implements object detection with the YOLOv3 (You Only Look Once, version 3) deep-learning model. It recognizes a banana, a mobile phone, and cars and pedestrians in a night scene; it was quite fun the first time I tried it. The detector is somewhat picky about input images, so when you swap in your own pictures, prefer sharp, high-resolution ones. The contents of this installment are as follows:

1.  The YOLOv3 model

2.  Main functions

3.  Full code

4.  Demonstration

1. The YOLOv3 model

Uses a pre-trained YOLOv3 model (requires the yolov3.cfg and yolov3.weights files).

Trained on the COCO dataset (80 classes).

The code needs three files: yolov3.cfg, yolov3.weights, and coco.names, all of which are easy to find online.

To detect with a different model, swap in the corresponding .cfg and .weights files.

Link: YOLO: Real-Time Object Detection
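
If you would rather fetch the three files from a script than hunt them down by hand, the snippet below is a minimal download sketch. The URLs are the commonly cited sources (the official Darknet site and repository); they are an assumption on my part and may move or change over time.

import urllib.request

# Hypothetical download helper -- the URLs below are assumptions and may change.
files = {
    "yolov3.weights": "https://pjreddie.com/media/files/yolov3.weights",  # large file, roughly 240 MB
    "yolov3.cfg": "https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg",
    "coco.names": "https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names",
}

for name, url in files.items():
    print(f"Downloading {name} ...")
    urllib.request.urlretrieve(url, name)  # saves into the current working directory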

The yolov3.cfg file

[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=16
width=608
height=608
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear

######################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 6,7,8
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 61

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 36

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

The coco.names file

person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
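
A quick sanity check after saving the file is to load it and confirm that it has exactly 80 entries and that the indices line up with the class IDs the network reports (index 0 should be person). A small sketch:

# Sanity check for coco.names: 80 classes, index 0 is "person"
with open("coco.names", "r") as f:
    classes = [line.strip() for line in f if line.strip()]

print(len(classes))  # expected: 80
print(classes[0])    # expected: person
print(classes[46])   # expected: banana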

2. Main functions

2.1  The load_yolo() function

Purpose: load the YOLOv3 model and its supporting files.

def load_yolo():
    # Get the directory this script lives in
    current_dir = os.path.dirname(os.path.abspath(__file__))
    # Build the full paths to the model files
    cfg_path = os.path.join(current_dir, "yolov3.cfg")
    weights_path = os.path.join(current_dir, "yolov3.weights")
    names_path = os.path.join(current_dir, "coco.names")
    # Make sure all three files exist
    if not all(os.path.exists(f) for f in [cfg_path, weights_path, names_path]):
        raise FileNotFoundError("Missing YOLO model files: yolov3.cfg, yolov3.weights and coco.names must sit next to this script")
    # Load the network
    net = cv2.dnn.readNet(weights_path, cfg_path)
    with open(names_path, "r") as f:
        classes = [line.strip() for line in f.readlines()]
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    return net, classes, output_layers
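
One detail worth flagging: the last two lines assume a fairly recent OpenCV (roughly 4.5.4 and newer), where net.getUnconnectedOutLayers() returns a flat array of 1-based layer indices. Older 4.x and 3.x builds return one-element arrays instead, and the plain i - 1 indexing then breaks. A version-tolerant variant (a sketch, not part of the original code) looks like this:

# Version-tolerant extraction of the YOLO output-layer names.
# Older OpenCV builds return [[200], [227], [254]]; newer ones return [200 227 254].
layer_names = net.getLayerNames()
out_idx = net.getUnconnectedOutLayers()
output_layers = [layer_names[int(np.ravel(i)[0]) - 1] for i in out_idx]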

2.2  The detect_objects(img, net, output_layers) function

Purpose: run object detection on the input image.

def detect_objects(img, net, output_layers):
    # Build a blob from the image (scaled to [0, 1], resized to 416x416, BGR -> RGB)
    blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(416, 416), swapRB=True, crop=False)
    # Set the input and run a forward pass
    net.setInput(blob)
    outputs = net.forward(output_layers)
    return outputs
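
Note that the blob is resized to 416×416 here even though the cfg above lists a width and height of 608; OpenCV's DNN module goes by the size passed to blobFromImage, so both work (608 tends to be a bit more accurate but slower). The forward pass returns one array per YOLO output layer, and printing their shapes makes the data layout clear. A small inspection sketch, assuming the functions above are already defined and a test image is on hand:

# Rough shape check of the raw outputs (row counts assume a 416x416 blob).
net, classes, output_layers = load_yolo()
img = cv2.imread("myimg.jpg")  # hypothetical test image
outputs = detect_objects(img, net, output_layers)
for out in outputs:
    print(out.shape)
# Roughly: (507, 85), (2028, 85), (8112, 85) -- i.e. 13*13*3, 26*26*3 and 52*52*3 candidate boxes.
# Each row is [center_x, center_y, w, h, objectness, 80 class scores], all normalized to 0..1.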

2.3  The get_box_dimensions(outputs, height, width) function

Purpose: process the network outputs and extract bounding-box information.

def get_box_dimensions(outputs, height, width):
    boxes = []
    confidences = []
    class_ids = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:  # confidence threshold
                # Convert the normalized box to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                # Top-left corner of the rectangle
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)
    return boxes, confidences, class_ids
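
To make the coordinate conversion concrete: each detection stores the box center and size as fractions of the image, so a hypothetical row of [0.5, 0.5, 0.2, 0.4] on a 1920×1080 picture maps to pixels as follows:

# Worked example of the center -> top-left conversion used above (made-up numbers).
width, height = 1920, 1080
detection = [0.5, 0.5, 0.2, 0.4]        # center_x, center_y, w, h as fractions of the image

center_x = int(detection[0] * width)    # 960
center_y = int(detection[1] * height)   # 540
w = int(detection[2] * width)           # 384
h = int(detection[3] * height)          # 432
x = int(center_x - w / 2)               # 768
y = int(center_y - h / 2)               # 324
print([x, y, w, h])                     # [768, 324, 384, 432]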

2.4  The draw_labels(boxes, confidences, class_ids, classes, img) function

Purpose: draw the detection results on the image.

def draw_labels(boxes, confidences, class_ids, classes, img):
    # Apply non-maximum suppression
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    # One random color per class
    colors = np.random.uniform(0, 255, size=(len(classes), 3))
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i], 2))
            color = colors[class_ids[i]]
            # Draw the bounding box and label
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, f"{label} {confidence}", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    return img
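
The two numbers passed to cv2.dnn.NMSBoxes are the score threshold (0.5) and the IoU threshold (0.4): boxes scoring below 0.5 are discarded, and among the rest any box that overlaps a higher-scoring box by more than 40% IoU is suppressed. A tiny standalone illustration with made-up boxes:

import cv2

# Box format is [x, y, w, h]; A and B overlap heavily, C is far away.
boxes = [[100, 100, 200, 200],   # A
         [110, 110, 200, 200],   # B
         [400, 400, 150, 150]]   # C
confidences = [0.9, 0.8, 0.7]

keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
print(keep)  # indices 0 and 2 survive; B is suppressed because it overlaps A too much
             # (the exact return shape, flat vs. nested, varies with the OpenCV version)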

2.5  The object_detection(image_path) function

Purpose: the main function that ties the whole detection pipeline together.

def object_detection(image_path):
    try:
        net, classes, output_layers = load_yolo()
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(f"Could not load image: {image_path}")
        height, width = img.shape[:2]
        outputs = detect_objects(img, net, output_layers)
        boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
        img = draw_labels(boxes, confidences, class_ids, classes, img)
        cv2.imshow("Object Detection", img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    except Exception as e:
        print(f"Error: {str(e)}")

3. Full code

import cv2
import numpy as np
import os


def load_yolo():
    # Get the directory this script lives in
    current_dir = os.path.dirname(os.path.abspath(__file__))
    # Build the full paths to the model files
    cfg_path = os.path.join(current_dir, "yolov3.cfg")
    weights_path = os.path.join(current_dir, "yolov3.weights")
    names_path = os.path.join(current_dir, "coco.names")
    # Make sure all three files exist
    if not all(os.path.exists(f) for f in [cfg_path, weights_path, names_path]):
        raise FileNotFoundError("Missing YOLO model files: yolov3.cfg, yolov3.weights and coco.names must sit next to this script")
    # Load the network
    net = cv2.dnn.readNet(weights_path, cfg_path)
    with open(names_path, "r") as f:
        classes = [line.strip() for line in f.readlines()]
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    return net, classes, output_layers


def detect_objects(img, net, output_layers):
    # Build a blob from the image (scaled to [0, 1], resized to 416x416, BGR -> RGB)
    blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(416, 416), swapRB=True, crop=False)
    # Set the input and run a forward pass
    net.setInput(blob)
    outputs = net.forward(output_layers)
    return outputs


def get_box_dimensions(outputs, height, width):
    boxes = []
    confidences = []
    class_ids = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:  # confidence threshold
                # Convert the normalized box to pixel coordinates
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                # Top-left corner of the rectangle
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)
    return boxes, confidences, class_ids


def draw_labels(boxes, confidences, class_ids, classes, img):
    # Apply non-maximum suppression
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    # One random color per class
    colors = np.random.uniform(0, 255, size=(len(classes), 3))
    if len(indexes) > 0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i], 2))
            color = colors[class_ids[i]]
            # Draw the bounding box and label
            cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
            cv2.putText(img, f"{label} {confidence}", (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    return img


def object_detection(image_path):
    try:
        net, classes, output_layers = load_yolo()
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(f"Could not load image: {image_path}")
        height, width = img.shape[:2]
        outputs = detect_objects(img, net, output_layers)
        boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
        img = draw_labels(boxes, confidences, class_ids, classes, img)
        cv2.imshow("Object Detection", img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
    except Exception as e:
        print(f"Error: {str(e)}")


if __name__ == "__main__":
    # Example usage
    image_path = "myimg.jpg"  # replace with your own image path
    object_detection(image_path)
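
cv2.imshow needs a desktop environment; on a headless machine or over SSH it will fail. A common tweak is to write the annotated image to disk instead of opening a window. A small variation of the main block, under that assumption:

# Variation of the main block that saves the result instead of displaying it.
if __name__ == "__main__":
    image_path = "myimg.jpg"  # replace with your own image path
    net, classes, output_layers = load_yolo()
    img = cv2.imread(image_path)
    height, width = img.shape[:2]
    outputs = detect_objects(img, net, output_layers)
    boxes, confidences, class_ids = get_box_dimensions(outputs, height, width)
    img = draw_labels(boxes, confidences, class_ids, classes, img)
    cv2.imwrite("result.jpg", img)  # annotated copy written next to the script
    print("Saved result.jpg")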

4. Demonstration

4.1  Detecting a banana

4.2  Detecting a mobile phone

4.3  Detecting cars and pedestrians in a night scene
