🧠 Deep Learning Core: A Comprehensive Guide from Fundamentals to the Frontier

🚀 Exploring the core deep-learning technology stack, from neural-network basics to the latest Transformer architectures


📋 Table of Contents

  • 🔬 Neural Network Fundamentals: From the Perceptron to Multi-Layer Networks
  • 🖼️ Convolutional Neural Networks (CNNs): The Workhorse of Image Recognition
  • 🔄 Recurrent Neural Networks (RNN/LSTM/GRU): Processing Sequential Data
  • ⚡ Attention Mechanisms and the Transformer Architecture
  • 🎯 Summary and Outlook

🔬 Neural Network Fundamentals: From the Perceptron to Multi-Layer Networks

🧮 The Perceptron: Where Neural Networks Began

The perceptron, proposed by Frank Rosenblatt in 1957, is the simplest neural network model. It mimics the basic behavior of a biological neuron.

```python
import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.01, n_iterations=1000):
        self.learning_rate = learning_rate
        self.n_iterations = n_iterations

    def fit(self, X, y):
        # Initialize weights and bias
        self.weights = np.zeros(X.shape[1])
        self.bias = 0
        for _ in range(self.n_iterations):
            for idx, x_i in enumerate(X):
                # Linear output
                linear_output = np.dot(x_i, self.weights) + self.bias
                # Step-function activation
                y_predicted = self.activation_function(linear_output)
                # Perceptron update rule
                update = self.learning_rate * (y[idx] - y_predicted)
                self.weights += update * x_i
                self.bias += update

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        return self.activation_function(linear_output)

    def activation_function(self, x):
        return np.where(x >= 0, 1, 0)

# Example: learning the AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND-gate truth table

perceptron = Perceptron(learning_rate=0.1, n_iterations=10)
perceptron.fit(X, y)

print("AND gate predictions:")
for i in range(len(X)):
    prediction = perceptron.predict(X[i].reshape(1, -1))
    print(f"Input: {X[i]}, predicted: {prediction[0]}, actual: {y[i]}")
```
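A single perceptron can only represent linearly separable functions, which is exactly the limitation that motivates multi-layer networks. As a quick illustration (a minimal sketch reusing the same update rule, not code from the original article), training on the XOR truth table never reaches perfect accuracy, because no separating line exists:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, n_iterations=100):
    """Same perceptron update rule as above, returned as (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iterations):
        for x_i, target in zip(X, y):
            pred = 1 if np.dot(x_i, w) + b >= 0 else 0
            update = lr * (target - pred)
            w += update * x_i
            b += update
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])  # XOR: not linearly separable

w, b = train_perceptron(X, y_xor)
preds = (X @ w + b >= 0).astype(int)
accuracy = (preds == y_xor).mean()
print(f"XOR accuracy after training: {accuracy:.2f}")  # always below 1.0
```

Since XOR has no separating hyperplane, whatever weights training ends on must misclassify at least one of the four inputs. The MLP in the next section fixes this by stacking layers with nonlinear activations.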

🏗️ The Multi-Layer Perceptron (MLP)

By adding hidden layers, the multi-layer perceptron overcomes the single-layer perceptron's inability to handle nonlinearly separable problems.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

class MLP(nn.Module):
    def __init__(self, input_size, hidden_sizes, output_size, dropout_rate=0.2):
        super(MLP, self).__init__()
        layers = []
        prev_size = input_size
        # Build the hidden layers
        for hidden_size in hidden_sizes:
            layers.extend([
                nn.Linear(prev_size, hidden_size),
                nn.ReLU(),
                nn.BatchNorm1d(hidden_size),
                nn.Dropout(dropout_rate)
            ])
            prev_size = hidden_size
        # Output layer
        layers.append(nn.Linear(prev_size, output_size))
        self.network = nn.Sequential(*layers)

    def forward(self, x):
        return self.network(x)

# Generate example data
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Convert to PyTorch tensors
X_train_tensor = torch.FloatTensor(X_train_scaled)
y_train_tensor = torch.LongTensor(y_train)
X_test_tensor = torch.FloatTensor(X_test_scaled)
y_test_tensor = torch.LongTensor(y_test)

# Build the model
model = MLP(input_size=20, hidden_sizes=[64, 32, 16], output_size=2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
num_epochs = 100
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X_train_tensor)
    loss = criterion(outputs, y_train_tensor)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 20 == 0:
        model.eval()
        with torch.no_grad():
            test_outputs = model(X_test_tensor)
            _, predicted = torch.max(test_outputs.data, 1)
            accuracy = (predicted == y_test_tensor).sum().item() / len(y_test_tensor)
            print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}, Test Accuracy: {accuracy:.4f}')
```

🎯 Activation Functions in Detail

Activation functions introduce nonlinearity into a neural network and are a key component of deep learning.

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-np.clip(x, -500, 500)))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def swish(x):
    return x * sigmoid(x)

def gelu(x):
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

# Plot the activation functions
x = np.linspace(-5, 5, 1000)
plt.figure(figsize=(15, 10))
activations = {
    'Sigmoid': sigmoid,
    'Tanh': tanh,
    'ReLU': relu,
    'Leaky ReLU': leaky_relu,
    'Swish': swish,
    'GELU': gelu
}
for i, (name, func) in enumerate(activations.items(), 1):
    plt.subplot(2, 3, i)
    plt.plot(x, func(x), linewidth=2)
    plt.title(f'{name} Activation Function')
    plt.grid(True, alpha=0.3)
    plt.xlabel('Input')
    plt.ylabel('Output')
plt.tight_layout()
plt.show()

# Derivatives of the activation functions (used in backpropagation)
def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

def relu_derivative(x):
    return np.where(x > 0, 1, 0)

def leaky_relu_derivative(x, alpha=0.01):
    return np.where(x > 0, 1, alpha)
```
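A few spot checks make the definitions above concrete. These values follow directly from the formulas (a standalone sketch with the functions redefined for self-containment):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Spot checks on the definitions
print(sigmoid(0.0))              # 0.5: sigmoid is centered at 0.5
print(np.tanh(0.0))              # 0.0: tanh is zero-centered
print(relu(-3.0))                # 0.0: ReLU zeroes out negative inputs
print(float(leaky_relu(-3.0)))   # approximately -0.03: a small negative slope survives
```

These properties explain some practical preferences: zero-centered outputs (tanh) keep gradients balanced, while the nonzero negative slope of Leaky ReLU avoids "dead" units that plain ReLU can produce.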

🖼️ Convolutional Neural Networks (CNNs): The Workhorse of Image Recognition

🔍 How Convolutional Layers Work

Convolutional neural networks extract local image features through convolution, benefiting from translation invariance and parameter sharing.

```python
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super(ConvBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super(SimpleCNN, self).__init__()
        # Feature extractor
        self.features = nn.Sequential(
            ConvBlock(3, 32), ConvBlock(32, 32),
            nn.MaxPool2d(2, 2), nn.Dropout2d(0.25),
            ConvBlock(32, 64), ConvBlock(64, 64),
            nn.MaxPool2d(2, 2), nn.Dropout2d(0.25),
            ConvBlock(64, 128), ConvBlock(128, 128),
            nn.MaxPool2d(2, 2), nn.Dropout2d(0.25)
        )
        # Classifier
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
            nn.Linear(128, 512), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(512, num_classes)
        )

    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        return x

# Data preprocessing
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])

# Load CIFAR-10
train_dataset = CIFAR10(root='./data', train=True, download=True, transform=transform_train)
test_dataset = CIFAR10(root='./data', train=False, download=True, transform=transform_test)
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=2)
test_loader = DataLoader(test_dataset, batch_size=100, shuffle=False, num_workers=2)

# Training function
def train_model(model, train_loader, test_loader, num_epochs=10):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
    for epoch in range(num_epochs):
        # Training phase
        model.train()
        running_loss = 0.0
        correct = 0
        total = 0
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            _, predicted = output.max(1)
            total += target.size(0)
            correct += predicted.eq(target).sum().item()
            if batch_idx % 100 == 0:
                print(f'Epoch: {epoch+1}, Batch: {batch_idx}, Loss: {loss.item():.4f}')
        scheduler.step()
        # Evaluation phase
        model.eval()
        test_loss = 0
        test_correct = 0
        test_total = 0
        with torch.no_grad():
            for data, target in test_loader:
                data, target = data.to(device), target.to(device)
                output = model(data)
                test_loss += criterion(output, target).item()
                _, predicted = output.max(1)
                test_total += target.size(0)
                test_correct += predicted.eq(target).sum().item()
        train_acc = 100. * correct / total
        test_acc = 100. * test_correct / test_total
        print(f'Epoch {epoch+1}: Train Acc: {train_acc:.2f}%, Test Acc: {test_acc:.2f}%')

# Build and (optionally) train the model
model = SimpleCNN(num_classes=10)
print("Starting CNN training...")
# train_model(model, train_loader, test_loader, num_epochs=5)
```
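The parameter-sharing advantage mentioned above can be quantified with simple arithmetic (a back-of-envelope sketch; the layer sizes here are illustrative, not taken from SimpleCNN): a 3×3 convolution reuses one small kernel at every spatial position, while a fully connected layer needs a separate weight for every input-output pair.

```python
# Parameters of a Conv2d(in_c, out_c, k) layer: out_c * in_c * k * k weights + out_c biases
def conv2d_params(in_c, out_c, k):
    return out_c * in_c * k * k + out_c

# Parameters of a Linear(in_features, out_features) layer
def linear_params(in_f, out_f):
    return out_f * in_f + out_f

# Mapping a 3-channel 8x8 image to 32 feature maps of the same spatial size:
conv = conv2d_params(3, 32, 3)                 # one kernel shared across all positions
dense = linear_params(3 * 8 * 8, 32 * 8 * 8)   # one weight per input-output pair

print(f"conv parameters:  {conv}")    # 896
print(f"dense parameters: {dense}")   # 395264
```

Even at this tiny resolution the dense layer needs over 400× more parameters, and the gap grows quadratically with image size; this is why convolutions scale to real images while fully connected layers do not.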

🏛️ Classic CNN Architectures

LeNet-5: The CNN Pioneer
```python
class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super(LeNet5, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),
            nn.AvgPool2d(kernel_size=2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),
            nn.AvgPool2d(kernel_size=2)
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes)
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
```
ResNet: Residual Networks
```python
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        out += identity  # residual (skip) connection
        out = self.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        super(ResNet, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        if stride != 1 or self.in_channels != out_channels:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels)
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for _ in range(1, blocks):
            layers.append(block(out_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x

# Build ResNet-18
def resnet18(num_classes=1000):
    return ResNet(ResidualBlock, [2, 2, 2, 2], num_classes)
```
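To see why the skip connection needs matching shapes, a minimal standalone check (assuming PyTorch is available; this block mirrors the stride-1 case of ResidualBlock rather than reusing it, so it runs on its own) verifies that the block preserves the input shape, which is what makes `out += identity` legal:

```python
import torch
import torch.nn as nn

class MiniResidualBlock(nn.Module):
    """Stride-1 basic block: conv-bn-relu-conv-bn plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)  # shapes must match for the addition

block = MiniResidualBlock(16)
x = torch.randn(2, 16, 8, 8)
y = block(x)
print(y.shape)  # torch.Size([2, 16, 8, 8])
```

When stride or channel count changes, the shapes no longer match, which is exactly why `_make_layer` builds a 1×1-conv `downsample` branch to project the identity to the new shape.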

🔄 Recurrent Neural Networks (RNN/LSTM/GRU): Processing Sequential Data

🔗 The Basic RNN

Recurrent neural networks are designed for sequential data and maintain a memory of past inputs.

```python
import torch
import torch.nn as nn
import numpy as np

class SimpleRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers=1):
        super(SimpleRNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize the hidden state
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        # RNN forward pass
        out, _ = self.rnn(x, h0)
        # Use only the last time step's output
        out = self.fc(out[:, -1, :])
        return out

# Generate sine-wave data for time-series prediction
def generate_sine_wave(seq_length, num_samples):
    X, y = [], []
    for _ in range(num_samples):
        start = np.random.uniform(0, 100)
        x = np.linspace(start, start + seq_length, seq_length)
        sine_wave = np.sin(x)
        X.append(sine_wave[:-1])  # input sequence
        y.append(sine_wave[-1])   # prediction target
    return np.array(X), np.array(y)

# Generate training data
seq_length = 20
num_samples = 1000
X_train, y_train = generate_sine_wave(seq_length, num_samples)

# Convert to PyTorch tensors
X_train = torch.FloatTensor(X_train).unsqueeze(-1)  # add a feature dimension
y_train = torch.FloatTensor(y_train)

# Build and train the model
model = SimpleRNN(input_size=1, hidden_size=50, output_size=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Training loop
num_epochs = 100
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X_train)
    loss = criterion(outputs.squeeze(), y_train)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 20 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.6f}')
```
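The recurrence `nn.RNN` implements is h_t = tanh(W_ih·x_t + b_ih + W_hh·h_{t-1} + b_hh). A small check (assuming PyTorch is available; the tensor sizes are arbitrary) reproduces one step by hand from the layer's own weights and compares it with the library output:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=1, hidden_size=4, num_layers=1, batch_first=True)

x = torch.randn(1, 1, 1)   # one sequence, one time step, one feature
h0 = torch.zeros(1, 1, 4)  # (num_layers, batch, hidden_size)
out, hn = rnn(x, h0)

# The same step computed manually from the layer's parameters
W_ih, W_hh = rnn.weight_ih_l0, rnn.weight_hh_l0
b_ih, b_hh = rnn.bias_ih_l0, rnn.bias_hh_l0
h1 = torch.tanh(x[0, 0] @ W_ih.T + b_ih + h0[0, 0] @ W_hh.T + b_hh)

print(torch.allclose(out[0, 0], h1, atol=1e-6))  # True
```

Writing the step out like this also makes the vanishing-gradient problem visible: backpropagating through many steps multiplies by W_hh (and tanh derivatives) repeatedly, which shrinks or explodes gradients and motivates the gated LSTM and GRU cells below.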

🧠 LSTM: Long Short-Term Memory

LSTMs use gating mechanisms to mitigate the vanishing-gradient problem of plain RNNs.

```python
class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, dropout=0.2):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize the hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        # LSTM forward pass
        out, (hn, cn) = self.lstm(x, (h0, c0))
        # Apply dropout
        out = self.dropout(out)
        # Use the last time step's output
        out = self.fc(out[:, -1, :])
        return out

# Text-classification example
class TextClassificationLSTM(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim,
                 n_layers=2, dropout=0.3):
        super(TextClassificationLSTM, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            batch_first=True, dropout=dropout)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Word embeddings
        embedded = self.embedding(x)
        # LSTM pass
        lstm_out, (hidden, cell) = self.lstm(embedded)
        # Use the final hidden state
        output = self.dropout(hidden[-1])
        output = self.fc(output)
        return output

# Bidirectional LSTM
class BiLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(BiLSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size * 2, output_size)  # *2 for the two directions

    def forward(self, x):
        # Initialize states (bidirectional needs num_layers * 2)
        h0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # The last time step holds the concatenated forward/backward outputs
        out = self.fc(out[:, -1, :])
        return out
```

⚡ GRU: Gated Recurrent Unit

The GRU is a simplified variant of the LSTM, with fewer parameters but comparable performance.

```python
class GRUModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, dropout=0.2):
        super(GRUModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.gru = nn.GRU(input_size, hidden_size, num_layers,
                          batch_first=True, dropout=dropout)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize the hidden state
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        # GRU forward pass
        out, _ = self.gru(x, h0)
        # Dropout and the final linear layer
        out = self.dropout(out[:, -1, :])
        out = self.fc(out)
        return out

# Sequence-to-sequence model (Seq2Seq)
class Seq2SeqGRU(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers=1):
        super(Seq2SeqGRU, self).__init__()
        # Encoder
        self.encoder = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        # Decoder
        self.decoder = nn.GRU(output_size, hidden_size, num_layers, batch_first=True)
        # Output projection
        self.output_projection = nn.Linear(hidden_size, output_size)

    def forward(self, src, tgt=None, max_length=50):
        batch_size = src.size(0)
        # Encode
        _, hidden = self.encoder(src)
        if self.training and tgt is not None:
            # Teacher forcing during training
            decoder_output, _ = self.decoder(tgt, hidden)
            output = self.output_projection(decoder_output)
        else:
            # Step-by-step generation at inference time
            outputs = []
            decoder_input = torch.zeros(batch_size, 1, self.output_projection.out_features)
            for _ in range(max_length):
                decoder_output, hidden = self.decoder(decoder_input, hidden)
                output = self.output_projection(decoder_output)
                outputs.append(output)
                decoder_input = output
            output = torch.cat(outputs, dim=1)
        return output
```
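The "fewer parameters" claim is easy to quantify. Per layer, PyTorch stores `weight_ih`, `weight_hh`, and two bias vectors, each stacked over the gate blocks: 4 for an LSTM (input, forget, cell, output) and 3 for a GRU (reset, update, new). A GRU layer therefore has exactly 3/4 the parameters of an LSTM layer of the same size:

```python
def rnn_layer_params(input_size, hidden_size, num_gates):
    # weight_ih: (num_gates*H, I), weight_hh: (num_gates*H, H), plus two biases of num_gates*H
    H, I = hidden_size, input_size
    return num_gates * H * I + num_gates * H * H + 2 * num_gates * H

lstm = rnn_layer_params(input_size=10, hidden_size=20, num_gates=4)  # i, f, g, o gates
gru = rnn_layer_params(input_size=10, hidden_size=20, num_gates=3)   # r, z, n gates

print(f"LSTM layer parameters: {lstm}")  # 2560
print(f"GRU layer parameters:  {gru}")   # 1920
print(f"ratio: {gru / lstm}")            # 0.75
```

The savings come purely from dropping one gate, which is why GRUs often train faster at similar accuracy on moderate-sized tasks.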

⚡ Attention Mechanisms and the Transformer Architecture

🎯 How Attention Works

Attention lets a model focus on the most relevant parts of a sequence while processing it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class ScaledDotProductAttention(nn.Module):
    def __init__(self, d_k, dropout=0.1):
        super(ScaledDotProductAttention, self).__init__()
        self.d_k = d_k  # per-head key dimension used for scaling
        self.dropout = nn.Dropout(dropout)

    def forward(self, query, key, value, mask=None):
        # Attention scores, scaled by sqrt(d_k)
        scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.d_k)
        # Apply the mask if provided
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)
        # Attention weights
        attention_weights = F.softmax(scores, dim=-1)
        attention_weights = self.dropout(attention_weights)
        # Weighted sum of the values
        output = torch.matmul(attention_weights, value)
        return output, attention_weights

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads, dropout=0.1):
        super(MultiHeadAttention, self).__init__()
        assert d_model % num_heads == 0
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_k = d_model // num_heads
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)
        self.attention = ScaledDotProductAttention(self.d_k, dropout)

    def forward(self, query, key, value, mask=None):
        batch_size = query.size(0)
        # Linear projections, reshaped into heads
        Q = self.w_q(query).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        K = self.w_k(key).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        V = self.w_v(value).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        # Apply attention
        attn_output, attn_weights = self.attention(Q, K, V, mask)
        # Concatenate the heads
        attn_output = attn_output.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)
        # Final linear projection
        output = self.w_o(attn_output)
        return output, attn_weights
```
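Two properties of scaled dot-product attention are worth verifying numerically: each row of softmax weights sums to 1, and the output is a weighted combination of the value rows, one output row per query. A minimal NumPy sketch, independent of the PyTorch module above:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 2))  # 5 values

out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)       # (3, 2): one output row per query
print(w.sum(axis=-1))  # each row sums to 1
```

The sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into near-one-hot saturation and shrink its gradients.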

🏗️ The Transformer Architecture

```python
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_length=5000):
        super(PositionalEncoding, self).__init__()
        pe = torch.zeros(max_length, d_model)
        position = torch.arange(0, max_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() *
                             (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)  # (1, max_length, d_model) to match batch-first inputs
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x: (batch_size, seq_length, d_model), so index the sequence dimension
        return x + self.pe[:, :x.size(1)]

class TransformerBlock(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout=0.1):
        super(TransformerBlock, self).__init__()
        self.attention = MultiHeadAttention(d_model, num_heads, dropout)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.feed_forward = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(d_ff, d_model)
        )
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask=None):
        # Multi-head self-attention + residual connection
        attn_output, _ = self.attention(x, x, x, mask)
        x = self.norm1(x + self.dropout(attn_output))
        # Feed-forward network + residual connection
        ff_output = self.feed_forward(x)
        x = self.norm2(x + self.dropout(ff_output))
        return x

class TransformerEncoder(nn.Module):
    def __init__(self, vocab_size, d_model, num_heads, num_layers, d_ff,
                 max_length=5000, dropout=0.1):
        super(TransformerEncoder, self).__init__()
        self.d_model = d_model
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.pos_encoding = PositionalEncoding(d_model, max_length)
        self.transformer_blocks = nn.ModuleList([
            TransformerBlock(d_model, num_heads, d_ff, dropout)
            for _ in range(num_layers)
        ])
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask=None):
        # Token embeddings + positional encoding
        x = self.embedding(x) * math.sqrt(self.d_model)
        x = self.pos_encoding(x)
        x = self.dropout(x)
        # Pass through the Transformer blocks
        for transformer in self.transformer_blocks:
            x = transformer(x, mask)
        return x

# A complete Transformer model for classification
class TransformerClassifier(nn.Module):
    def __init__(self, vocab_size, d_model, num_heads, num_layers, d_ff,
                 num_classes, max_length=512, dropout=0.1):
        super(TransformerClassifier, self).__init__()
        self.encoder = TransformerEncoder(vocab_size, d_model, num_heads,
                                          num_layers, d_ff, max_length, dropout)
        self.classifier = nn.Sequential(
            nn.Linear(d_model, d_model // 2),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(d_model // 2, num_classes)
        )

    def forward(self, x, mask=None):
        # Encode
        encoded = self.encoder(x, mask)
        # Global average pooling over the sequence
        pooled = encoded.mean(dim=1)
        # Classify
        output = self.classifier(pooled)
        return output

# Build an example model
model = TransformerClassifier(
    vocab_size=10000, d_model=512, num_heads=8, num_layers=6,
    d_ff=2048, num_classes=2, max_length=512, dropout=0.1
)
print(f"Number of parameters: {sum(p.numel() for p in model.parameters()):,}")
```
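The sinusoidal encoding has a couple of easy-to-check properties: at position 0 the even (sin) entries are 0 and the odd (cos) entries are 1, and every value stays in [-1, 1], so the encoding never overwhelms the token embeddings. A NumPy reconstruction of the same table (a standalone sketch mirroring PositionalEncoding above):

```python
import numpy as np

def positional_encoding(max_length, d_model):
    pe = np.zeros((max_length, d_model))
    position = np.arange(max_length, dtype=float)[:, None]
    div_term = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe[:, 0::2] = np.sin(position * div_term)
    pe[:, 1::2] = np.cos(position * div_term)
    return pe

pe = positional_encoding(max_length=50, d_model=8)
print(pe[0])                     # [0, 1, 0, 1, 0, 1, 0, 1]: sin(0)=0, cos(0)=1
print(bool(np.abs(pe).max() <= 1.0))  # True: values are bounded
```

Because each dimension oscillates at a different wavelength, every position gets a distinct fingerprint, and relative offsets correspond to fixed linear transformations of the encoding, which is what lets attention reason about order.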

🎨 Vision Transformer (ViT)

Applying the Transformer to computer-vision tasks.

```python
class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super(PatchEmbedding, self).__init__()
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2
        self.projection = nn.Conv2d(in_channels, embed_dim,
                                    kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (batch_size, channels, height, width)
        x = self.projection(x)  # (batch_size, embed_dim, num_patches_h, num_patches_w)
        x = x.flatten(2)        # (batch_size, embed_dim, num_patches)
        x = x.transpose(1, 2)   # (batch_size, num_patches, embed_dim)
        return x

class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, num_classes=1000,
                 embed_dim=768, num_heads=12, num_layers=12, mlp_ratio=4, dropout=0.1):
        super(VisionTransformer, self).__init__()
        self.patch_embedding = PatchEmbedding(img_size, patch_size, in_channels, embed_dim)
        num_patches = self.patch_embedding.num_patches
        # Class token and positional embedding
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embedding = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        # Transformer encoder
        self.transformer_blocks = nn.ModuleList([
            TransformerBlock(embed_dim, num_heads, int(embed_dim * mlp_ratio), dropout)
            for _ in range(num_layers)
        ])
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        batch_size = x.shape[0]
        # Split the image into patches and embed them
        x = self.patch_embedding(x)
        # Prepend the class token
        cls_tokens = self.cls_token.expand(batch_size, -1, -1)
        x = torch.cat([cls_tokens, x], dim=1)
        # Add positional embeddings
        x = x + self.pos_embedding
        x = self.dropout(x)
        # Pass through the Transformer blocks
        for transformer in self.transformer_blocks:
            x = transformer(x)
        # Normalize and classify from the class token
        x = self.norm(x)
        cls_token_final = x[:, 0]
        output = self.head(cls_token_final)
        return output

# Build a ViT model
vit_model = VisionTransformer(
    img_size=224, patch_size=16, in_channels=3, num_classes=1000,
    embed_dim=768, num_heads=12, num_layers=12
)
print(f"Number of ViT parameters: {sum(p.numel() for p in vit_model.parameters()):,}")
```
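The ViT configuration above implies a fixed sequence length worth working out by hand: a 224×224 image cut into 16×16 patches gives 14×14 = 196 patches, and prepending the class token yields a sequence of 197 vectors. Each raw patch also happens to hold 16×16×3 = 768 values, the same as the embedding width here:

```python
img_size, patch_size, in_channels = 224, 16, 3

patches_per_side = img_size // patch_size             # 14
num_patches = patches_per_side ** 2                   # 196
seq_length = num_patches + 1                          # 197, including the [CLS] token
patch_values = patch_size * patch_size * in_channels  # 768 raw values per patch

print(num_patches, seq_length, patch_values)  # 196 197 768
```

Since self-attention cost grows quadratically in sequence length, the patch size is the main lever for trading resolution against compute in a ViT.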

🎯 Summary and Outlook

📊 Comparing Deep Learning Techniques

| Technique | Strengths | Weaknesses | Typical applications |
|---|---|---|---|
| CNN | Translation invariance, parameter sharing, local feature extraction | Sensitive to rotation and scaling | Image recognition, computer vision |
| RNN/LSTM/GRU | Handles sequential data, maintains memory | Vanishing gradients, hard to parallelize | Natural language processing, time series |
| Transformer | Parallelizable, long-range dependencies, attention | High computational cost, needs large datasets | Machine translation, text generation, multimodal tasks |

🚀 Future Directions

1. Model efficiency
  • Model compression: knowledge distillation, pruning, quantization
  • Lightweight architectures: MobileNet, EfficientNet, DistilBERT
  • Neural architecture search: AutoML, NAS
2. Multimodal fusion
  • Vision-language models: CLIP, DALL-E, GPT-4V
  • Cross-modal understanding: image captioning, visual question answering
  • Unified architectures: general-purpose multimodal Transformers
3. Self-supervised learning
  • Contrastive learning: SimCLR, MoCo, SwAV
  • Masked language modeling: BERT, RoBERTa, DeBERTa
  • Generative pre-training: the GPT family, T5

💡 Practical Advice

🎯 Choosing the Right Architecture

```python
# A decision tree for picking a model by task
def choose_model(task_type, data_type, data_size):
    if data_type == "image":
        if task_type == "classification":
            if data_size == "small":
                return "ResNet-18 or EfficientNet-B0"
            else:
                return "ResNet-50/101 or EfficientNet-B3/B5"
        elif task_type == "detection":
            return "YOLO or the R-CNN family"
        elif task_type == "segmentation":
            return "U-Net or DeepLab"
    elif data_type == "text":
        if task_type == "classification":
            if data_size == "small":
                return "LSTM or a simple CNN"
            else:
                return "BERT or RoBERTa"
        elif task_type == "generation":
            return "GPT or T5"
        elif task_type == "translation":
            return "Transformer or mBART"
    elif data_type == "sequence":
        if task_type == "forecasting":
            return "LSTM or Transformer"
        elif task_type == "anomaly_detection":
            return "Autoencoder or LSTM-VAE"
    return "Please provide more information"

# Example usage
print(choose_model("classification", "image", "large"))
print(choose_model("generation", "text", "large"))
```
🔧 Training Tips

```python
# Deep learning training best practices
class TrainingBestPractices:
    @staticmethod
    def setup_training():
        tips = {
            "Data preprocessing": [
                "Normalize/standardize the data",
                "Use data augmentation (image rotation, text back-translation, etc.)",
                "Handle class imbalance",
                "Validate data quality"
            ],
            "Model design": [
                "Start from pre-trained models",
                "Add regularization (Dropout, BatchNorm)",
                "Size network depth and width sensibly",
                "Use residual connections"
            ],
            "Training strategy": [
                "Schedule the learning rate (cosine annealing, step decay)",
                "Clip gradients to prevent explosion",
                "Use early stopping to avoid overfitting",
                "Use mixed-precision training for speed"
            ],
            "Optimizer choice": [
                "Adam: a solid general-purpose default",
                "AdamW: recommended for Transformers",
                "SGD + momentum: the classic choice for CNNs",
                "RAdam: a more robust Adam variant"
            ]
        }
        return tips

# Print the training tips
for category, tips in TrainingBestPractices.setup_training().items():
    print(f"\n**{category}:**")
    for tip in tips:
        print(f"  • {tip}")
```

🌟 Closing Thoughts

Deep learning is advancing rapidly. From basic neural networks to complex Transformer architectures, each technique pushes the boundaries of AI. Mastering these core techniques requires not only understanding the theory but also extensive hands-on practice.

The future of deep learning holds enormous possibilities. Let's keep exploring and innovating together in this exciting field! 🚀✨
