The network architecture in this article comes from the blog post by "听风、": 《图像去雨:超详细手把手写 pytorch 实现代码(带注释)》 (https://blog.csdn.net/Wenyuanbo/article/details/116541682). The dataset is the Light subset of the JRDR – Deraining Dataset on Kaggle.
On top of that base, this article uses an improved Dataset implementation so that input and label images are matched correctly, and optimizes the network layers by adding BatchNorm modules.
Code (with comments):
Building the training and validation datasets
import os
import torchvision.transforms as transforms
from torch.utils.data import Dataset
from PIL import Image
import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
import numpy as np
import re
'''
Dataset for Training.
'''
class MyTrainDataset(Dataset):
    def __init__(self, input_path, label_path):
        self.input_path = input_path
        self.input_files = os.listdir(input_path)
        self.label_path = label_path
        self.label_files = os.listdir(label_path)
        '''
        Crop the central 64x64 region of each image, then convert it to a tensor.
        '''
        self.transforms = transforms.Compose([
            transforms.CenterCrop([64, 64]),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.input_files)

    def __getitem__(self, index):
        label_image_path = os.path.join(self.label_path, self.label_files[index])
        label_image = Image.open(label_image_path).convert('RGB')
        '''
        Ensure input and label come in pairs.
        Strip the 4-character extension ('.png') from the label filename and
        append 'x2.png' to derive the matching input filename, so that every
        label is paired with its rainy input.
        '''
        temp = self.label_files[index][:-4]
        self.input_files[index] = temp + 'x2.png'
        input_image_path = os.path.join(self.input_path, self.input_files[index])
        input_image = Image.open(input_image_path).convert('RGB')
        input = self.transforms(input_image)
        label = self.transforms(label_image)
        return input, label
'''
Dataset for validation.
'''
class MyValidDataset(Dataset):
    def __init__(self, input_path, label_path):
        self.input_path = input_path
        self.input_files = os.listdir(input_path)
        self.label_path = label_path
        self.label_files = os.listdir(label_path)
        self.transforms = transforms.Compose([
            transforms.CenterCrop([64, 64]),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.input_files)

    def __getitem__(self, index):
        label_image_path = os.path.join(self.label_path, self.label_files[index])
        label_image = Image.open(label_image_path).convert('RGB')
        '''
        As in the training set: strip the extension from the label filename
        and append 'x2.png' to get the matching input filename.
        '''
        temp = self.label_files[index][:-4]
        self.input_files[index] = temp + 'x2.png'
        input_image_path = os.path.join(self.input_path, self.input_files[index])
        input_image = Image.open(input_image_path).convert('RGB')
        input = self.transforms(input_image)
        label = self.transforms(label_image)
        return input, label
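The filename-pairing convention can be checked in isolation (a minimal sketch; `paired_input_name` is a hypothetical helper and the filenames are made-up examples following the JRDR naming, not the dataset's actual listing):

```python
def paired_input_name(label_file):
    # Strip the 4-character extension ('.png') from the label filename and
    # append the 'x2.png' suffix used by the rainy input images.
    return label_file[:-4] + 'x2.png'

# Hypothetical filenames following the JRDR Light naming convention.
labels = ['norain-1.png', 'norain-2.png']
inputs = [paired_input_name(f) for f in labels]
print(inputs)  # ['norain-1x2.png', 'norain-2x2.png']
```

Deriving the input name from the label (rather than trusting two os.listdir calls to line up, since listdir returns files in arbitrary order) is what keeps the pairs matched.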
Deraining network architecture
'''
Residual network with BatchNorm.
The neural network model is defined below.
'''
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv0 = nn.Sequential(
            nn.Conv2d(6, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.res_conv1 = nn.Sequential(
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.res_conv2 = nn.Sequential(
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.res_conv3 = nn.Sequential(
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.res_conv4 = nn.Sequential(
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.res_conv5 = nn.Sequential(
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1, 1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.conv = nn.Sequential(
            nn.Conv2d(32, 3, 3, 1, 1),
        )
    def forward(self, input):
        x = input
        for i in range(6):  # Recurrent passes; does not change the number of parameters
            '''
            Different from classification: the rainy input is re-concatenated
            with the current estimate on every pass.
            '''
            x = torch.cat((input, x), 1)
            x = self.conv0(x)
            x = F.relu(self.res_conv1(x) + x)
            x = F.relu(self.res_conv2(x) + x)
            x = F.relu(self.res_conv3(x) + x)
            x = F.relu(self.res_conv4(x) + x)
            x = F.relu(self.res_conv5(x) + x)
            x = self.conv(x)
            x = x + input
        return x
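The recurrent concatenation inside forward can be illustrated in isolation. This is a minimal sketch with toy channel counts (8 instead of 32, two conv layers instead of the full stack); it shows why conv0 takes 6 input channels and why the channel count never grows across iterations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for self.conv0 and self.conv in the Net above.
conv0 = nn.Conv2d(6, 8, 3, 1, 1)     # 3 (input) + 3 (estimate) -> 8 channels
conv_out = nn.Conv2d(8, 3, 3, 1, 1)  # back to 3 channels

input = torch.rand(1, 3, 64, 64)     # one 64x64 RGB image
x = input
for _ in range(6):
    x = torch.cat((input, x), 1)     # (1, 6, 64, 64)
    x = F.relu(conv0(x))             # (1, 8, 64, 64)
    x = conv_out(x)                  # (1, 3, 64, 64)
    x = x + input                    # global residual, stays 3 channels
print(tuple(x.shape))  # (1, 3, 64, 64)
```

Because each pass ends with 3 channels again, the concatenation at the top of the next pass always produces exactly 6 channels, matching conv0.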
Start training!
'''
Paths of the dataset.
Set these file paths according to your own machine.
'''
input_path = "../input/jrdr-deraining-dataset/JRDR/rain_data_train_Light/rain"
label_path = "../input/jrdr-deraining-dataset/JRDR/rain_data_train_Light/norain"
valid_input_path = '../input/jrdr-deraining-dataset/JRDR/rain_data_test_Light/rain/X2'
valid_label_path = '../input/jrdr-deraining-dataset/JRDR/rain_data_test_Light/norain'
'''
Check the device.
Use CUDA when available, otherwise fall back to the CPU.
'''
device = 'cpu'
if torch.cuda.is_available():
    device = 'cuda'
'''
Move the network to CUDA.
Training on the GPU is much faster.
'''
net = Net().to(device)
'''
Hyper-parameters.
TODO: fine-tuning.
Basic settings:
'''
learning_rate = 1e-3
batch_size = 50
epoch = 100
patience = 30
stale = 0
best_valid_loss = 10000
'''
Prepare lists for plt.
After every training epoch the model is validated, and the loss.item()
values are collected in these lists for plotting.
'''
Loss_list = []
Valid_Loss_list = []
'''
Define the optimizer and the loss function.
(These two objects are discussed in detail in the notes below.)
'''
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
loss_f = nn.MSELoss()
'''
Check for an existing checkpoint.
If a saved model exists, load it and continue training from there.
'''
if os.path.exists('./model_best.ckpt'):
    print('Continue training with the last model...')
    net.load_state_dict(torch.load('./model_best.ckpt'))
else:
    print("Restart...")
'''
Prepare DataLoaders.
Attention:
    'pin_memory=True' can accelerate CUDA computing.
Load the datasets built above.
batch_size is the number of images fetched per step.
shuffle decides whether the samples are drawn in random order each epoch.
pin_memory=True makes the DataLoader place its tensors in page-locked
(pinned) host memory, from which transfers to GPU memory are faster.
'''
dataset_train = MyTrainDataset(input_path, label_path)
dataset_valid = MyValidDataset(valid_input_path, valid_label_path)
train_loader = DataLoader(dataset_train, batch_size=batch_size, shuffle=True, pin_memory=True)
valid_loader = DataLoader(dataset_valid, batch_size=batch_size, shuffle=True, pin_memory=True)
'''
START training ...
'''
for i in range(epoch):
    # ---------------Train----------------
    net.train()
    train_losses = []
    '''
    tqdm is a toolkit for progress bars; it lets you watch training progress.
    The loop below does the actual training: feed the input images through
    the network, compute the loss between the outputs and the matching
    labels, zero the gradients stored at every node (important), then
    backpropagate to obtain fresh gradients and apply them with step().
    '''
    for batch in tqdm(train_loader):
        inputs, labels = batch
        outputs = net(inputs.to(device))
        loss = loss_f(outputs, labels.to(device))
        optimizer.zero_grad()
        loss.backward()
        '''
        Avoid gradients becoming too BIG.
        This guards against exploding gradients.
        '''
        grad_norm = nn.utils.clip_grad_norm_(net.parameters(), max_norm=10)
        optimizer.step()
        '''
        Attention:
        We need 'loss.item()' to turn the 0-dim loss tensor into a plain
        Python number, or plt will not work. Appending the raw tensor would
        also keep its computation graph alive, so item() is both safer and
        cheaper when accumulating losses.
        '''
        train_losses.append(loss.item())
    train_loss = sum(train_losses)
    Loss_list.append(train_loss)
    print(f"[ Train | {i + 1:03d}/{epoch:03d} ] loss = {train_loss:.5f}")
    # -------------Validation-------------
    '''
    Validation is a step to ensure the training process is working.
    You can also use validation to see whether your network is overfitting.
    First call net.eval() to make sure the parameters are not being trained.
    '''
    net.eval()
    valid_losses = []
    for batch in tqdm(valid_loader):
        inputs, labels = batch
        '''
        Disable gradient computation.
        '''
        with torch.no_grad():
            outputs = net(inputs.to(device))
            loss = loss_f(outputs, labels.to(device))
            '''
            As in training, use item() to convert the loss tensor into a
            plain number before accumulating it.
            '''
            valid_losses.append(loss.item())
    valid_loss = sum(valid_losses)
    Valid_Loss_list.append(valid_loss)
    print(f"[ Valid | {i + 1:03d}/{epoch:03d} ] loss = {valid_loss:.5f}")
    '''
    Update logs and save the best model.
    Patience is also checked.
    '''
    if valid_loss < best_valid_loss:
        print(f"[ Valid | {i + 1:03d}/{epoch:03d} ] loss = {valid_loss:.5f} -> best")
    else:
        print(f"[ Valid | {i + 1:03d}/{epoch:03d} ] loss = {valid_loss:.5f}")
    '''
    Compare this epoch's validation loss with the best one recorded so far.
    If it is lower, make it the new best, save the current network
    parameters, and keep training.
    '''
    if valid_loss < best_valid_loss:
        print(f'Best model found at epoch {i + 1}, saving model')
        torch.save(net.state_dict(), 'model_best.ckpt')
        best_valid_loss = valid_loss
        stale = 0
    else:
        '''
        If the best validation loss has not improved for `patience`
        consecutive epochs, stop training early.
        '''
        stale += 1
        if stale > patience:
            print(f'No improvement {patience} consecutive epochs, early stopping.')
            break
'''
Use plt to draw the loss curves.
'''
plt.figure(dpi=500)
# Early stopping may end training before `epoch` epochs, so plot against the
# number of losses actually recorded.
x = range(len(Loss_list))
plt.plot(x, Loss_list, 'ro-', label='Train Loss')
plt.plot(range(len(Valid_Loss_list)), Valid_Loss_list, 'bs-', label='Valid Loss')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.show()
Training results
Deraining results (sample images):
Notes collected while studying:
- transforms.Compose
torchvision.transforms is PyTorch's image preprocessing package; Compose is generally used to chain several preprocessing steps into a single transform.
Common transforms include:
Resize | resize the given image to the given size |
Normalize | normalize a tensor image with mean and standard deviation |
ToTensor | convert a PIL image (H*W*C, range [0, 255]) to a torch.Tensor (C*H*W, range [0.0, 1.0]) |
CenterCrop | crop the central region of the image |
RandomCrop | crop at a random position |
FiveCrop | crop the image into the four corners and the center |
RandomResizedCrop | crop the PIL image to a random size and aspect ratio |
ToPILImage | convert a tensor to a PIL image |
RandomHorizontalFlip | horizontally flip the given PIL image with probability 0.5 |
RandomVerticalFlip | vertically flip the given PIL image with probability 0.5 |
Grayscale | convert the image to grayscale |
RandomGrayscale | convert the image to grayscale with a given probability |
ColorJitter | randomly change the brightness, contrast and saturation of the image |
- label_image = Image.open(label_image_path).convert('RGB')
Here convert forces the image into three channels. Although the dataset consists of color images that should all be three-channel already, some files carry an extra channel (invisible to the eye, e.g. alpha) that we do not need, so it is stripped.
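The effect of convert('RGB') can be seen with a small in-memory image (a minimal sketch; the RGBA image here just stands in for a PNG that carries an alpha channel):

```python
from PIL import Image

# A 4-channel RGBA image: red with a half-transparent alpha channel.
rgba = Image.new('RGBA', (8, 8), (255, 0, 0, 128))
print(rgba.mode)  # RGBA -> 4 channels

# .convert('RGB') drops the alpha channel, leaving exactly 3 channels.
rgb = rgba.convert('RGB')
print(rgb.mode)   # RGB
```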
- torch.optim.Adam
The algorithm comes from: https://arxiv.org/pdf/1412.6980.pdf
The algorithm itself is fairly involved; here we only cover how to use it.
In PyTorch, the (simplified, older-version) source of Adam is:
import math
from .optimizer import Optimizer

class Adam(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
        defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
        super(Adam, self).__init__(params, defaults)

    def step(self, closure=None):
        loss = None
        if closure is not None:
            loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = grad.new().resize_as_(grad).zero_()
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = grad.new().resize_as_(grad).zero_()
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1
                if group['weight_decay'] != 0:
                    grad = grad.add(group['weight_decay'], p.data)
                # Decay the first and second moment running average coefficient
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
                p.data.addcdiv_(-step_size, exp_avg, denom)
        return loss
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
The parameters mean:
params (iterable): parameters to optimize, or dicts defining parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
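The update rule above can be followed by hand for a single scalar parameter. This is a pure-Python sketch of the formulas (the `adam_step` helper is hypothetical, not PyTorch's implementation); note how the bias correction makes the very first step size approximately equal to lr:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA (exp_avg)
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA (exp_avg_sq)
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# First step (t=1) with gradient 1.0: m_hat = v_hat = 1, so the parameter
# moves by almost exactly lr.
theta, m, v = adam_step(theta=0.0, grad=1.0, m=0.0, v=0.0, t=1)
print(theta)  # ~ -0.001
```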
- tqdm
In short, tqdm is a fast, extensible progress-bar module; its official site is https://tqdm.github.io/.
It can be installed directly with pip (the Tsinghua mirror is used here for faster downloads):
pip install tqdm -i https://pypi.tuna.tsinghua.edu.cn/simple/
Main parameters of tqdm:
iterable=None: the iterable to wrap, e.g. range(20)
desc=None: a str shown as the progress-bar title, e.g. desc="It's a test"
total=None: the expected number of iterations; usually left unset, defaulting to the length of iterable
leave=True: whether to keep the finished bar on screen after iteration ends; kept by default
file=None: where the output goes; defaults to the terminal and rarely needs setting
ncols=None: customizes the total width of the progress bar
unit: the label for processed items; defaults to 'it' (e.g. 100it/s); set it to 'img' when processing photos to get 100img/s
postfix: extra details passed as a dict and displayed in the bar, e.g. postfix={'value': 520}
unit_scale: automatically scales the rate with SI prefixes, e.g. 100000it/s becomes 100kit/s
(These notes come from: https://blog.csdn.net/winter2121/article/details/111356587)
In the code, the training DataLoader is passed to tqdm(), which returns an iterable object that drives the progress bar.
My understanding is that the returned object still yields the original data; tqdm merely wraps it with some extra machinery that updates the progress bar as you iterate.
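A tiny demonstration of that understanding (a sketch; the loop body is arbitrary):

```python
from tqdm.auto import tqdm

# Wrapping an iterable in tqdm() leaves the yielded values untouched; the
# progress bar is drawn purely as a side effect of iterating.
squares = []
for n in tqdm(range(5), desc='demo', unit='img'):
    squares.append(n * n)
print(squares)  # [0, 1, 4, 9, 16]
```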
- optimizer.zero_grad(), loss.backward(), optimizer.step()
These three calls are the basic steps of training a network.
First, optimizer.zero_grad():
optimizer.zero_grad() walks over all of the model's parameters, cuts off the backpropagated gradient flow with p.grad.detach_(), and then resets each parameter's gradient to zero with p.grad.zero_(), clearing the gradients recorded in the previous pass. If the gradients were not cleared, the next backward pass would accumulate on top of the gradients from the previously fetched batch, so this call must come before backpropagation and the optimizer step.
Second, loss.backward():
This performs backpropagation. Tensors carry gradient information; calling this method traces back through every computation that produced the loss tensor, computes the gradients, and writes them into the corresponding .grad fields.
Third, optimizer.step():
This uses the gradients obtained from loss.backward() to update the parameters and reduce the loss. Note that optimizer.step() can only optimize when gradients already exist, so loss.backward() must be called before it.
(This part references: https://blog.csdn.net/PanYHHH/article/details/107361827)
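The three-step pattern can be shown end to end on a toy problem (a minimal sketch, not the deraining network: a single parameter w fitted so that w * x matches 2 * x):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
w = nn.Parameter(torch.zeros(1))      # one trainable scalar, starts at 0
opt = optim.Adam([w], lr=0.1)
loss_f = nn.MSELoss()

x = torch.tensor([1.0, 2.0, 3.0])
y = 2.0 * x                           # target: w should converge to 2

for _ in range(500):
    opt.zero_grad()                   # 1. clear gradients from the last step
    loss = loss_f(w * x, y)
    loss.backward()                   # 2. backpropagate: fills w.grad
    opt.step()                        # 3. update w using w.grad

print(w.item())  # close to 2.0
```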