Two ways to implement focal loss in PyTorch
Author: WYXHAHAHA123  Date: 2023-07-02 14:43:22
Without further ado, here's the code!
import torch
import torch.nn.functional as F
import numpy as np
'''
Two ways to implement focal loss in PyTorch (here in the context of a segmentation task).
The loss computation accounts for class imbalance; assume 6 classes in total, including the background class.
Focal loss (Lin et al., 2017): FL(p_t) = -(1 - p_t)^gamma * log(p_t), with gamma = 2 below.
'''
def compute_class_weights(histogram):
    # Inverse-log frequency weighting: the rarer a class, the larger its weight.
    classWeights = np.ones(6, dtype=np.float32)
    normHist = histogram / np.sum(histogram)
    for i in range(6):
        classWeights[i] = 1 / (np.log(1.10 + normHist[i]))
    return classWeights
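A quick sanity check of the weighting formula w_i = 1 / ln(1.10 + p_i), where p_i is the normalized frequency of class i (the histogram below is made up for illustration, it is not from the post):

toy_hist = np.array([5000, 2500, 1200, 800, 400, 100], dtype=np.float32)
print(compute_class_weights(toy_hist))
# The dominant class (50% of pixels) gets a weight near 1/ln(1.60) ~ 2.1,
# while the rarest class (1% of pixels) gets a weight near 1/ln(1.11) ~ 9.6.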
def focal_loss_my(input, target):
    '''
    :param input: shape [batch_size, num_classes, H, W]
                  raw convolutional output (logits), no activation applied
    :param target: shape [batch_size, H, W]
    :return: scalar focal loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    # Flatten to [num_samples, num_classes] and [num_samples], with num_samples = n * h * w
    input = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)
    # Count how many pixels belong to each of the 6 classes
    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()
    frequency = torch.tensor((number_0, number_1, number_2, number_3, number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)
    '''
    Compute per-class weights from the class frequencies observed in the current ground-truth labels.
    '''
    # weights = torch.from_numpy(classWeights).float().cuda()
    weights = torch.from_numpy(classWeights).float()
    focal_frequency = F.nll_loss(F.softmax(input, dim=1), target, reduction='none')
    '''
    As discussed in an earlier post,
    F.nll_loss(torch.log(F.softmax(input, dim=1)), target) is functionally identical to
    F.cross_entropy(input, target).
    Internally, F.nll_loss one-hot encodes the target into a tensor with the same shape
    as its first input, multiplies the two element-wise, and negates the result,
    i.e. it picks out -log(p_gt), the log-probability of the correct class.
    With the log dropped here, focal_frequency instead holds -P(gt_class) for each
    sample, shape [num_samples].
    '''
    focal_frequency += 1.0  # shape [num_samples], now 1 - P(gt_class)
    focal_frequency = torch.pow(focal_frequency, 2)  # (1 - P(gt_class)) ** gamma, with gamma = 2
    focal_frequency = focal_frequency.repeat(c, 1)
    '''
    After repeat, focal_frequency has shape [num_classes, num_samples]
    '''
    focal_frequency = focal_frequency.transpose(1, 0)  # back to [num_samples, num_classes]
    loss = F.nll_loss(focal_frequency * (torch.log(F.softmax(input, dim=1))), target, weight=None,
                      reduction='mean')
    return loss
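The identity the docstring leans on, that F.nll_loss applied to log-softmax equals F.cross_entropy, is easy to verify; a minimal check (variable names are mine):

x = torch.randn(4, 6)                              # 4 samples, 6 classes
t = torch.randint(0, 6, (4,))
a = F.nll_loss(torch.log(F.softmax(x, dim=1)), t)
b = F.cross_entropy(x, t)
print(torch.allclose(a, b))                        # True; F.log_softmax is the numerically safer form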
def focal_loss_zhihu(input, target):
    '''
    Based on the implementation from this Zhihu post: https://zhuanlan.zhihu.com/p/28527749
    :param input: shape [batch_size, num_classes, H, W], raw logits
    :param target: shape [batch_size, H, W]
    :return: scalar focal loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    inputs = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)
    N = inputs.size(0)
    C = inputs.size(1)
    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()
    frequency = torch.tensor((number_0, number_1, number_2, number_3, number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)
    weights = torch.from_numpy(classWeights).float()
    weights = weights[target.view(-1)]  # crucial line: gather the weight of each pixel's ground-truth class
    gamma = 2
    P = F.softmax(inputs, dim=1)  # shape [num_samples, num_classes]
    class_mask = inputs.new_zeros(N, C)  # Variable wrapping is unnecessary in modern PyTorch
    ids = target.view(-1, 1)
    class_mask.scatter_(1, ids, 1.)  # shape [num_samples, num_classes], one-hot encoding of target
    probs = (P * class_mask).sum(1).view(-1, 1)  # shape [num_samples, 1], P(gt_class) per sample
    log_p = probs.log()
    print('in calculating batch_loss', weights.shape, probs.shape, log_p.shape)
    # batch_loss = -weights * (torch.pow((1 - probs), gamma)) * log_p
    batch_loss = -(torch.pow((1 - probs), gamma)) * log_p
    print(batch_loss.shape)
    loss = batch_loss.mean()
    return loss
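For reference, the one-hot/scatter_ construction above can be collapsed into a single gather on log-probabilities. This compact variant is my own sketch (the function name is hypothetical, it appears in neither source) and computes the same unweighted loss as the two functions above:

def focal_loss_gather(input, target, gamma=2):
    # input: [batch_size, num_classes, H, W] logits; target: [batch_size, H, W]
    c = input.size(1)
    logits = input.permute(0, 2, 3, 1).reshape(-1, c)   # [num_samples, num_classes]
    target = target.long().reshape(-1)                  # [num_samples]
    # log P(gt_class) per sample, via log_softmax (numerically stabler than log(softmax))
    log_p = F.log_softmax(logits, dim=1).gather(1, target.view(-1, 1)).squeeze(1)
    p = log_p.exp()
    return (-(1 - p) ** gamma * log_p).mean()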
if __name__ == '__main__':
    pred = torch.rand((2, 6, 5, 5))
    y = torch.from_numpy(np.random.randint(0, 6, (2, 5, 5)))
    loss1 = focal_loss_my(pred, y)
    loss2 = focal_loss_zhihu(pred, y)
    print('loss1', loss1)
    print('loss2', loss2)
'''
in calculating batch_loss torch.Size([50]) torch.Size([50, 1]) torch.Size([50, 1])
torch.Size([50, 1])
loss1 tensor(1.3166)
loss2 tensor(1.3166)
'''
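Note that both functions compute classWeights but never apply them to the final loss: focal_loss_my passes weight=None to F.nll_loss, and in focal_loss_zhihu the weighted line is commented out. That is exactly why loss1 and loss2 agree to the last digit. To enable the alpha-weighted variant in focal_loss_zhihu, the commented-out line needs one fix (my correction): weights has shape [num_samples] while probs and log_p have shape [num_samples, 1], so multiplying them as written would broadcast to [num_samples, num_samples].

batch_loss = -weights.view(-1, 1) * torch.pow(1 - probs, gamma) * log_p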
Source: https://blog.csdn.net/WYXHAHAHA123/article/details/88343945
Tags: pytorch, focal, loss