
Freeze partial parameters while training



1. requires_grad = False

Freeze every parameter of the current model (here self is the nn.Module):

for p in self.parameters():
    p.requires_grad = False  # excluded from gradient computation and optimizer updates
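
In the common transfer-learning case the same two lines are applied from outside the module: freeze the pretrained weights, then attach a fresh head, which is trainable by default. A minimal sketch, assuming a torchvision resnet18 backbone and a 10-class head (both illustrative choices, not from the original post):

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)

# freeze every pretrained parameter
for p in model.parameters():
    p.requires_grad = False

# a newly constructed layer has requires_grad=True by default,
# so only this classifier head will be updated during training
model.fc = torch.nn.Linear(model.fc.in_features, 10)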

 

To freeze everything except specific submodules, filter on the module name (here only submodules whose name contains 'stc' stay trainable):

for n, m in self.named_modules():
    if 'stc' not in n:
        # freeze all parameters outside the 'stc' submodules
        for p in m.parameters():
            p.requires_grad = False
    else:
        # keep the 'stc' submodules trainable
        for p in m.parameters():
            p.requires_grad = True
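
This works because named_modules() yields a parent module before its children, and a parent's parameters() already includes the children's: parameters under an 'stc' submodule are first set to False via their ancestors and then switched back to True when the submodule itself is visited. A quick sanity check is to list which parameters remain trainable:

# print each parameter name together with its trainable status
for n, p in self.named_parameters():
    print(n, p.requires_grad)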

 

2. Filter out frozen parameters and pass only the trainable ones to the optimizer

if args.freeze_backbone_update:
    # hand the optimizer only the parameters that still require gradients
    optimizer = torch.optim.SGD(filter(lambda para: para.requires_grad, org_model.parameters()),
                                args.lr,
                                momentum=args.momentum,
                                weight_decay=args.weight_decay)
else:
    # otherwise optimize everything
    optimizer = torch.optim.SGD(org_model.parameters(),
                                args.lr,
                                momentum=args.momentum,
                                weight_decay=args.weight_decay)
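
The filter is not just cosmetic: older PyTorch releases refused to construct an optimizer over parameters with requires_grad=False, and on releases that do accept them, frozen parameters never receive a .grad and are simply skipped during step(), so filtering keeps the parameter groups limited to what can actually change. Either way, a quick check before training, reusing org_model from the snippet above, is to count what will actually be updated:

# sanity check: how many parameters will the optimizer actually update?
total = sum(p.numel() for p in org_model.parameters())
trainable = sum(p.numel() for p in org_model.parameters() if p.requires_grad)
print('trainable parameters: {} / {}'.format(trainable, total))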

 


Original article: https://www.cnblogs.com/hizhaolei/p/10624196.html
