1. requires_grad = False
Freeze all parameters of the current model:
for p in self.parameters(): p.requires_grad = False
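A minimal runnable sketch of the same thing, assuming a torchvision resnet18 as a stand-in model (the post applies it to self, i.e. the model itself, inside a module):

import torchvision

model = torchvision.models.resnet18(weights=None)  # assumed stand-in model

# Freeze every parameter in the model.
for p in model.parameters():
    p.requires_grad = False

# Verify: no parameter requires gradients any more.
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # 0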
Freeze only specific layers, selected by module name:
for n, m in self.named_modules():
    if 'stc' not in n:
        for p in m.parameters():
            p.requires_grad = False
    else:
        for p in m.parameters():
            p.requires_grad = True
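Here 'stc' is simply the submodule name used in the original post. A hedged sketch of the same name-based filtering, assuming a torchvision resnet18 in which only the fc head should stay trainable (named_modules() yields dotted names, so children of a matched module match as well):

import torchvision

model = torchvision.models.resnet18(weights=None)  # assumed stand-in model

for name, module in model.named_modules():
    keep_trainable = 'fc' in name          # analogous to the 'stc' check above
    for p in module.parameters():
        p.requires_grad = keep_trainable

# Only the fc layer's weight and bias remain trainable.
print([n for n, p in model.named_parameters() if p.requires_grad])  # ['fc.weight', 'fc.bias']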
2. Filter out the frozen parameters and pass only the trainable ones to the optimizer
if args.freeze_backbone_update:
    optimizer = torch.optim.SGD(
        filter(lambda para: para.requires_grad, org_model.parameters()),
        args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
else:
    optimizer = torch.optim.SGD(
        org_model.parameters(), args.lr,
        momentum=args.momentum, weight_decay=args.weight_decay)
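Putting the two steps together, a minimal end-to-end sketch; the model, the fc head choice, and the hyperparameter values below are illustrative placeholders, not the original code's settings:

import torch
import torchvision

model = torchvision.models.resnet18(weights=None)   # assumed stand-in model

# Step 1: freeze the backbone, keep the classification head trainable.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

# Step 2: hand only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    filter(lambda para: para.requires_grad, model.parameters()),
    lr=0.01, momentum=0.9, weight_decay=1e-4)       # placeholder hyperparameters

# One dummy update: gradients and parameter updates only touch model.fc.
out = model(torch.randn(2, 3, 224, 224))
out.sum().backward()
optimizer.step()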
Freezing part of the parameters while training
Original post: https://www.cnblogs.com/hizhaolei/p/10624196.html