
A complete breakdown of default_cfg for networks in the timm framework



These conclusions come from my own reading and debugging, and I would put them at about 99% correct; they should make how the timm framework works completely clear. Corrections are welcome.



Parsing pre_model.default_cfg in the timm framework. First, the tail of the model summary for mobilenetv2_100:



ReLU6-157           [-1, 1280, 7, 7]               0
AdaptiveAvgPool2d-158           [-1, 1280, 1, 1]               0


{'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_100_ra-b33bc2c4.pth',
 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
 'crop_pct': 0.875, 'interpolation': 'bicubic',
 'mean': (0.485, 0.456, 0.406), 'std': (0.229, 0.224, 0.225),
 'first_conv': 'conv_stem', 'classifier': 'classifier'}
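
A minimal sketch of where this dict comes from (assuming timm is installed; the model name 'mobilenetv2_100' is inferred from the weights URL above):

import timm

# Create the pretrained model and print its default_cfg dict.
pre_model = timm.create_model('mobilenetv2_100', pretrained=True)
print(pre_model.default_cfg)  # input_size, pool_size, crop_pct, interpolation, mean, std, ...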

pool_size: judging from this, it corresponds to the final AdaptiveAvgPool2d layer, which is PyTorch's built-in adaptive pooling. With adaptive pooling you only specify the desired output shape (here 1x1), and the layer automatically picks a suitable pooling window; since the incoming feature map is 7x7, obviously a 7x7 window is what turns it into 1x1. So pool_size records the spatial size of the feature map right before the global pooling, at the default input_size.
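
A quick check of that behaviour with a random tensor (sizes taken from the mobilenetv2_100 summary above):

import torch
import torch.nn as nn

# 1280-channel, 7x7 feature map, as at the ReLU6-157 layer above.
x = torch.randn(1, 1280, 7, 7)

# AdaptiveAvgPool2d only takes the target output size; the pooling window
# (7x7 here) is derived automatically from the input size.
pool = nn.AdaptiveAvgPool2d(output_size=(1, 1))
print(pool(x).shape)  # torch.Size([1, 1280, 1, 1])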







The test results below confirm this as well; there the answer is revealed.


{'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ra-6c08e654.pth',
 'num_classes': 1000, 'input_size': (3, 600, 600), 'pool_size': (19, 19),
 'crop_pct': 0.949, 'interpolation': 'bicubic',
 'mean': (0.485, 0.456, 0.406), 'std': (0.229, 0.224, 0.225),
 'first_conv': 'conv_stem', 'classifier': 'classifier'}





     BatchNorm2d-767         [-1, 2560, 19, 19]           5,120
           Swish-768         [-1, 2560, 19, 19]               0
AdaptiveAvgPool2d-769           [-1, 2560, 1, 1]               0
SelectAdaptivePool2d-770           [-1, 2560, 1, 1]               0
          Linear-771                 [-1, 1000]       2,561,000
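
The (19, 19) pool_size matches the spatial size of the last feature map for a 600x600 input: EfficientNet-B7 downsamples by a total stride of 32, and with TF-style "same" padding ceil(600 / 32) = 19. A rough way to verify this (a sketch assuming timm is installed; forward_features is the backbone-only entry point timm models expose):

import torch
import timm

model = timm.create_model('tf_efficientnet_b7', pretrained=False)
model.eval()

# Run a dummy 600x600 image through the backbone only (no pooling / classifier).
with torch.no_grad():
    feats = model.forward_features(torch.randn(1, 3, 600, 600))
print(feats.shape)  # expected: torch.Size([1, 2560, 19, 19])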


The perfect preprocessing! This also answers how crop_pct is used. The answer: first resize the image to scale_size = int(math.floor(img_size / crop_pct)), then apply transforms.CenterCrop(img_size), and only then feed the image to the network. For example, with img_size = 224 and crop_pct = 0.875, scale_size = floor(224 / 0.875) = 256: resize to 256, then center-crop 224.

# Required imports for the example below.
import math

from PIL import Image
from torchvision import transforms

DEFAULT_CROP_PCT = 0.875


def get_transforms_eval(model_name, img_size=224, crop_pct=None):
    crop_pct = crop_pct or DEFAULT_CROP_PCT
    if 'dpn' in model_name:
        if crop_pct is None:
            # Use the default 87.5% crop for a model's native img_size,
            # but use 100% crop for larger than native, as it
            # improves test time results across all models.
            if img_size == 224:
                scale_size = int(math.floor(img_size / DEFAULT_CROP_PCT))
            else:
                scale_size = img_size
        else:
            scale_size = int(math.floor(img_size / crop_pct))
        normalize = transforms.Normalize(
            mean=[124 / 255, 117 / 255, 104 / 255],
            std=[1 / (.0167 * 255)] * 3)
    elif 'inception' in model_name:
        scale_size = int(math.floor(img_size / crop_pct))
        normalize = LeNormalize()  # Inception-style normalization helper defined in the original repo
    else:
        scale_size = int(math.floor(img_size / crop_pct))
        normalize = transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225])

    return transforms.Compose([
        # Resize to scale_size first (the interpolation setting goes here),
        # then center-crop back down to img_size.
        transforms.Resize(scale_size, Image.BICUBIC),
        transforms.CenterCrop(img_size),
        transforms.ToTensor(),
        normalize])
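
For completeness, newer timm versions can build this eval pipeline straight from default_cfg. The helper names below (resolve_data_config and create_transform from timm.data) are my assumption of the current API, so double-check them against your installed version:

import timm
from timm.data import resolve_data_config, create_transform

model = timm.create_model('tf_efficientnet_b7', pretrained=True)

# resolve_data_config reads input_size / crop_pct / interpolation / mean / std
# from the model's default_cfg; create_transform turns that into the
# resize -> center-crop -> to-tensor -> normalize pipeline described above.
config = resolve_data_config({}, model=model)
eval_transform = create_transform(**config)
print(config)
print(eval_transform)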

 


Original post: https://www.cnblogs.com/zhangbo2008/p/12902992.html
