【caffe Layer】Annotated Code

The LayerParameter section of src/caffe/proto/caffe.proto:

// NOTE
// Update the next available ID when you add a new LayerParameter field.
// LayerParameter next available layer-specific ID: 147 (last added: recurrent_param)
message LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the layer type
  repeated string bottom = 3; // the name of each bottom blob (the layer's inputs)
  repeated string top = 4; // the name of each top blob (the layer's outputs)

  // The train / test phase for computation (TRAIN or TEST).
  optional Phase phase = 10;

  // The amount of weight to assign each top blob in the objective.
  // Each layer assigns a default value, usually of either 0 or 1,
  // to each top blob. A weight of 0 means the top blob does not
  // contribute to the loss; 1 means it does.
  repeated float loss_weight = 5;
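  // Usage sketch (prototxt): weighting an auxiliary loss; the layer and
  // blob names and the 0.3 weight are illustrative, not from the original post:
  //   layer { name: "aux_loss" type: "SoftmaxWithLoss"
  //           bottom: "aux_fc" bottom: "label" loss_weight: 0.3 }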

  // Specifies training parameters (multipliers on global learning constants,
  // and the name and other settings used for weight sharing).
  repeated ParamSpec param = 6;

  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;

  // Specifies whether to backpropagate to each bottom. If unspecified,
  // Caffe will automatically infer whether each input needs backpropagation
  // to compute parameter gradients. If set to true for some inputs,
  // backpropagation to those inputs is forced; if set false for some inputs,
  // backpropagation to those inputs is skipped.
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;
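  // Usage sketch (prototxt): skip backprop into the first bottom of a
  // two-input layer, one flag per bottom (other layer fields elided):
  //   layer { ... propagate_down: false propagate_down: true }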

  // Rules controlling whether and when a layer is included in the network,
  // based on the current NetState.  You may specify a non-zero number of rules
  // to include OR exclude, but not both.  If no include or exclude rules are
  // specified, the layer is always included.  If the current NetState meets
  // ANY (i.e., one or more) of the specified rules, the layer is
  // included/excluded.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;
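  // Usage sketch (prototxt): a data layer included only in the TRAIN phase;
  // the layer and blob names are illustrative:
  //   layer { name: "data" type: "Data" top: "data" top: "label"
  //           include { phase: TRAIN } }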

  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;

  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;

  // Layer type-specific parameters.
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
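  // Usage sketch (prototxt): e.g. selecting the cuDNN implementation of a
  // convolution layer (assumes a cuDNN build):
  //   convolution_param { engine: CUDNN }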
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional BatchNormParameter batch_norm_param = 139;
  optional BiasParameter bias_param = 141;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional CropParameter crop_param = 144;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional ELUParameter elu_param = 140;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional InputParameter input_param = 143;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional ParameterParameter parameter_param = 145;
  optional PoolingParameter pooling_param = 121;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional RecurrentParameter recurrent_param = 146;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional ScaleParameter scale_param = 142;
  optional SigmoidParameter sigmoid_param = 124;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
}
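
To see how these fields fit together, here is a minimal sketch of a layer definition as it would appear in a network .prototxt file. The layer name, blob names, and parameter values are illustrative, not taken from the original post:

layer {
  name: "conv1"                        # LayerParameter.name
  type: "Convolution"                  # LayerParameter.type
  bottom: "data"                       # input (bottom) blob
  top: "conv1"                         # output (top) blob
  param { lr_mult: 1 decay_mult: 1 }   # ParamSpec for the weights
  param { lr_mult: 2 decay_mult: 0 }   # ParamSpec for the bias
  convolution_param {                  # layer type-specific parameters
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}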
Excerpted with reference to Zhao Yongke, "Deep Learning: 21 Days of Hands-On Caffe" (《深度学习 21天实战caffe》).

Original post: http://www.cnblogs.com/xiangfeidemengzhu/p/7099160.html
