
markdown-demo

Posted: 2018-08-19 20:34:48


Results on SST-1 and SST-2:

| Model     | SST-1 | SST-2 |
|-----------|-------|-------|
| 1D-CNN    | 36.4  | 81.0  |
| kimCNN    | 46.7  | 86.3  |
| 1D-2D-CNN | 49.8  | 87.5  |

Model definition:

```python
import torch
import torch.nn as nn


class MYMODEL_V11(BasicModule):
    def __init__(self, opt):
        super(MYMODEL_V11, self).__init__()
        self.opt = opt
        self.model_name = 'mymodel_v11'
        self.batch_size = opt.batch_size
        self.hidden_dim = opt.hidden_dim
        self.num_layers = opt.lstm_layers
        self.mean = opt.lstm_mean
        self.vocab_size = opt.vocab_size
        self.embedding_dim = opt.embedding_dim
        self.label_size = opt.label_size
        self.in_channel = 1
        self.kernel_nums = opt.clstm_kernel_nums
        self.kernel_sizes = opt.clstm_kernel_sizes
        self.ks2 = opt.kernel_sizes
        self.kn2 = opt.kernel_nums
        self.use_gpu = torch.cuda.is_available()
        self.max_seq_len = opt.max_seq_len
        # self.embedding = nn.Embedding(self.vocab_size + 2, self.embedding_dim, padding_idx=self.vocab_size + 1)
        self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim,
                                      padding_idx=self.vocab_size - 1, _weight=opt.embeddings)
        # self.embedding.weight = nn.Parameter(opt.embeddings)

        # 1D grouped convolutions: with groups=embedding_dim, each embedding
        # channel is convolved independently (no summation across input channels)
        self.convs = nn.ModuleList([
            nn.Conv1d(in_channels=self.embedding_dim,
                      out_channels=num * self.embedding_dim,
                      kernel_size=size,
                      padding=size // 2,
                      groups=self.embedding_dim)
            for size, num in zip(self.kernel_sizes, self.kernel_nums)])

        # 2D convolutions over the stacked 1D feature maps
        self.convs1 = nn.ModuleList([
            nn.Conv2d(self.kernel_nums[0] * len(self.kernel_sizes), num,
                      (size, self.embedding_dim), padding=(size // 2, 0))
            for size, num in zip(self.ks2, self.kn2)])

        # LSTM
        # self.lstm = nn.LSTM(opt.embedding_dim, self.hidden_dim, num_layers=self.num_layers, dropout=opt.keep_dropout)
        self.bilstm = nn.LSTM(opt.embedding_dim, opt.hidden_dim // 2,
                              num_layers=self.num_layers, dropout=opt.keep_dropout,
                              bidirectional=True)
        # self.hidden = self.init_hidden(self.num_layers, opt.batch_size)
        # linear
        self.hidden2label1 = nn.Linear(self.hidden_dim, self.hidden_dim // 2)
        self.hidden2label2 = nn.Linear(self.hidden_dim // 2, self.label_size)
        # dropout
        self.dropout = nn.Dropout(opt.keep_dropout)
        self.bn1 = nn.BatchNorm1d(self.hidden_dim // 2)
        self.bn2 = nn.BatchNorm1d(self.label_size)
```
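The `padding=size // 2` in the `Conv1d` layers is what allows feature maps from different kernel sizes to be stacked: for odd kernel sizes it preserves the sequence length exactly, while even sizes add one step. A quick sanity check of PyTorch's `Conv1d` output-length formula (a standalone sketch, independent of the model above):

```python
def conv1d_out_len(l_in, kernel_size, padding, stride=1, dilation=1):
    """Output length of nn.Conv1d, per the PyTorch docs."""
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# With padding = size // 2: odd kernels preserve the length, even kernels add 1.
for size in (3, 4, 5):
    print(size, conv1d_out_len(50, size, padding=size // 2))
# → 3 50
#   4 51
#   5 50
```

The same shape reasoning explains the BiLSTM setup: with `hidden_size=opt.hidden_dim // 2` and `bidirectional=True`, the two directions are concatenated, so each time step again yields `hidden_dim` features, which matches the input width of `hidden2label1`.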



Original post: https://www.cnblogs.com/plusczh/p/9502420.html
