PyTorch Word Embedding

A detailed tutorial on word embedding operations in PyTorch.
In this chapter, we look at the word embedding model word2vec. The word2vec model generates word embeddings with the help of a group of related models. The original word2vec tool is implemented in pure C code, with gradients computed by hand; the PyTorch version below relies on autograd instead.
The implementation of the word2vec model in PyTorch is explained in the following steps:

Step 1

Import the libraries needed for the word embedding implementation, as shown below:
import torch
import torch.nn as nn
import torch.nn.functional as F
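
Before building the full model, it helps to see what nn.Embedding does on its own: it is a trainable lookup table from integer word IDs to dense vectors. A minimal sketch using the imports above (the vocabulary size and dimension here are arbitrary examples, not values from the model below):

table = nn.Embedding(10, 4)     # vocabulary of 10 words, 4-dimensional vectors
ids = torch.tensor([1, 3, 5])   # a batch of three word IDs
vectors = table(ids)            # looked-up vectors, shape (3, 4)
print(vectors.shape)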

Step 2

Implement the Skip-Gram model of word embedding as a class named SkipGramModel. It has the attributes emb_size, emb_dimension, u_embeddings and v_embeddings.
class SkipGramModel(nn.Module):
    def __init__(self, emb_size, emb_dimension):
        super(SkipGramModel, self).__init__()
        self.emb_size = emb_size
        self.emb_dimension = emb_dimension
        # u: center-word embeddings, v: context-word embeddings
        self.u_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
        self.v_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
        self.init_emb()

    def init_emb(self):
        # Center vectors start small and uniform; context vectors start at zero
        initrange = 0.5 / self.emb_dimension
        self.u_embeddings.weight.data.uniform_(-initrange, initrange)
        self.v_embeddings.weight.data.zero_()

    def forward(self, pos_u, pos_v, neg_v):
        # Log-sigmoid score of the true (center, context) pairs
        emb_u = self.u_embeddings(pos_u)
        emb_v = self.v_embeddings(pos_v)
        score = torch.mul(emb_u, emb_v).squeeze()
        score = torch.sum(score, dim=1)
        score = F.logsigmoid(score)
        # Log-sigmoid score of the negative samples: one batch
        # matrix product of sampled context vectors per center word
        neg_emb_v = self.v_embeddings(neg_v)
        neg_score = torch.bmm(neg_emb_v, emb_u.unsqueeze(2)).squeeze()
        neg_score = F.logsigmoid(-1 * neg_score)
        # Negative-sampling loss to be minimized
        return -1 * (torch.sum(score) + torch.sum(neg_score))

    def save_embedding(self, id2word, file_name, use_cuda):
        # Write the center-word embeddings as plain text: a
        # "vocab_size dimension" header, then one word per line
        if use_cuda:
            embedding = self.u_embeddings.weight.cpu().data.numpy()
        else:
            embedding = self.u_embeddings.weight.data.numpy()
        with open(file_name, 'w') as fout:
            fout.write('%d %d\n' % (len(id2word), self.emb_dimension))
            for wid, w in id2word.items():
                e = ' '.join(map(str, embedding[wid]))
                fout.write('%s %s\n' % (w, e))

def test():
    model = SkipGramModel(100, 100)
    id2word = dict()
    for i in range(100):
        id2word[i] = str(i)
    model.save_embedding(id2word, 'embedding.txt', use_cuda=False)
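
The class above only defines the loss; nothing in it updates the weights. Below is a minimal sketch of a single training step on toy data. The batch tensors and the number of negative samples are illustrative, not part of the original code; note also that because the embeddings are created with sparse=True, the optimizer must support sparse gradients (plain optim.SGD does):

import torch.optim as optim

model = SkipGramModel(emb_size=100, emb_dimension=20)
optimizer = optim.SGD(model.parameters(), lr=0.025)  # SGD accepts sparse gradients

# A toy batch of 4 (center, context) pairs with 5 negative samples each
pos_u = torch.randint(0, 100, (4,))    # center-word IDs
pos_v = torch.randint(0, 100, (4,))    # true context-word IDs
neg_v = torch.randint(0, 100, (4, 5))  # negative-sample IDs

optimizer.zero_grad()
loss = model(pos_u, pos_v, neg_v)
loss.backward()
optimizer.step()
print(loss.item())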

Step 3

Implement the main entry point so that the word embedding model runs and its output is written correctly:
if __name__ == '__main__':
   test()
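
Running the script writes the embeddings to embedding.txt in a word2vec-style text format. As a quick sanity check, the file can be read back like this (a sketch that assumes the format written by save_embedding above):

vectors = {}
with open('embedding.txt') as fin:
    vocab_size, dim = map(int, fin.readline().split())  # "vocab_size dimension" header
    for line in fin:
        parts = line.rstrip().split(' ')
        vectors[parts[0]] = [float(x) for x in parts[1:]]
print(vocab_size, dim, len(vectors['0']))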