PyTorch: The LSTM Model in PyTorch


The LSTM in PyTorch is defined by the standard gate equations (as given in the torch.nn.LSTM documentation):
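$$
\begin{aligned}
i_t &= \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\
f_t &= \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\
g_t &= \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\
o_t &= \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where $\sigma$ is the sigmoid function, $\odot$ is the element-wise product, and $h_t$ and $c_t$ are the hidden and cell states at time step $t$.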

The LSTM module in PyTorch is constructed as follows (a sketch of the main constructor arguments, per the torch.nn.LSTM documentation):
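torch.nn.LSTM(input_size, hidden_size, num_layers=1, bias=True,
              batch_first=False, dropout=0.0, bidirectional=False)

With the default batch_first=False, inputs and outputs use the (seq_len, batch, ...) layout described below.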

Input data format:

input(seq_len, batch, input_size) 

h0(num_layers * num_directions, batch, hidden_size) 

c0(num_layers * num_directions, batch, hidden_size)

Output data format:

output(seq_len, batch, hidden_size * num_directions) 

hn(num_layers * num_directions, batch, hidden_size) 

cn(num_layers * num_directions, batch, hidden_size)
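
A quick shape check makes these formats concrete (a minimal sketch; the sizes seq_len=5, batch=3, input_size=10, hidden_size=20, num_layers=2 are arbitrary, and num_directions=1 because the LSTM is unidirectional):

import torch

lstm = torch.nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
inp = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
h0 = torch.zeros(2, 3, 20)    # (num_layers * num_directions, batch, hidden_size)
c0 = torch.zeros(2, 3, 20)    # same shape as h0
output, (hn, cn) = lstm(inp, (h0, c0))
print(output.shape)           # torch.Size([5, 3, 20]) -> (seq_len, batch, hidden_size * num_directions)
print(hn.shape, cn.shape)     # torch.Size([2, 3, 20]) each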

The complete example below puts these pieces together: a tiny LSTM part-of-speech tagger trained on two hand-built Chinese sentences.

import torch
import gensim

torch.manual_seed(2)

# Two toy sentences with their part-of-speech tag sequences.
datas = [('你 叫 什麼 名字 ?', 'n v n n f'),
         ('今天 天氣 怎麼樣 ?', 'n n adj f')]
words = [data[0].split() for data in datas]
tags = [data[1].split() for data in datas]

# Build word <-> id and tag <-> id mappings with gensim dictionaries.
id2word = gensim.corpora.Dictionary(words)
word2id = id2word.token2id
id2tag = gensim.corpora.Dictionary(tags)
tag2id = id2tag.token2id

def sen2id(inputs):
    return [word2id[word] for word in inputs]

def tags2id(inputs):
    return [tag2id[word] for word in inputs]

# print(sen2id('你 叫 什麼 名字'.split()))

def formart_input(inputs):
    return torch.tensor(sen2id(inputs), dtype=torch.long)

def formart_tag(inputs):
    return torch.tensor(tags2id(inputs), dtype=torch.long)

class LSTMTagger(torch.nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, target_size):
        super(LSTMTagger, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.target_size = target_size
        self.embedding = torch.nn.Embedding(self.vocab_size, self.embedding_dim)
        self.lstm = torch.nn.LSTM(self.embedding_dim, self.hidden_dim)
        self.out2tag = torch.nn.Linear(self.hidden_dim, self.target_size)
        self.log_softmax = torch.nn.LogSoftmax(dim=1)
        # Persistent hidden state (h0, c0), each of shape
        # (num_layers * num_directions, batch, hidden_size).
        self.hidden = (torch.zeros(1, 1, self.hidden_dim),
                       torch.zeros(1, 1, self.hidden_dim))

    def forward(self, inputs):
        embeds = self.embedding(inputs)
        # Detach the carried-over hidden state so backward() only sees the
        # current graph, not the previous iteration's already-freed one.
        hidden = tuple(h.detach() for h in self.hidden)
        out, self.hidden = self.lstm(embeds.view(-1, 1, self.embedding_dim), hidden)
        tags = self.log_softmax(self.out2tag(out.view(-1, self.hidden_dim)))
        return tags

model = LSTMTagger(3, 3, len(word2id), len(tag2id))
loss_function = torch.nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(100):
    model.zero_grad()
    input = formart_input('你 叫 什麼 名字'.split())
    # Note: this tag sequence differs from the one stored in datas for this sentence.
    tags = formart_tag('n n adj f'.split())
    out = model(input)
    loss = loss_function(out, tags)
    loss.backward()
    optimizer.step()
    print(loss.item())

# Predict tags for an unseen (shorter) sequence.
input = formart_input('你 叫 什麼'.split())
out = model(input)
out = torch.max(out, 1)[1]
print([id2tag[out[i].item()] for i in range(out.size(0))])
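
A note on the design: the hidden state is stored on the module and carried across sentences, which is why forward() detaches it before reuse; without the detach, each backward() would have to retain the entire history (retain_graph=True) and the graph would grow with every iteration. Since the training sentences here are independent, re-initializing self.hidden to zeros for each sentence would be an equally valid, and arguably more common, choice.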
