Building a Neural Network with PyTorch


Before data goes into a neural network model, it needs a series of preprocessing steps: categorical variables can be one-hot encoded (a short sketch follows the standardization code below), and numeric features can be standardized so that their values fluctuate around 0.

import torch
import numpy as np
import pandas as pd
from torch.autograd import Variable

# Input features; the last column is the class label
feature = [[200, 6975, 1],
           [800, 56797, 0],
           [400, 45875, 1],
           [200, 59245, 0],
           [300, 469372, 1],
           [500, 32467, 1],
           [700, 183481, 1]]
feature = pd.DataFrame(feature)

# Standardize: rescale each feature column to zero mean and unit variance
for each in feature.columns[:-1]:
    mean, std = feature[each].mean(), feature[each].std()
    feature.loc[:, each] = (feature[each] - mean) / std
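The listing above standardizes only the numeric columns. For the categorical case mentioned at the start, a minimal sketch using pandas' get_dummies; the DataFrame and the column name "color" here are hypothetical, not part of the example data:

import pandas as pd

# Hypothetical categorical column (not in the example data above)
df = pd.DataFrame({"color": ["red", "green", "red", "blue"]})

# get_dummies expands the column into one 0/1 indicator column per category
one_hot = pd.get_dummies(df["color"], prefix="color")
print(one_hot)  # columns: color_blue, color_green, color_red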

# Hand-written version: define the network parameters manually
input_size = 2
hidden_size = 28
output_size = 1
batch_size = 4

w1 = Variable(torch.randn([input_size, hidden_size]), requires_grad=True)
b1 = Variable(torch.randn(hidden_size), requires_grad=True)
w2 = Variable(torch.randn([hidden_size, output_size]), requires_grad=True)

def neu(x):
    # One hidden layer with a sigmoid activation, then a linear output layer
    hidden = x.mm(w1) + b1.expand(x.shape[0], hidden_size)
    hidden = torch.sigmoid(hidden)
    output = hidden.mm(w2)
    return output

def cost(x, y):
    # Mean squared error
    error = torch.mean((x - y) ** 2)
    return error

def zero_grad():
    # Clear accumulated gradients before the next backward pass
    if w1.grad is not None and b1.grad is not None and w2.grad is not None:
        w1.grad.data.zero_()
        w2.grad.data.zero_()
        b1.grad.data.zero_()

def optimizer(lr):
    # Plain gradient descent: parameter <- parameter - lr * gradient
    w1.data.add_(-lr * w1.grad.data)
    w2.data.add_(-lr * w2.grad.data)
    b1.data.add_(-lr * b1.grad.data)

# A single pass over the data in mini-batches
batch_loss = []
for start in range(0, len(feature.index), batch_size):
    # Clamp the final batch to the end of the data
    end = start + batch_size if start + batch_size < len(feature.index) else len(feature.index)
    x = Variable(torch.FloatTensor(feature.iloc[start:end, :-1].values))
    # Reshape the labels to (batch, 1) so the loss matches pre's shape
    y = Variable(torch.FloatTensor(feature.iloc[start:end, 2].values).view(-1, 1))
    pre = neu(x)
    loss = cost(pre, y)
    batch_loss.append(loss.data.numpy())
    zero_grad()
    loss.backward()
    optimizer(0.0001)
print(batch_loss)

# Predict on a new sample (values assumed to already be standardized)
new_x = torch.FloatTensor([[0.324, -0.34431]])
new_y = neu(new_x)
print(new_y)
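A side note: since PyTorch 0.4, Variable has been merged into Tensor, so on current PyTorch the same parameters can be declared as plain tensors with requires_grad=True. A minimal sketch of the equivalent setup:

import torch

input_size, hidden_size, output_size = 2, 28, 1

# Equivalent parameter setup without the deprecated Variable wrapper
w1 = torch.randn(input_size, hidden_size, requires_grad=True)
b1 = torch.randn(hidden_size, requires_grad=True)
w2 = torch.randn(hidden_size, output_size, requires_grad=True)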

The same network can be built much more compactly with PyTorch's built-in layers, loss functions, and optimizers:

input_size = 2
hidden_size = 28
output_size = 1
batch_size = 4

neu = torch.nn.Sequential(
    torch.nn.Linear(input_size, hidden_size),
    torch.nn.Sigmoid(),
    torch.nn.Linear(hidden_size, output_size),
)
cost = torch.nn.MSELoss()
optimizer = torch.optim.SGD(neu.parameters(), lr=0.01)

losses = []
for i in range(1000):
    batch_loss = []
    for start in range(0, len(feature.index), batch_size):
        end = start + batch_size if start + batch_size < len(feature.index) else len(feature.index)
        x = Variable(torch.FloatTensor(feature.iloc[start:end, :-1].values))
        y = Variable(torch.FloatTensor(feature.iloc[start:end, 2].values).view(-1, 1))
        pre = neu(x)
        loss = cost(pre, y)
        batch_loss.append(loss.data.numpy())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Record the mean loss of this epoch
    losses.append(np.mean(batch_loss))
    # Print the loss every 100 epochs
    if i % 100 == 0:
        print(i, np.mean(batch_loss))
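For symmetry with the hand-written version, the trained Sequential model can be queried the same way; the input values below simply reuse the earlier (assumed already standardized) sample:

new_x = torch.FloatTensor([[0.324, -0.34431]])
print(neu(new_x))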
