Neural Network Comparison


Fixed number of layers

Variable number of layers

'''
An 11-line neural network ①
Fixed three layers, two classes
'''
# Only suitable for classes 0 and 1; otherwise convert the labels first

import numpy as np

x = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])

y = np.array([0,1,1,0]).reshape(-1,1)  # the reshape keeps the algorithm implementation concise

wi = 2*np.random.randn(3,5) - 1   # input-to-hidden weights
wh = 2*np.random.randn(5,1) - 1   # hidden-to-output weights

for j in range(10000):
    li = x
    lh = 1/(1 + np.exp(-np.dot(li, wi)))    # hidden layer, sigmoid activation
    lo = 1/(1 + np.exp(-np.dot(lh, wh)))    # output layer, sigmoid activation
    lo_delta = (y - lo) * (lo * (1 - lo))   # output error times sigmoid derivative
    lh_delta = np.dot(lo_delta, wh.T) * (lh * (1 - lh))
    wh += np.dot(lh.T, lo_delta)
    wi += np.dot(li.T, lh_delta)

print('Training result:', lo)
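The comment above says this version only handles labels 0 and 1. A minimal sketch of one way to do that conversion, assuming arbitrary binary labels in a hypothetical `raw_labels` array (not part of the original):

import numpy as np

raw_labels = np.array([-1, 1, 1, -1])     # illustrative labels, e.g. -1/+1
classes = np.unique(raw_labels)           # sorted unique labels, e.g. [-1, 1]
y = (raw_labels == classes[1]).astype(int).reshape(-1, 1)
# y is now [[0],[1],[1],[0]], in the form the network above expects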

'''
An 11-line neural network ①
Variable number of layers, two classes
'''
# Only suitable for classes 0 and 1; otherwise convert the labels first

import numpy as np

x = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])

y = np.array([0,1,1,0]).reshape(-1,1)  # the reshape keeps the algorithm implementation concise

neurals = [3,15,1]   # layer sizes: input, hidden, output

w = [np.random.randn(i,j) for i,j in zip(neurals[:-1], neurals[1:])] + [None]
l = [None] * len(neurals)
l_delta = [None] * len(neurals)

for j in range(1000):
    # forward pass
    l[0] = x
    for i in range(1, len(neurals)):
        l[i] = 1 / (1 + np.exp(-np.dot(l[i-1], w[i-1])))   # sigmoid activation
    # backward pass
    l_delta[-1] = (y - l[-1]) * (l[-1] * (1 - l[-1]))
    for i in range(len(neurals)-2, 0, -1):
        l_delta[i] = np.dot(l_delta[i+1], w[i].T) * (l[i] * (1 - l[i]))
    # weight updates
    for i in range(len(neurals)-2, -1, -1):
        w[i] += np.dot(l[i].T, l_delta[i+1])

print('Training result:', l[-1])
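Because depth lives entirely in the `neurals` list, inference also generalizes to any depth. A hedged sketch of a forward-pass helper (the `predict` name and the sample input are illustrative, not from the original):

import numpy as np

def predict(x_new, w, n_layers):
    # run the trained variable-depth network forward on new inputs;
    # `w` is the weight list built above (its trailing None is never touched)
    a = x_new
    for i in range(n_layers - 1):
        a = 1 / (1 + np.exp(-np.dot(a, w[i])))   # sigmoid, as in training
    return a

# e.g., after training: predict(np.array([[0, 1, 0]]), w, len(neurals))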

Fixed number of layers

Variable number of layers

'''
An 11-line neural network ①
Fixed three layers, multiple classes
'''
import numpy as np

x = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])

#y = np.array([0,1,1,0])  # two classes also work
y = np.array([0,1,2,3])   # multiple classes

wi = np.random.randn(3,5)
wh = np.random.randn(5,4)    # changed: output width equals the number of classes
bh = np.random.randn(1,5)
bo = np.random.randn(1,4)    # changed
epsilon = 0.01   # learning rate
lamda = 0.01     # regularization strength

for j in range(1000):
    # forward pass
    li = x
    lh = np.tanh(np.dot(li, wi) + bh)   # tanh activation
    lo = np.exp(np.dot(lh, wh) + bo)
    probs = lo / np.sum(lo, axis=1, keepdims=True)   # softmax
    # backward pass
    lo_delta = np.copy(probs)
    lo_delta[range(x.shape[0]), y] -= 1   # gradient of softmax cross-entropy
    lh_delta = np.dot(lo_delta, wh.T) * (1 - np.power(lh, 2))   # derivative of tanh
    # update weights and biases
    wh -= epsilon * (np.dot(lh.T, lo_delta) + lamda * wh)
    wi -= epsilon * (np.dot(li.T, lh_delta) + lamda * wi)
    bo -= epsilon * np.sum(lo_delta, axis=0, keepdims=True)
    bh -= epsilon * np.sum(lh_delta, axis=0)

print('Training result:', np.argmax(probs, axis=1))
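A small usage sketch (not in the original) showing how the trained weights classify a new sample after the loop above has run; `x_new` is an illustrative input row:

x_new = np.array([[1, 1, 0]])                    # hypothetical sample
lh_new = np.tanh(np.dot(x_new, wi) + bh)         # same forward pass as training
lo_new = np.exp(np.dot(lh_new, wh) + bo)
probs_new = lo_new / np.sum(lo_new, axis=1, keepdims=True)   # softmax
print('predicted class:', np.argmax(probs_new, axis=1))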

'''
An 11-line neural network ①
Variable number of layers, multiple classes
'''
import numpy as np

x = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])

#y = np.array([0,1,1,0])  # two classes also work
y = np.array([0,1,2,3])   # multiple classes

neurals = [3, 10, 8, 4]   # layer sizes: input, two hidden layers, output

w = [np.random.randn(i,j) for i,j in zip(neurals[:-1], neurals[1:])] + [None]
b = [None] + [np.random.randn(1,j) for j in neurals[1:]]
l = [None] * len(neurals)
l_delta = [None] * len(neurals)
epsilon = 0.01   # learning rate
lamda = 0.01     # regularization strength

for j in range(1000):
    # forward pass
    l[0] = x
    for i in range(1, len(neurals)-1):
        l[i] = np.tanh(np.dot(l[i-1], w[i-1]) + b[i])   # tanh activation
    l[-1] = np.exp(np.dot(l[-2], w[-2]) + b[-1])
    probs = l[-1] / np.sum(l[-1], axis=1, keepdims=True)   # softmax
    # backward pass
    l_delta[-1] = np.copy(probs)
    l_delta[-1][range(x.shape[0]), y] -= 1   # gradient of softmax cross-entropy
    for i in range(len(neurals)-2, 0, -1):
        l_delta[i] = np.dot(l_delta[i+1], w[i].T) * (1 - np.power(l[i], 2))   # derivative of tanh
    # update weights and biases
    b[-1] -= epsilon * np.sum(l_delta[-1], axis=0, keepdims=True)
    for i in range(len(neurals)-2, -1, -1):
        w[i] -= epsilon * (np.dot(l[i].T, l_delta[i+1]) + lamda * w[i])
        if i == 0: break
        b[i] -= epsilon * np.sum(l_delta[i], axis=0)
    # print the loss
    if j % 100 == 0:
        loss = np.sum(-np.log(probs[range(x.shape[0]), y]))
        loss += lamda/2 * np.sum([np.sum(np.square(wi)) for wi in w[:-1]])   # optional regularization term
        loss *= 1/x.shape[0]   # optional averaging over samples
        print('loss:', loss)

print('Training result:', np.argmax(probs, axis=1))
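A quick sanity check (an assumption, not part of the original) comparing the final predictions against the training labels:

pred = np.argmax(probs, axis=1)                    # predicted class per training row
print('training accuracy:', np.mean(pred == y))    # 1.0 means all four samples are fit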
