BP Neural Network Implementation (Python)

2021-10-04 13:31:56 · 4,529 characters · 6,216 reads

I recently followed Zhejiang University's machine learning course, studied some more material, and implemented a BP (backpropagation) neural network in Python.

The code is as follows:

```python
import numpy as np


def make_matrix(m, n):
    # Random weight matrix with entries drawn uniformly from [0, 1)
    return np.random.random(size=(m, n))


# Define sigmoid and its derivative
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def sigmoid_derivative(y):
    # y is assumed to already be sigmoid(z), so the derivative is y * (1 - y)
    return y * (1 - y)


# Define the network class
class BPNeuralNetwork:
    def __init__(self):
        # Layer sizes
        self.input_n = 0
        self.hidden_n = 0
        self.output_n = 0
        # Activations
        self.input_cells = []
        self.hidden_cells = []
        self.output_cells = []
        # Weights
        self.input_weights = []
        self.output_weights = []
        # Correction (momentum) matrices
        self.input_correction = []
        self.output_correction = []

    def set_up(self, ni, nh, no):
        self.input_n = ni
        self.hidden_n = nh
        self.output_n = no
        # Initialize cells
        self.input_cells = [1.0] * self.input_n
        self.hidden_cells = [1.0] * self.hidden_n
        self.output_cells = [1.0] * self.output_n
        # Initialize weights
        self.input_weights = make_matrix(self.input_n, self.hidden_n)
        self.output_weights = make_matrix(self.hidden_n, self.output_n)
        # Initialize correction (momentum) matrices
        self.input_correction = make_matrix(self.input_n, self.hidden_n)
        self.output_correction = make_matrix(self.hidden_n, self.output_n)

    # Forward pass
    def predict(self, inputs):
        for i in range(self.input_n):
            self.input_cells[i] = inputs[i]
        # Hidden layer
        for j in range(self.hidden_n):
            total = 0.0
            for i in range(self.input_n):
                # For continuous targets the activation function needs changing
                total += self.input_cells[i] * self.input_weights[i][j]
            self.hidden_cells[j] = sigmoid(total)
        # Output layer
        for k in range(self.output_n):
            total = 0.0
            for j in range(self.hidden_n):
                total += self.hidden_cells[j] * self.output_weights[j][k]
            self.output_cells[k] = sigmoid(total)
        return self.output_cells[:]

    # Backpropagation: learning rate λ (`learn`), correction rate μ (`correct`)
    def update(self, x, y, learn, correct):
        self.predict(x)
        # Output layer deltas (for continuous targets the derivative term is not needed)
        output_deltas = [0.0] * self.output_n
        for o in range(self.output_n):
            error = y[o] - self.output_cells[o]
            output_deltas[o] = sigmoid_derivative(self.output_cells[o]) * error
        # Hidden layer deltas
        hidden_deltas = [0.0] * self.hidden_n
        for h in range(self.hidden_n):
            error = 0.0
            for o in range(self.output_n):
                error += output_deltas[o] * self.output_weights[h][o]
            hidden_deltas[h] = sigmoid_derivative(self.hidden_cells[h]) * error
        # Update hidden-to-output weights (with momentum)
        for h in range(self.hidden_n):
            for o in range(self.output_n):
                change = output_deltas[o] * self.hidden_cells[h]
                self.output_weights[h][o] += learn * change + correct * self.output_correction[h][o]
                self.output_correction[h][o] = change
        # Update input-to-hidden weights (with momentum)
        for i in range(self.input_n):
            for h in range(self.hidden_n):
                change = hidden_deltas[h] * self.input_cells[i]
                self.input_weights[i][h] += learn * change + correct * self.input_correction[i][h]
                self.input_correction[i][h] = change
        # Global error
        error = 0.0
        for o in range(len(y)):
            error += 0.5 * (y[o] - self.output_cells[o]) ** 2
        return error

    # Train: set the iteration limit, learning rate, and correction rate
    def train(self, x, y, limit=1000000, learn=0.05, correct=0.1):
        for m in range(limit):
            error = 0.0
            for i in range(len(x)):
                # One sample at a time
                x_item = x[i]
                y_item = y[i]
                error += self.update(x_item, y_item, learn, correct)
        return error
```
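One detail worth keeping in mind: `sigmoid_derivative` is applied to values that have already passed through `sigmoid`, relying on the identity σ'(z) = σ(z)(1 − σ(z)). A quick sanity check (a standalone sketch, using only NumPy) compares it against a central finite difference of `sigmoid` itself:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def sigmoid_derivative(y):
    # y is assumed to already be sigmoid(z)
    return y * (1 - y)


z = np.linspace(-5.0, 5.0, 11)
analytic = sigmoid_derivative(sigmoid(z))

# Central finite difference of sigmoid as an independent reference
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))
```

The two agree to within finite-difference error, which is why the backpropagation code can pass `self.output_cells[o]` and `self.hidden_cells[h]` (activated values) straight into the derivative.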

Finally, a quick test:

```python
cases = [[1, 1], [2, 2], [2, 4], [1, 2]]
labels = [[0], [0], [1], [1]]

t = BPNeuralNetwork()
t.set_up(2, 5, 1)
t.train(cases, labels, 10000, 0.05, 0.1)
for case in cases:
    print(t.predict(case))
```

The output:

```
[0.5285172149874504]
[0.5343332825010273]
[0.5379687945071842]
[0.5336526221969785]
```

The results are not very good; the code may need further refinement.
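One likely culprit is the weight initialization: `make_matrix` draws every weight uniformly from [0, 1), so all weights start positive and the hidden units begin nearly indistinguishable. A common tweak (a sketch of my own, not part of the original post, written in vectorized NumPy rather than the post's explicit loops) is to center the initial weights around zero; on the same toy data the network then separates the two classes:

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


# Same toy data as above
X = np.array([[1, 1], [2, 2], [2, 4], [1, 2]], dtype=float)
Y = np.array([[0], [0], [1], [1]], dtype=float)

# Centered initialization in [-0.5, 0.5) instead of [0, 1)
W1 = rng.random((2, 5)) - 0.5
W2 = rng.random((5, 1)) - 0.5

O0 = sigmoid(sigmoid(X @ W1) @ W2)  # predictions before training

learn = 0.05
for _ in range(10000):
    H = sigmoid(X @ W1)                        # hidden activations
    O = sigmoid(H @ W2)                        # outputs
    delta_o = (Y - O) * O * (1 - O)            # output deltas
    delta_h = (delta_o @ W2.T) * H * (1 - H)   # hidden deltas
    W2 += learn * H.T @ delta_o                # full-batch gradient step
    W1 += learn * X.T @ delta_h

O1 = sigmoid(sigmoid(X @ W1) @ W2)  # predictions after training
print(np.round(O1.ravel(), 3))
```

This sketch omits bias terms and momentum, just as the original code omits biases; the point is only that a zero-centered start lets the error actually decrease on this data.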
