Neural Network Algorithm Applications


On non-linear transformation functions

S-shaped (sigmoid) curves are used as the activation function:

1.1 The hyperbolic tangent (tanh)

1.2 The logistic function
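For reference, these are the two curves and their derivatives, exactly as the code below implements them:

\[
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \tanh'(x) = 1 - \tanh^{2}(x)
\]

\[
\mathrm{logistic}(x) = \frac{1}{1 + e^{-x}}, \qquad \mathrm{logistic}'(x) = \mathrm{logistic}(x)\,\bigl(1 - \mathrm{logistic}(x)\bigr)
\]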

Implementing a simple neural network algorithm

neuralnetwork.py

import numpy as np

def tanh(x):
    return np.tanh(x)

def tanh_deriv(x):
    # derivative of tanh, written in terms of the raw input x
    return 1.0 - np.tanh(x) * np.tanh(x)

def logistic(x):
    return 1 / (1 + np.exp(-x))

def logistic_deriv(x):
    # derivative of the logistic function
    return logistic(x) * (1 - logistic(x))

class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        """
        :param layers: a list containing the number of units in each layer;
                       should contain at least two values
        :param activation: the activation function to be used; can be
                           "logistic" or "tanh"
        """
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_deriv
        else:
            self.activation = tanh
            self.activation_deriv = tanh_deriv

        # randomly initialize the weight matrices in (-0.25, 0.25),
        # with an extra bias unit on every layer except the output layer
        self.weights = []
        for i in range(1, len(layers) - 1):
            self.weights.append((2 * np.random.random((layers[i - 1] + 1, layers[i] + 1)) - 1) * 0.25)
            self.weights.append((2 * np.random.random((layers[i] + 1, layers[i + 1])) - 1) * 0.25)

    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        X = np.atleast_2d(X)
        temp = np.ones([X.shape[0], X.shape[1] + 1])
        temp[:, 0:-1] = X  # adding the bias unit to the input layer
        X = temp
        y = np.array(y)

        for k in range(epochs):
            # stochastic training: pick one random sample per iteration
            i = np.random.randint(X.shape[0])
            a = [X[i]]

            for l in range(len(self.weights)):  # going forward through the network, layer by layer
                a.append(self.activation(np.dot(a[l], self.weights[l])))
            error = y[i] - a[-1]  # compute the error at the output layer
            deltas = [error * self.activation_deriv(a[-1])]  # delta (updated error) for the output layer

            # starting backpropagation
            for l in range(len(a) - 2, 0, -1):
                # compute the updated error (i.e. deltas) for each node,
                # going from the output layer back toward the input layer
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_deriv(a[l]))
            deltas.reverse()

            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0] + 1)
        temp[0:-1] = x  # add the bias unit to the input
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        print('a:', a)
        return a
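A compact restatement of the update rule that fit implements (layer indices are informal; η is learning_rate, and, as in the code, the derivative f' is evaluated on each layer's stored activation):

\[
\delta_{\text{out}} = (y - a_{\text{out}})\, f'(a_{\text{out}}), \qquad
\delta_{l} = \bigl(\delta_{l+1} W_{l}^{\top}\bigr)\, f'(a_{l}), \qquad
W_{l} \leftarrow W_{l} + \eta\, a_{l}^{\top}\, \delta_{l+1}
\]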

Application 1: testing on a simple non-linear dataset (XOR):

x1  x2  |  y
 0   0  |  0
 0   1  |  1
 1   0  |  1
 1   1  |  0

Code:

from neuralnetwork import NeuralNetwork
import numpy as np

nn = NeuralNetwork([2, 2, 1], 'tanh')
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
nn.fit(x, y)
for i in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(i, nn.predict(i))
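predict returns a real-valued output rather than a hard label. A minimal post-processing sketch, assuming a 0.5 cutoff (the cutoff is an assumption, not part of the original code):

# hypothetical: threshold the real-valued output to get a 0/1 class
for i in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    out = nn.predict(i)
    print(i, '->', int(out[0] > 0.5))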

Application 2: handwritten digit recognition:

Each image is an 8x8 pixel grid; the task is to recognize the digits 0 through 9.
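Before training, it can help to inspect the dataset's layout; a quick sketch using standard scikit-learn attributes:

from sklearn.datasets import load_digits

digits = load_digits()
print(digits.data.shape)       # (1797, 64): each 8x8 image flattened into 64 features
print(digits.images[0].shape)  # (8, 8): the original pixel grid of the first image
print(digits.target[:10])      # integer labels in the range 0-9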

Code:

# -*- coding: utf-8 -*-
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
from sklearn.preprocessing import LabelBinarizer
from neuralnetwork import NeuralNetwork

digits = load_digits()
x = digits.data
y = digits.target
x -= x.min()  # scale the pixel values into the range [0, 1]
x /= x.max()

nn = NeuralNetwork([64, 100, 10], 'logistic')
x_train, x_test, y_train, y_test = train_test_split(x, y)
labels_train = LabelBinarizer().fit_transform(y_train)  # one-hot encode the class labels
labels_test = LabelBinarizer().fit_transform(y_test)
print("starting fit")
nn.fit(x_train, labels_train, epochs=30000)

predictions = []
for i in range(x_test.shape[0]):
    output = nn.predict(x_test[i])
    predictions.append(np.argmax(output))  # the predicted class is the unit with the largest output
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
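As a quick follow-up, overall accuracy can be computed from the same predictions; accuracy_score is a standard scikit-learn metric, not part of the original listing:

from sklearn.metrics import accuracy_score

print('accuracy:', accuracy_score(y_test, predictions))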
