Building a face-recognition convolutional neural network with Keras


Olivetti Faces is a relatively small face database consisting of 400 images of 40 people, i.e., 10 face images per person. Each image is 8-bit grayscale, with pixel values between 0 and 255 (standalone copies of the dataset are often distributed as 64×64 images). The copy used here is a single composite image, the olivettifaces.gif file hosted by New York University, of size 1190×942 containing a 20×20 grid of faces, so each face crop is (1190/20)×(942/20), i.e., 57×47 = 2679 pixels. This composite image is the training data for this article: 400 samples, 40 classes.

input_faces.py: reading the dataset

#coding:utf-8
"""read_data_sets"""
from PIL import Image
import numpy as np

def read_data_sets(dataset_path):
    img = Image.open(dataset_path)
    # np.asarray() converts the grayscale image to a float array, scaled into [0, 1)
    img_ndarray = np.asarray(img, dtype='float64') / 256
    # 400 faces in total, each of size 57*47 = 2679
    faces = np.empty((400, 2679))
    # copy each face crop from the composite image into faces
    for row in range(20):
        for column in range(20):
            # the first face of the first row goes into faces[0]; each face is 57*47
            faces[row * 20 + column] = np.ndarray.flatten(
                img_ndarray[row * 57:(row + 1) * 57, column * 47:(column + 1) * 47])
    # build the labels: 400 labels, 40 classes
    label = np.empty(400)
    for i in range(40):
        label[i * 10:i * 10 + 10] = i
    label = label.astype(int)
    # split: 320 training samples, 80 test samples
    train_data = np.empty((320, 2679))
    train_label = np.empty(320)
    test_data = np.empty((80, 2679))
    test_label = np.empty(80)
    # fill the train/test arrays
    for i in range(40):
        # for each person, the first 8 faces go to train and faces 9-10 to test,
        # in this order for all 400 images
        train_data[i * 8:i * 8 + 8] = faces[i * 10:i * 10 + 8]
        train_label[i * 8:i * 8 + 8] = label[i * 10:i * 10 + 8]
        test_data[i * 2:i * 2 + 2] = faces[i * 10 + 8:i * 10 + 10]
        test_label[i * 2:i * 2 + 2] = label[i * 10 + 8:i * 10 + 10]
    data = [(train_data, train_label), (test_data, test_label)]
    return data

#(train_data, train_label), (test_data, test_label) = read_data_sets('olivettifaces.gif')
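As a quick sanity check, a minimal sketch (assuming olivettifaces.gif sits in the working directory) that verifies the returned arrays against the counts above:

import input_faces

(train_data, train_label), (test_data, test_label) = input_faces.read_data_sets('olivettifaces.gif')
print(train_data.shape, train_label.shape)  # expected: (320, 2679) (320,)
print(test_data.shape, test_label.shape)    # expected: (80, 2679) (80,)
print(train_label[:10])                     # the first 8 samples belong to person 0, the next 2 to person 1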

Building the convolutional neural network to recognize faces:

#coding:utf-8
"""python 2.7
keras 2.0.4
"""
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten
from keras.optimizers import SGD
from sklearn.metrics import confusion_matrix, classification_report
import input_faces
import datetime

start_time = datetime.datetime.now()
# fix the random seed for reproducibility
np.random.seed(1000)

(train_data, train_labels), (test_data, test_labels) = input_faces.read_data_sets('olivettifaces.gif')

# reshape to (samples, channels, height, width) and one-hot encode the labels
train_data = train_data.reshape(train_data.shape[0], 1, 57, 47)
train_labels = np_utils.to_categorical(train_labels, num_classes=40)
test_data = test_data.reshape(test_data.shape[0], 1, 57, 47)
test_labels = np_utils.to_categorical(test_labels, num_classes=40)

# build the model
model = Sequential()
# (None,1,57,47) ---> (None,5,57,47)
model.add(Conv2D(5, (3, 3), padding='same', data_format='channels_first',
                 input_shape=(1, 57, 47)))
# (None,5,57,47) ---> (None,5,29,24)
model.add(MaxPooling2D(pool_size=(2, 2), padding='same', data_format='channels_first'))
# (None,5,29,24) ---> (None,10,29,24)
model.add(Conv2D(10, (3, 3), padding='same', data_format='channels_first'))
# (None,10,29,24) ---> (None,10,15,12)
model.add(MaxPooling2D(pool_size=(2, 2), padding='same', data_format='channels_first'))
model.add(Flatten())
model.add(Dense(1000))
model.add(Activation('relu'))
model.add(Dense(40))
model.add(Activation('softmax'))

model.compile(optimizer=SGD(lr=0.01, decay=1e-6, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(train_data, train_labels, epochs=10, batch_size=40, shuffle=True, verbose=1)

test_score, test_accuracy = model.evaluate(test_data, test_labels)

predictions = model.predict_classes(test_data, batch_size=10)

# confusion matrix (convert the one-hot test labels back to class indices first)
true_labels = np.argmax(test_labels, axis=1)
print(confusion_matrix(true_labels, predictions))
# classification report
print(classification_report(true_labels, predictions))

# how long the model took to train
end_time = datetime.datetime.now()
total_time = (end_time - start_time).seconds
print('total time is:', total_time)
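A quick way to confirm the layer-by-layer shape comments above, and to see the numbers behind the accuracy quoted below, is to print the model summary and the evaluation results (a minimal sketch, appended after the model.evaluate() call):

# per-layer output shapes and parameter counts; with channels_first the
# conv/pool outputs should read (None, 5, 57, 47), (None, 5, 29, 24),
# (None, 10, 29, 24) and (None, 10, 15, 12)
model.summary()
print('test loss:', test_score)
print('test accuracy:', test_accuracy)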

The accuracy on the test set reaches 0.97.

model.fit() takes two related arguments, shuffle and validation_split. validation_split carves a fraction off the training set to use as a validation set, and shuffle randomizes the order of the training samples. Note that the validation split is taken from the end of the arrays before any shuffling happens, so if the training data is sorted (for example, all positive samples followed by all negative ones), the validation set may end up containing only one kind of sample, and the model will then perform badly on the test set.
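A minimal sketch of the workaround (assuming the same train_data/train_labels arrays as above): permute the samples once yourself before calling fit(), so that the tail slice taken by validation_split contains a mix of all classes.

# shuffle the training arrays up front; validation_split slices the *last*
# 10% of the (unshuffled) arrays, so this keeps all 40 classes in the split
indices = np.random.permutation(train_data.shape[0])
train_data = train_data[indices]
train_labels = train_labels[indices]
model.fit(train_data, train_labels, epochs=10, batch_size=40,
          validation_split=0.1, shuffle=True, verbose=1)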
