Summary of methods for getting a layer's output shape in Keras


【Date】2018.12.24

【Title】Summary of methods for getting a layer's output shape in Keras

In Keras, to obtain a layer's output shape, first get hold of the layer object, then read the layer's output or output_shape attribute (to get the layer's input shape, use input/input_shape in the same way). The two attributes return different things:

output: the layer's symbolic output tensor, from which the shape can be read.

output_shape: the output shape as a plain Python tuple, for example (None, 94, 94, 64).
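As a minimal sketch of the difference between the two attributes (assuming TensorFlow's bundled Keras; the tiny one-layer model and the name 'hidden' are invented for illustration):

```python
# Compare .output (a symbolic tensor) with .output_shape (a plain tuple).
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

x = Input(shape=(8,))
h = Dense(4, name='hidden')(x)
model = Model(inputs=x, outputs=h)

layer = model.get_layer('hidden')
print(layer.output)        # symbolic tensor, e.g. a KerasTensor with shape (None, 4)
print(layer.output_shape)  # plain tuple: (None, 4)
```

The leading None is the batch dimension, which is unknown until data is fed in.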

There are two ways to obtain the layer object: one is model.get_layer(), the other is model.layers[index].

Of course, you can also call model.summary() to print the whole model, which includes every layer's output shape.

Method 1: use model.get_layer(name=None, index=None), which retrieves a layer object by layer name or by index.

Specifically:

1.1 Output of a specific layer:

model.get_layer(index=0).output or

model.get_layer(index=0).output_shape

1.2 Outputs of all layers:

for i in range(len(model.layers)):
    print(model.get_layer(index=i).output)
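The examples above use the index argument only; get_layer also accepts a layer name, which is often more robust than counting indices. A short sketch (assuming tf.keras; the toy model and the name 'conv_1' are invented for illustration):

```python
# Retrieve the same layer by name and by index; both expose output_shape.
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

x = Input(shape=(32, 32, 3))
y = Conv2D(16, (3, 3), padding='same', name='conv_1')(x)
model = Model(inputs=x, outputs=y)

print(model.get_layer(name='conv_1').output_shape)  # (None, 32, 32, 16)
print(model.get_layer(index=1).name)                # index 0 is the Input layer
```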

Method 2: use model.layers[index] to obtain the layer object; the rest is the same as Method 1.

2.1 Output of a specific layer:

model.layers[0].output or

model.layers[0].output_shape

2.2 Outputs of all layers:

for layer in model.layers:
    print(layer.output)

【Test Code】

from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D

# Build a small VGG-style network on 96x96x3 inputs.
x = Input(shape=(96, 96, 3))
conv1_1 = Conv2D(64, kernel_size=(3, 3), padding='valid', activation='relu', name='conv_1')(x)
conv1_2 = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu', name='conv_2')(conv1_1)
pool1_1 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_frame_1')(conv1_2)
conv1_3 = Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu', name='conv_3')(pool1_1)
conv1_4 = Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu', name='conv_4')(conv1_3)
pool1_2 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_frame_2')(conv1_4)
conv_5 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_5')(pool1_2)
conv_6 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_6')(conv_5)
conv_7 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_7')(conv_6)
pool_3 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_3')(conv_7)
conv_8 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_8')(pool_3)
conv_9 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_9')(conv_8)
conv_10 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_10')(conv_9)
pool_4 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_4')(conv_10)
conv_11 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_11')(pool_4)
conv_12 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_12')(conv_11)
conv_13 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_13')(conv_12)
pool_5 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_5')(conv_13)
flatten = Flatten()(pool_5)
fc1 = Dense(256, activation='relu')(flatten)
out_put = Dense(2, activation='softmax')(fc1)

model = Model(inputs=x, outputs=out_put)
model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

print('method 3:')
model.summary()  # method 3: print the whole model, including output shapes

print('method 1:')
for i in range(len(model.layers)):
    print(model.get_layer(index=i).output)

print('method 2:')
for layer in model.layers:
    print(layer.output_shape)
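As a quick sanity check on the spatial sizes the script should report (a hand calculation, not the script's actual output): a 'valid' 3x3 convolution shrinks each spatial dimension by 2, 'same' convolutions preserve it, and each 2x2 pooling stage halves it with floor division.

```python
# Trace the spatial size through the model above: conv_1 is 'valid',
# all other convs are 'same', and there are five MaxPooling2D stages.
size = 96
size -= 2           # conv_1, padding='valid': 96 -> 94
for _ in range(5):  # pool_frame_1/2, pool_final_3/4/5
    size //= 2      # 94 -> 47 -> 23 -> 11 -> 5 -> 2
print(size)         # 2, so Flatten yields 2 * 2 * 512 = 2048 features
```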

【Run Results】
