N Ways to Add Network Layers in Keras


After building neural networks with Keras for a while, I want to write down the N ways of adding network layers in Keras, so I don't forget them later.

This article uses the MNIST handwritten digit data as the example, and explains each approach with a LeNet-style network.

Below are the module imports, data loading, and data preprocessing.

import keras
from keras.datasets import mnist
from keras.layers import Conv2D, MaxPool2D, Activation, Dropout, Flatten, Dense, Input
from keras.losses import categorical_crossentropy
from keras.models import Model, Sequential
from keras.optimizers import SGD
from keras.utils import to_categorical
import numpy as np

# Load MNIST, add a channel dimension, and one-hot encode the labels.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = np.expand_dims(x_train, 3)
x_test = np.expand_dims(x_test, 3)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
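To make the effect of the preprocessing concrete, here is a small sketch (my addition, not in the original article) that checks the resulting shapes and, optionally, scales the pixel values to [0, 1], which usually makes SGD training more stable:

# Assumes the loading/preprocessing code above has already run.
print(x_train.shape)   # (60000, 28, 28, 1) after expand_dims
print(y_train.shape)   # (60000, 10) after to_categorical

# Optional extra, not part of the original article: scale pixels to [0, 1].
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0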

Method 1: create a Sequential object, then use its add method to add the layers one at a time.

model = Sequential()
model.add(Conv2D(20, (3, 3), input_shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3])))
model.add(MaxPool2D((2, 2)))
model.add(Activation('tanh'))
model.add(Conv2D(30, (3, 3)))
model.add(MaxPool2D((2, 2)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(1000))
model.add(Dropout(0.5))
model.add(Dense(y_train.shape[1]))
model.add(Activation('softmax'))

model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
              loss=categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=1000, epochs=10)
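The original stops at fit. As a hedged follow-up sketch, you can also check the trained model on the held-out test set with evaluate (the batch_size value here is just an assumption):

# Assumes the model above has been compiled and fitted.
test_loss, test_acc = model.evaluate(x_test, y_test, batch_size=1000)
print('test loss:', test_loss)
print('test accuracy:', test_acc)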

Method 2: when constructing the Sequential object, put the layers into a list in order and pass the list as the constructor argument.

model = Sequential([
    Conv2D(20, (3, 3), input_shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3])),
    MaxPool2D((2, 2)),
    Activation('tanh'),
    Conv2D(30, (3, 3)),
    MaxPool2D((2, 2)),
    Activation('tanh'),
    Dropout(0.5),
    Flatten(),
    Dense(1000),
    Dropout(0.5),
    Dense(y_train.shape[1]),
    Activation('softmax'),
])

model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
              loss=categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=1000, epochs=10)
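The two Sequential styles can also be mixed. The sketch below is my addition (not from the original): it passes the convolutional part as a list, appends the classifier layers with add, and prints the resulting layer stack with summary:

# A hybrid of the two Sequential styles: a list for the feature extractor,
# add() for the classifier head.
model = Sequential([
    Conv2D(20, (3, 3), input_shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3])),
    MaxPool2D((2, 2)),
    Activation('tanh'),
])
model.add(Flatten())
model.add(Dense(y_train.shape[1]))
model.add(Activation('softmax'))
model.summary()   # confirm the layer stack that was built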

Method 3: define the input tensor yourself, propagate it forward through each layer until you reach the output, and then create a Model object, specifying the inputs and outputs.

x_i = Input((x_train.shape[1], x_train.shape[2], x_train.shape[3]))
x = Conv2D(20, (3, 3))(x_i)
x = MaxPool2D((2, 2))(x)
x = Activation('tanh')(x)
x = Conv2D(30, (3, 3))(x)
x = MaxPool2D((2, 2))(x)
x = Activation('tanh')(x)
x = Dropout(0.5)(x)
x = Flatten()(x)
x = Dense(1000)(x)
x = Dropout(0.5)(x)
x = Dense(y_train.shape[1])(x)
pred = Activation('softmax')(x)
model = Model(inputs=x_i, outputs=pred)

model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
              loss=categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=1000, epochs=10)
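One convenience of the functional API (my note, not in the original) is direct access to intermediate tensors: since the tensors are held in variables, a second Model sharing the same trained layers can be built from them. A minimal sketch, assuming the x_i, x, and model variables from the example above (logit_model is a hypothetical name):

# 'x' here is the output of the last Dense layer, i.e. the pre-softmax scores.
logit_model = Model(inputs=x_i, outputs=x)
logits = logit_model.predict(x_test[:5])
print(logits.shape)   # (5, 10): raw scores for the first five test images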
