Keras Study Notes 3: Writing Custom Keras Layers

2021-10-01 09:35:04 · 2608 characters · 1555 reads

To implement a simple custom layer, you only need to subclass the `Layer` class. The following example is from the official documentation:

```python
from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
```

As shown above, there are three methods we need to implement ourselves: `build`, `call`, and `compute_output_shape`.

A normal deep-learning workflow consists of three stages: forward, backward, and update. In Keras, for a single layer, `build(input_shape)` is where the weights are defined: trainable weights should be added to the list `self.trainable_weights`. Other relevant attributes include `self.non_trainable_weights` (a list) and `self.updates` (a list of `(tensor, new_tensor)` tuples that need updating). You can refer to the implementation of the `BatchNormalization` layer to see how these two attributes are used. This method must set `self.built = True`, which can be done by calling `super([Layer], self).build()`.
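To make the lifecycle above concrete, here is a minimal sketch in plain Python (no Keras) of how a layer defers weight creation to `build()`, records its trainable weights, and sets the `built` flag before computing a forward pass. The class name `SimpleLayer` and its internals are illustrative, not part of Keras.

```python
class SimpleLayer:
    """Toy layer mimicking the Keras build/call lifecycle (illustration only)."""

    def __init__(self, output_dim):
        self.output_dim = output_dim
        self.trainable_weights = []   # filled in build(), as in Keras
        self.built = False

    def build(self, input_shape):
        # Create one weight per (input, output) pair; real Keras code
        # would call self.add_weight() here instead.
        input_dim = input_shape[1]
        self.kernel = [[0.1] * self.output_dim for _ in range(input_dim)]
        self.trainable_weights.append(self.kernel)
        self.built = True             # Keras requires this flag to be set

    def __call__(self, x):
        # Build lazily on the first call, mirroring Keras behaviour.
        if not self.built:
            self.build((len(x), len(x[0])))
        # Plain matrix product, standing in for K.dot(x, self.kernel).
        return [[sum(xi[k] * self.kernel[k][j] for k in range(len(self.kernel)))
                 for j in range(self.output_dim)]
                for xi in x]

layer = SimpleLayer(output_dim=2)
out = layer([[1.0, 2.0, 3.0]])        # batch of one 3-dim sample; each output ≈ 0.6
```

The key design point mirrored here is lazy building: the weight shapes depend on the input shape, which is only known at first call, so weight creation cannot happen in `__init__`.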

Looking closer at the `add_weight` implementation (keras/engine/topology.py):

```python
def add_weight(self,
               name,
               shape,
               dtype=None,
               initializer=None,
               regularizer=None,
               trainable=True,
               constraint=None):
    """Adds a weight variable to the layer.

    # Arguments
        name: String, the name for the weight variable.
        shape: The shape tuple of the weight.
        dtype: The dtype of the weight.
        initializer: An Initializer instance (callable).
        regularizer: An optional Regularizer instance.
        trainable: A boolean, whether the weight should
            be trained via backprop or not (assuming
            that the layer itself is also trainable).
        constraint: An optional Constraint instance.

    # Returns
        The created weight variable.
    """
    initializer = initializers.get(initializer)
    if dtype is None:
        dtype = K.floatx()
    weight = K.variable(initializer(shape),
                        dtype=dtype,
                        name=name,
                        constraint=constraint)
    if regularizer is not None:
        self.add_loss(regularizer(weight))
    if trainable:
        self._trainable_weights.append(weight)
    else:
        self._non_trainable_weights.append(weight)
    return weight
```

As the code above shows, for a parameter created via `add_weight`, an optional `regularizer` contributes a loss term through `add_loss`; if `trainable` is `True`, the created weight is appended to `self._trainable_weights`, otherwise to `self._non_trainable_weights`.
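The bookkeeping described above can be sketched without Keras at all. The following is a simplified, illustrative re-implementation (names like `WeightOwner` are invented for this sketch; it is not the real Keras code) showing how a regularizer turns into a recorded loss and how the `trainable` flag selects which weight list the new variable joins:

```python
class WeightOwner:
    """Toy stand-in for a Keras Layer's weight bookkeeping (illustration only)."""

    def __init__(self):
        self._trainable_weights = []
        self._non_trainable_weights = []
        self.losses = []              # populated by add_loss()

    def add_loss(self, value):
        self.losses.append(value)

    def add_weight(self, name, shape, initializer=None,
                   regularizer=None, trainable=True):
        # Toy initializer: a flat list of zeros unless one is supplied.
        init = initializer or (lambda s: [0.0] * s[0])
        weight = init(shape)
        if regularizer is not None:
            # The regularizer's value becomes a loss term, as in Keras.
            self.add_loss(regularizer(weight))
        if trainable:
            self._trainable_weights.append(weight)
        else:
            self._non_trainable_weights.append(weight)
        return weight

owner = WeightOwner()
l2 = lambda w: sum(v * v for v in w)          # toy L2 penalty
w = owner.add_weight('kernel', (3,),
                     initializer=lambda s: [1.0] * s[0],
                     regularizer=l2, trainable=True)
```

After this call, `owner.losses` holds the L2 penalty of the ones-initialized weight (3.0), and the weight sits in `owner._trainable_weights`, mirroring the branch structure of the real `add_weight`.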

For the detailed training process, see keras/engine/training.py.
