Training and loading custom loss functions in Keras

2021-10-05 06:24:42 · 3952 characters · 1684 reads

Custom loss functions in Keras

Sometimes the loss functions that Keras provides do not meet our needs, and we have to supply our own, for example dice loss.

Dice loss is generally the negation of the dice coefficient, so first compute the dice coefficient:

import keras.backend as k

def dice_coef(y_true, y_pred, smooth, thresh):
    # threshold, then cast the boolean mask back to float so it can be multiplied
    y_pred = k.cast(k.greater(y_pred, thresh), 'float32')
    y_true_f = k.flatten(y_true)
    y_pred_f = k.flatten(y_pred)
    intersection = k.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (k.sum(y_true_f) + k.sum(y_pred_f) + smooth)

However, a Keras loss function may only take y_true and y_pred as arguments, so we use a function closure, that is, a function that returns a function:

def dice_loss(smooth, thresh):
    def dice(y_true, y_pred):
        return -dice_coef(y_true, y_pred, smooth, thresh)
    return dice

# build the model
model = my_model()

# get the loss function
model_dice = dice_loss(smooth=1e-5, thresh=0.5)

# compile the model
model.compile(loss=model_dice)
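The closure pattern itself is plain Python and can be checked without Keras at all. A minimal sketch, using NumPy in place of the Keras backend (`make_dice_loss` is a name chosen here for illustration):

```python
import numpy as np

def make_dice_loss(smooth):
    # the outer function captures `smooth`; the inner function has the
    # (y_true, y_pred) signature that Keras requires of a loss
    def dice_loss(y_true, y_pred):
        intersection = np.sum(y_true * y_pred)
        dice = (2. * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)
        return 1. - dice
    return dice_loss

loss_fn = make_dice_loss(smooth=1e-5)
y_true = np.array([1., 1., 0., 0.])
y_pred = np.array([1., 0., 0., 0.])
loss = loss_fn(y_true, y_pred)  # intersection = 1, sums = 3, dice ≈ 2/3, loss ≈ 1/3
```

Each call to `make_dice_loss` produces an independent loss function with its own captured `smooth`, which is exactly what `model.compile(loss=...)` receives above.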

A concrete example follows.

Multi-class semantic segmentation loss:

# multi-class dice
def dice_coef_fun(smooth=0.001):
    def dice_coef(y_true, y_pred):
        # sum over the spatial dims and the class channel (axes 1, 2, 3),
        # giving one dice score per sample
        intersection = k.sum(y_true * y_pred, axis=(1, 2, 3))
        union = k.sum(y_true, axis=(1, 2, 3)) + k.sum(y_pred, axis=(1, 2, 3))
        sample_dices = (2. * intersection + smooth) / (union + smooth)
        # average over the batch
        dices = k.mean(sample_dices, axis=0)
        return k.mean(dices)
    return dice_coef

def dice_coef_loss_fun(smooth=0.001):
    def dice_coef_loss(y_true, y_pred):
        return 1 - dice_coef_fun(smooth=smooth)(y_true=y_true, y_pred=y_pred)
    return dice_coef_loss
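Note that summing over axis=(1, 2, 3) collapses the class channel as well, so the result is one dice score per sample. If one score per class is wanted instead (with channels-last masks of shape (batch, H, W, classes)), sum over axis=(1, 2). A NumPy sketch of the difference, using hypothetical one-hot masks:

```python
import numpy as np

smooth = 0.001
# hypothetical one-hot masks: batch of 2, 4x4 images, 3 classes, channels-last
y_true = np.zeros((2, 4, 4, 3))
y_true[..., 0] = 1.0            # every pixel labelled class 0
y_pred = y_true.copy()          # perfect prediction

# axis=(1, 2, 3): class channel collapsed -> one dice per sample
per_sample = (2. * np.sum(y_true * y_pred, axis=(1, 2, 3)) + smooth) / \
             (np.sum(y_true, axis=(1, 2, 3)) + np.sum(y_pred, axis=(1, 2, 3)) + smooth)

# axis=(1, 2): class channel kept -> one dice per sample per class
per_class = (2. * np.sum(y_true * y_pred, axis=(1, 2)) + smooth) / \
            (np.sum(y_true, axis=(1, 2)) + np.sum(y_pred, axis=(1, 2)) + smooth)

print(per_sample.shape, per_class.shape)  # (2,) (2, 3)
```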

# binary classification
# dice loss 1
def dice_coef(y_true, y_pred, smooth):
    # y_pred = k.cast(k.greater(y_pred, thresh), dtype='float32')  # cast to float
    y_true_f = y_true  # k.flatten(y_true)
    y_pred_f = y_pred  # k.flatten(y_pred)
    intersection = k.sum(y_true_f * y_pred_f, axis=(0, 1, 2))
    denom = k.sum(y_true_f, axis=(0, 1, 2)) + k.sum(y_pred_f, axis=(0, 1, 2))
    return k.mean((2. * intersection + smooth) / (denom + smooth))

def dice_loss(smooth):
    def dice(y_true, y_pred):
        return 1 - dice_coef(y_true, y_pred, smooth)
    return dice
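As a sanity check on this formulation (a NumPy mirror of the backend code above, not the Keras version itself), a perfect prediction should give a loss near 0 and an empty prediction a loss near 1:

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth):
    # NumPy mirror of the Keras-backend dice_coef above
    intersection = np.sum(y_true * y_pred, axis=(0, 1, 2))
    denom = np.sum(y_true, axis=(0, 1, 2)) + np.sum(y_pred, axis=(0, 1, 2))
    return np.mean((2. * intersection + smooth) / (denom + smooth))

y_true = np.array([[[1., 0.], [0., 1.]]])                              # shape (1, 2, 2)
loss_perfect = 1 - dice_coef_np(y_true, y_true, 1e-5)                  # ~0.0
loss_empty = 1 - dice_coef_np(y_true, np.zeros_like(y_true), 1e-5)     # ~1.0
```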

Model training:

from keras.optimizers import Nadam

model_dice = dice_coef_loss_fun(smooth=1e-5)
model.compile(optimizer=Nadam(lr=2e-4), loss=model_dice, metrics=['accuracy'])

model_dice = dice_loss(smooth=1e-5)
# model_dice = generalized_dice_loss_fun(smooth=1e-5)
# model.compile(optimizer=Nadam(lr=2e-4), loss="binary_crossentropy", metrics=['accuracy'])
model.compile(optimizer=Nadam(lr=2e-4), loss=model_dice, metrics=['accuracy'])

Model loading (the key in custom_objects must match the name of the loss function the model was compiled with):

from keras.models import load_model

model = load_model("unet_membrane_int16.hdf5",
                   custom_objects={'dice_coef_loss': dice_coef_loss_fun(smooth=1e-5)})

model = load_model("vnet_s_extend_epoch110.hdf5",
                   custom_objects={'dice': dice_loss(smooth=1e-5)})

The loss functions above are written in a cumbersome way; they can also be written like this:

# parameter for the loss function
# metric function and loss function
def dice_coef(y_true, y_pred):
    smooth = 0.0005
    y_true_f = k.flatten(y_true)
    y_pred_f = k.flatten(y_pred)
    intersection = k.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (k.sum(y_true_f) + k.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)

# load the model
weight_path = './weights.h5'
model = load_model(weight_path,
                   custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})

One more note: when converting the Keras version of Faster R-CNN, the code contains a custom intermediate layer, RoiPoolingConv. Remember that the formal parameters of its __init__ must be given default values, or the model cannot be loaded normally.
