TensorFlow Activation Functions


# encoding: utf-8

# Case 1: custom (asymmetric) loss function
import tensorflow as tf
import numpy as np

seed = 23455
cost = 1     # cost per unit
profit = 99  # profit per unit

rdm = np.random.RandomState(seed)
x = rdm.rand(32, 2)
y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x]
# generate noise: [0,1)/10 = [0,0.1); [0,0.1) - 0.05 = [-0.05, 0.05)
x = tf.cast(x, dtype=tf.float32)

w1 = tf.Variable(tf.random.normal([2, 1], stddev=1, seed=1))

epoch = 10000
lr = 0.002

for epoch in range(epoch):
    with tf.GradientTape() as tape:
        y = tf.matmul(x, w1)
        # charge `cost` per over-predicted unit, `profit` per under-predicted unit
        loss = tf.reduce_sum(tf.where(tf.greater(y, y_),
                                      (y - y_) * cost,
                                      (y_ - y) * profit))
    grads = tape.gradient(loss, w1)
    w1.assign_sub(lr * grads)

    if epoch % 500 == 0:
        print("after %d training steps,w1 is " % (epoch))
        print(w1.numpy(), "\n")
print("final w1 is: ", w1.numpy())

# Custom loss function
# Yogurt costs 1 yuan per unit; the profit per unit is 99 yuan.
# Because the cost is low and the profit is high, it pays to predict more;
# the trained coefficients come out greater than 1, biasing predictions upward.

after 0 training steps,w1 is 
[[2.8786578]
 [3.2517848]]

after 500 training steps,w1 is 
[[1.1460369]
 [1.0672572]]

after 1000 training steps,w1 is 
[[1.1364173]
 [1.0985414]]

after 1500 training steps,w1 is 
[[1.1267972]
 [1.1298251]]

after 2000 training steps,w1 is 
[[1.1758107]
 [1.1724023]]

after 2500 training steps,w1 is 
[[1.1453722]
 [1.0272155]]

after 3000 training steps,w1 is 
[[1.1357522]
 [1.0584993]]

after 3500 training steps,w1 is 
[[1.1261321]
 [1.0897831]]

after 4000 training steps,w1 is 
[[1.1751455]
 [1.1323601]]

after 4500 training steps,w1 is 
[[1.1655253]
 [1.1636437]]

after 5000 training steps,w1 is 
[[1.1350871]
 [1.0184573]]

after 5500 training steps,w1 is 
[[1.1254673]
 [1.0497413]]

after 6000 training steps,w1 is 
[[1.1158477]
 [1.0810255]]

after 6500 training steps,w1 is 
[[1.1062276]
 [1.1123092]]

after 7000 training steps,w1 is 
[[1.1552413]
 [1.1548865]]

after 7500 training steps,w1 is 
[[1.1248026]
 [1.0096996]]

after 8000 training steps,w1 is 
[[1.1151826]
 [1.0409834]]

after 8500 training steps,w1 is 
[[1.1055626]
 [1.0722672]]

after 9000 training steps,w1 is 
[[1.1545763]
 [1.1148446]]

after 9500 training steps,w1 is 
[[1.144956]
 [1.146128]]

final w1 is: [[1.1255957]
 [1.0237043]]
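To see why w1 drifts above 1, it helps to evaluate the asymmetric loss by hand for a single prediction. The sketch below reuses the cost/profit constants from the script; the concrete values 10, 11, and 9 are hypothetical, chosen only to expose the asymmetry:

import tensorflow as tf

cost, profit = 1.0, 99.0      # same constants as the training script
y_true = tf.constant([10.0])  # hypothetical true demand

def custom_loss(y_pred):
    # over-prediction wastes `cost` per unit; under-prediction loses `profit` per unit
    return tf.reduce_sum(tf.where(tf.greater(y_pred, y_true),
                                  (y_pred - y_true) * cost,
                                  (y_true - y_pred) * profit))

print(custom_loss(tf.constant([11.0])).numpy())  # over by one unit  ->  1.0
print(custom_loss(tf.constant([9.0])).numpy())   # under by one unit -> 99.0

Under-predicting is 99 times more expensive than over-predicting, so gradient descent settles on coefficients slightly greater than 1.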

# Case 2
# Cross-entropy loss: a smaller value means the prediction is closer to the label
loss_ce1 = tf.losses.categorical_crossentropy([1, 0], [0.6, 0.4])
loss_ce2 = tf.losses.categorical_crossentropy([1, 0], [0.8, 0.2])
print("loss_ce1:", loss_ce1)
print("loss_ce2:", loss_ce2)

loss_ce1: tf.Tensor(0.5108256, shape=(), dtype=float32)
loss_ce2: tf.Tensor(0.22314353, shape=(), dtype=float32)
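These numbers can be verified by hand: with a one-hot label, categorical cross-entropy H(y_, y) = -Σ y_ᵢ · ln(yᵢ) reduces to the negative log of the probability assigned to the true class. A minimal NumPy check:

import numpy as np

# label [1, 0] keeps only the first term of -sum(y_ * ln(y))
print(-np.log(0.6))  # 0.5108256...  -> matches loss_ce1
print(-np.log(0.8))  # 0.22314355... -> matches loss_ce2

Since [0.8, 0.2] puts more probability on the true class than [0.6, 0.4], its loss is smaller.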

# Case 3
# Combining softmax with cross-entropy: softmax first turns the raw logits into a probability distribution
y_ = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]])
y = np.array([[12, 3, 2], [3, 10, 1], [1, 2, 5], [4, 6.5, 1.2], [3, 6, 1]])
y_pro = tf.nn.softmax(y)
loss_ce1 = tf.losses.categorical_crossentropy(y_, y_pro)
loss_ce2 = tf.nn.softmax_cross_entropy_with_logits(y_, y)
print('Step-by-step result:\n', loss_ce1)
print('Combined result:\n', loss_ce2)

Step-by-step result:
 tf.Tensor(
[1.68795487e-04 1.03475622e-03 6.58839038e-02 2.58349207e+00
 5.49852354e-02], shape=(5,), dtype=float64)
Combined result:
 tf.Tensor(
[1.68795487e-04 1.03475622e-03 6.58839038e-02 2.58349207e+00
 5.49852354e-02], shape=(5,), dtype=float64)
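The two paths agree, but tf.nn.softmax_cross_entropy_with_logits consumes the raw logits directly, which avoids materializing the softmax and is more numerically stable when logits are large. As a sanity check, here is the first loss entry computed by hand with NumPy (the row [12, 3, 2] is taken from y above; its true class is index 0):

import numpy as np

logits = np.array([12.0, 3.0, 2.0])              # first row of y
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax by hand
print(-np.log(probs[0]))  # ~1.68795e-04, matches the first entry of both results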
