TensorFlow in Practice: Backpropagation


Windows 10

Anaconda3 (64-bit)
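
The script below is written against the TensorFlow 1.x API (tf.placeholder, tf.Session, tf.train.GradientDescentOptimizer). If your Anaconda environment ships TensorFlow 2.x, one common workaround (my addition, not part of the original post) is to swap the plain import for the 1.x compatibility module, for example:

import tensorflow.compat.v1 as tf   # 1.x-style API provided by TensorFlow 2.x
tf.disable_v2_behavior()            # re-enables placeholders, sessions and graph mode

With that substitution for the "import tensorflow as tf" line, the rest of the code runs unchanged.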

import tensorflow as tf
import numpy as np

batch_size = 8                                  # number of samples fed in per training step
seed = 23455                                    # random seed

rng = np.random.RandomState(seed)
X = rng.rand(32, 2)                             # random 32x2 matrix: 32 samples with 2 features each
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]      # label a sample 1 if x0 + x1 < 1, otherwise 0
                                                # e.g. the first sample [0.835, 0.115] sums to 0.95 < 1, so its label is 1
print("x:\n", X)
print("y:\n", Y)

x = tf.placeholder(tf.float32, shape=(None, 2))     # placeholder for the input features
y_ = tf.placeholder(tf.float32, shape=(None, 1))    # placeholder for the ground-truth labels

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))    # 2x3 random weight matrix
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))    # 3x1 random weight matrix

a = tf.matmul(x, w1)                            # hidden layer: matrix multiplication
y = tf.matmul(a, w2)                            # y is the output layer

loss = tf.reduce_mean(tf.square(y - y_))                                # mean squared error
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)   # gradient-descent optimizer

with tf.Session() as sess:                      # the with block closes the session automatically when it exits
    init_op = tf.global_variables_initializer() # op that initializes all variables
    sess.run(init_op)                           # run it to actually initialize them
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))

    steps = 3000                                # number of training steps
    for i in range(steps):
        start = (i * batch_size) % 32           # start index of the current batch: 0, 8, 16, 24, 0, ...
        end = start + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})     # feed in one batch of training data
        if i % 500 == 0:                        # every 500 steps, print the loss
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})                # feed in all samples and labels
            print("after %d training steps,loss on all data is %g" % (i, total_loss))

    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    # feed in three test samples
    print("test:[0.2,0.3],[0.9,0.7],[1.5,2.1]\nresult:\n",
          sess.run(y, feed_dict={x: [[0.2, 0.3], [0.9, 0.7], [1.5, 2.1]]}))

x:[[ 0.83494319  0.11482951]

[ 0.66899751  0.46594987]

[ 0.60181666  0.58838408]

[ 0.31836656  0.20502072]

[ 0.87043944  0.02679395]

[ 0.41539811  0.43938369]

[ 0.68635684  0.24833404]

[ 0.97315228  0.68541849]

[ 0.03081617  0.89479913]

[ 0.24665715  0.28584862]

[ 0.31375667  0.47718349]

[ 0.56689254  0.77079148]

[ 0.7321604   0.35828963]

[ 0.15724842  0.94294584]

[ 0.34933722  0.84634483]

[ 0.50304053  0.81299619]

[ 0.23869886  0.9895604 ]

[ 0.4636501   0.32531094]

[ 0.36510487  0.97365522]

[ 0.73350238  0.83833013]

[ 0.61810158  0.12580353]

[ 0.59274817  0.18779828]

[ 0.87150299  0.34679501]

[ 0.25883219  0.50002932]

[ 0.75690948  0.83429824]

[ 0.29316649  0.05646578]

[ 0.10409134  0.88235166]

[ 0.06727785  0.57784761]

[ 0.38492705  0.48384792]

[ 0.69234428  0.19687348]

[ 0.42783492  0.73416985]

[ 0.09696069  0.04883936]]

y:[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]

w1:[[-0.81131822  1.48459876  0.06532937]

[-2.4427042   0.0992484   0.59122431]]

w2:[[-0.81131822]

[ 1.48459876]

[ 0.06532937]]

after 0 training steps,loss on all data is 5.13118

after 500 training steps,loss on all data is 0.429111

after 1000 training steps,loss on all data is 0.409789

after 1500 training steps,loss on all data is 0.399923

after 2000 training steps,loss on all data is 0.394146

after 2500 training steps,loss on all data is 0.390597

w1:[[-0.70006633  0.9136318   0.08953571]

[-2.3402493  -0.14641267  0.58823055]]

w2:[[-0.06024267]

[ 0.91956186]

[-0.0682071 ]]

test:[0.2,0.3],[0.9,0.7],[1.5,2.1]

result:

[[ 0.16510932]

[ 0.76494515]

[ 1.24338591]]
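
To make the backpropagation explicit, here is a small NumPy sketch (my own illustration, not code from the original post) of the gradients that GradientDescentOptimizer computes for this two-layer linear network with the mean-squared-error loss, followed by a check that reproduces the printed test results from the final weights:

import numpy as np

def backprop_step(X, Y, w1, w2, lr=0.001):
    """One gradient-descent update for y = (X @ w1) @ w2 with loss = mean((y - Y)**2)."""
    N = X.shape[0]
    a = X @ w1                       # forward pass: hidden layer, shape (N, 3)
    y = a @ w2                       # forward pass: output layer, shape (N, 1)
    loss = np.mean((y - Y) ** 2)

    grad_y = 2.0 * (y - Y) / N       # backward pass (chain rule): d loss / d y
    grad_w2 = a.T @ grad_y           # d loss / d w2
    grad_a = grad_y @ w2.T           # d loss / d a
    grad_w1 = X.T @ grad_a           # d loss / d w1

    w1 = w1 - lr * grad_w1           # gradient-descent update, same 0.001 learning rate
    w2 = w2 - lr * grad_w2
    return w1, w2, loss

# Check: inference is just the two matrix products, so the final weights printed
# above reproduce the test results.
w1_final = np.array([[-0.70006633,  0.9136318,   0.08953571],
                     [-2.3402493,  -0.14641267,  0.58823055]])
w2_final = np.array([[-0.06024267], [0.91956186], [-0.0682071]])
x_test = np.array([[0.2, 0.3], [0.9, 0.7], [1.5, 2.1]])
print(x_test @ w1_final @ w2_final)  # ~ [[0.1651], [0.7649], [1.2434]]

Iterating backprop_step over the same 32-sample batches drives the loss down in the same way as the TensorFlow script, although the exact numbers depend on how the weights are initialized.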
