TFboy養成記: Multilayer Perceptron (MLP)


The multilayer perceptron code here builds a simple three-layer neural network: an input layer, a hidden layer, and an output layer. The goal of the code is to fit a quadratic curve. To keep the data looking natural, noise with mean 0 and stddev 0.05 is added.
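Concretely, the training data is generated as follows (this snippet is taken from the full code at the end of the post):

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]  # 300 points in [-1, 1], as a (300, 1) column
noise = np.random.normal(0, 0.05, x_data.shape)  # mean 0, stddev 0.05
y_data = np.square(x_data) - 0.5 + noise         # quadratic curve plus the noise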

Code for adding a layer:

def addlayer(inputs, insize, outsize, activ_func=None):
    # inputs is the input tensor; insize/outsize are the input and output
    # sizes of the layer. activ_func is the activation function; the output
    # layer has no activation, so the default is None.
    with tf.name_scope("layer"):
        with tf.name_scope("weights"):
            weights = tf.Variable(tf.random_normal([insize, outsize]), name="W")
        bias = tf.Variable(tf.zeros([1, outsize]), name="bias")
        w_plus_b = tf.matmul(inputs, weights) + bias
        if activ_func is None:
            return w_plus_b
        else:
            return activ_func(w_plus_b)
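Each layer computes inputs·weights + bias: with inputs of shape (N, insize) and weights of shape (insize, outsize), the product has shape (N, outsize), and the (1, outsize) bias broadcasts across the N rows. A quick numpy sketch of just the shapes (illustrative, not part of the original code):

import numpy as np

inputs = np.ones((300, 1))     # N = 300 samples, insize = 1
weights = np.ones((1, 10))     # insize = 1, outsize = 10
bias = np.zeros((1, 10))       # one bias per output unit
out = inputs @ weights + bias  # bias broadcasts over the 300 rows
print(out.shape)               # (300, 10)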

Input:

with tf.name_scope(name="inputs"):  # name_scope is mainly so the graph groups nicely in TensorBoard
    xs = tf.placeholder(tf.float32, [None, 1], name="x_input")  # note: None, not -1
    ys = tf.placeholder(tf.float32, [None, 1], name="y_input")
l1 = addlayer(xs, 1, 10, activ_func=tf.nn.relu)
y_pre = addlayer(l1, 10, 1, activ_func=None)
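The None in the placeholder shapes leaves the batch dimension open, so the same graph accepts any number of samples at feed time. A hypothetical check (the names here are illustrative, not from the post):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1])
doubled = 2.0 * x  # any op on the placeholder keeps the open batch dimension
with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={x: np.ones((3, 1))}).shape)    # (3, 1)
    print(sess.run(doubled, feed_dict={x: np.ones((300, 1))}).shape)  # (300, 1)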

Other parts:

A few things to note:

with tf.name_scope("loss"):
    # reduction_indices=[1] works like numpy's axis argument: it chooses
    # whether to reduce across rows or columns. reduce_sum is mainly useful
    # for matrices; for a plain vector it can be skipped.
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - y_pre),
                                        reduction_indices=[1]))
with tf.name_scope("train"):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# In later versions initialize_all_variables() is gradually deprecated in
# favor of tf.global_variables_initializer(). Corrections welcome.
init = tf.initialize_all_variables()  # this step matters: whenever you use
                                      # Variables, run sess.run(init) before training
with tf.Session() as sess:
    # the FileWriter needs a live session's graph, so it goes inside the with block
    writer = tf.summary.FileWriter("logs/", sess.graph)
    sess.run(init)
    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            # whenever an op involves a placeholder, remember to pass feed_dict
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
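To make the reduction_indices comment concrete, here is a small toy example (the values are illustrative, not from the original post): reduction_indices=[1] sums across each row, i.e. per sample, and reduce_mean then averages over the batch.

import tensorflow as tf

sq = tf.constant([[1.0, 2.0],
                  [3.0, 4.0]])                         # pretend squared errors, shape (2, 2)
per_sample = tf.reduce_sum(sq, reduction_indices=[1])  # row sums -> [3.0, 7.0]
mse = tf.reduce_mean(per_sample)                       # batch average -> 5.0
with tf.Session() as sess:
    print(sess.run(per_sample))  # [3. 7.]
    print(sess.run(mse))         # 5.0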

Full code:

# -*- coding: utf-8 -*-
"""
Created on Tue Jun 13 15:41:23 2017

@author: jarvis
"""
import tensorflow as tf
import numpy as np


def addlayer(inputs, insize, outsize, activ_func=None):
    with tf.name_scope("layer"):
        with tf.name_scope("weights"):
            weights = tf.Variable(tf.random_normal([insize, outsize]), name="W")
        bias = tf.Variable(tf.zeros([1, outsize]), name="bias")
        w_plus_b = tf.matmul(inputs, weights) + bias
        if activ_func is None:
            return w_plus_b
        else:
            return activ_func(w_plus_b)


x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

with tf.name_scope(name="inputs"):
    xs = tf.placeholder(tf.float32, [None, 1], name="x_input")  # None, not -1
    ys = tf.placeholder(tf.float32, [None, 1], name="y_input")
l1 = addlayer(xs, 1, 10, activ_func=tf.nn.relu)
y_pre = addlayer(l1, 10, 1, activ_func=None)

with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - y_pre),
                                        reduction_indices=[1]))
with tf.name_scope("train"):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.initialize_all_variables()
with tf.Session() as sess:
    writer = tf.summary.FileWriter("logs/", sess.graph)
    sess.run(init)
    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
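After a run, the graph written by the FileWriter can be viewed by pointing TensorBoard at the log directory and opening the address it prints:

tensorboard --logdir logs/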

