Implementing a Simple Convolutional Network in TensorFlow

This post implements a simple convolutional neural network in TensorFlow on the MNIST dataset: two convolutional layers followed by a fully connected layer, reaching about 99% test accuracy.


import tensorflow as tf
import gc
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST with one-hot labels (downloaded/extracted on first use)
mnist = input_data.read_data_sets("f:\zxy\python\mnist_data/", one_hot=True)

############ Create an interactive session #######
sess = tf.InteractiveSession()

def weight_variable(shape):
    # Truncated-normal initialization breaks symmetry between units
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Small positive bias helps avoid dead ReLU units
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
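
A quick illustration (my addition, not part of the original post) of why tf.truncated_normal is used here: samples farther than two standard deviations from the mean are re-drawn, so every initial weight is bounded.

# Illustrative check: with stddev = 0.1, every sample falls within (-0.2, 0.2)
samples = sess.run(tf.truncated_normal([10000], stddev=0.1))
print(samples.min(), samples.max())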

############## 2-D convolution ################
def conv2d(x, w):
    # Stride 1 in every dimension; 'SAME' padding keeps the spatial size unchanged
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

############## 2x2 max pooling ###################
def max_pool_2x2(x):
    # 2x2 window with stride 2 halves each spatial dimension
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
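
As a quick shape check (my addition, not from the original post), the two helpers can be probed with a dummy tensor: 'SAME' padding with stride 1 preserves 28x28, while each pooling halves it, which is why two pooling layers turn 28 into 7 below.

# Hypothetical probe tensor; shapes are known statically, so no session run is needed
probe = tf.zeros([1, 28, 28, 1])
print(conv2d(probe, weight_variable([5, 5, 1, 32])).shape)  # (1, 28, 28, 32)
print(max_pool_2x2(probe).shape)                            # (1, 14, 14, 1)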

# Define the inputs and reshape them to 28 x 28
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])  # -1: variable batch size; 28x28 images, single channel

############## First convolutional layer ###############
w_conv1 = weight_variable([5, 5, 1, 32])  # 5x5 kernels, 1 input channel, 32 kernels
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)  # 28x28 -> 14x14

############## Second convolutional layer ###############
w_conv2 = weight_variable([5, 5, 32, 64])  # 5x5 kernels, 32 input channels, 64 kernels
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)  # 14x14 -> 7x7

################# Fully connected layer ###############
# After two 2x2 poolings the 28x28 image is 7x7 with 64 channels, hence 7 * 7 * 64 inputs
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

############# Dropout layer #################
keep_prob = tf.placeholder(tf.float32)  # fed as 0.5 during training, 1.0 during evaluation
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
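
In TF 1.x, tf.nn.dropout zeroes each element with probability 1 - keep_prob and scales the survivors by 1 / keep_prob, so the expected activation is unchanged. A small illustrative check (my addition, not from the original post):

# Roughly half the entries come out as 0.0, the rest as 2.0
vals = sess.run(tf.nn.dropout(tf.ones([10]), keep_prob=0.5))
print(vals)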

############## Second fully connected layer ##################
w_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, w_fc2) + b_fc2)

############### Loss function + optimizer ###########
# Cross-entropy measures the gap between the predicted distribution and the true labels
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)  # minimize the loss with Adam
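
Taking tf.log of an explicit softmax can hit log(0) when a probability saturates. A more numerically stable variant (a sketch of an alternative, not what the original post does) feeds the pre-softmax logits to TensorFlow's fused op; predictions can still be read off with tf.nn.softmax(logits):

# Assumes the logits are taken before the softmax above
logits = tf.matmul(h_fc1_drop, w_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))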

############# Model accuracy ###################
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))  # compare predicted and true classes
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#################### Training ################
tf.global_variables_initializer().run()
for i in range(10000):
    batch = mnist.train.next_batch(50)
    if i % 300 == 0:
        # Evaluate on the current batch with dropout disabled (keep_prob = 1.0)
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step:%d, training accuracy %g" % (i, train_accuracy))
    # Train with 50% dropout
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g " % accuracy.eval(
    feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

Result:

Extracting f:\zxy\python\mnist_data/train-images-idx3-ubyte.gz
Extracting f:\zxy\python\mnist_data/train-labels-idx1-ubyte.gz
Extracting f:\zxy\python\mnist_data/t10k-images-idx3-ubyte.gz
Extracting f:\zxy\python\mnist_data/t10k-labels-idx1-ubyte.gz
step:0, training accuracy 0.02
step:300, training accuracy 0.88
step:600, training accuracy 0.98
step:900, training accuracy 0.96
step:1200, training accuracy 0.98
step:1500, training accuracy 0.98
step:1800, training accuracy 0.94
step:2100, training accuracy 0.96
step:2400, training accuracy 1
step:2700, training accuracy 1
step:3000, training accuracy 0.96
step:3300, training accuracy 0.98
step:3600, training accuracy 0.92
step:3900, training accuracy 0.98
step:4200, training accuracy 0.98
step:4500, training accuracy 0.98
step:4800, training accuracy 0.98
step:5100, training accuracy 0.98
step:5400, training accuracy 1
step:5700, training accuracy 1
step:6000, training accuracy 1
step:6300, training accuracy 1
step:6600, training accuracy 1
step:6900, training accuracy 0.98
step:7200, training accuracy 1
step:7500, training accuracy 1
step:7800, training accuracy 1
step:8100, training accuracy 1
step:8400, training accuracy 1
step:8700, training accuracy 1
step:9000, training accuracy 1
step:9300, training accuracy 1
step:9600, training accuracy 0.98
step:9900, training accuracy 1
test accuracy 0.9911
