MNIST Handwritten Digit Recognition (II)

2021-08-20 09:19:31

A multilayer perceptron implemented in TensorFlow recognizes the MNIST dataset, reaching roughly 98% test accuracy.

## Recognizing MNIST with a TensorFlow multilayer perceptron

```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("mnist_data/", one_hot=True)
sess = tf.InteractiveSession()

# Create Variables for the hidden-layer parameters and initialize them
in_units = 784   # number of input nodes (28x28 pixels)
h1_units = 300   # number of hidden nodes (anywhere in 200-1000 makes little difference for this model)
w1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))  # weights from a truncated normal, stddev 0.1
b1 = tf.Variable(tf.zeros([h1_units]))  # biases initialized to 0
w2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))

# Define the placeholder for the input x
x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)  # keep_prob is usually < 1 during training and 1 at test time

# Define the model: hidden layer with ReLU, then dropout, then the softmax output
hidden1 = tf.nn.relu(tf.matmul(x, w1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
y = tf.nn.softmax(tf.matmul(hidden1_drop, w2) + b2)
```
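To make the forward pass concrete, here is a small NumPy sketch (not part of the original post) of the same three steps: ReLU hidden layer, inverted dropout, and softmax. The sizes mirror the 784 → 300 → 10 model; the function name and the keep probability of 0.75 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2, keep_prob=1.0):
    """ReLU hidden layer -> inverted dropout -> softmax output."""
    hidden = np.maximum(0.0, x @ w1 + b1)          # ReLU
    if keep_prob < 1.0:
        # Inverted dropout: zero each unit with probability (1 - keep_prob),
        # scale survivors by 1/keep_prob so the expected activation is unchanged.
        mask = rng.random(hidden.shape) < keep_prob
        hidden = hidden * mask / keep_prob
    logits = hidden @ w2 + b2
    logits -= logits.max(axis=1, keepdims=True)    # subtract max for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)    # softmax: each row sums to 1

# Toy batch of 4 through the same 784 -> 300 -> 10 shape
x = rng.standard_normal((4, 784))
w1 = rng.standard_normal((784, 300)) * 0.1
b1 = np.zeros(300)
w2 = np.zeros((300, 10))
b2 = np.zeros(10)
probs = forward(x, w1, b1, w2, b2, keep_prob=0.75)
print(probs.shape)  # (4, 10)
```

Because `w2` starts at zero (as in the TensorFlow code above), the initial output distribution is uniform over the 10 classes; training moves it away from uniform.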

```python
# Define the cross-entropy loss
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

# Use the adaptive Adagrad optimizer
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)

# Train the model, feeding each mini-batch through the placeholders
tf.global_variables_initializer().run()
for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # keep_prob 0.75 is a typical training value; the original post does not specify one
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})

# Validate accuracy on the test set (keep_prob = 1, i.e. dropout disabled)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels,
                     keep_prob: 1.0}))
```
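As a sanity check on the loss and accuracy formulas above, here is a hedged NumPy equivalent (illustrative, not from the post): cross-entropy averages `-sum(y_ * log(y))` over the batch, and accuracy is the fraction of rows where the predicted and true argmax agree.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean over the batch of -sum_k y_true[k] * log(y_pred[k]);
    # eps guards against log(0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1)))

def accuracy(y_true, y_pred):
    # Fraction of rows whose predicted argmax matches the one-hot label
    return float(np.mean(np.argmax(y_pred, 1) == np.argmax(y_true, 1)))

# Two one-hot labels and two predicted distributions
y_true = np.array([[0., 1.], [1., 0.]])
y_pred = np.array([[0.2, 0.8], [0.6, 0.4]])
print(cross_entropy(y_true, y_pred))  # -(log 0.8 + log 0.6) / 2
print(accuracy(y_true, y_pred))       # both argmaxes match
```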
