Recurrent Neural Network (RNN)

2021-08-15 23:14:46 · 2437 words · 7460 reads

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

# load the data
mnist = input_data.read_data_sets('mnist_data', one_hot=True)

# each input image is 28*28; we feed one row at a time, 28 values per row
n_inputs = 28
# 28 rows in total, i.e. 28 time steps
max_time = 28
# number of hidden units in the LSTM
lstm_size = 100
# 10 classes
n_classes = 10
# 50 samples per batch
batch_size = 50
# number of batches per epoch
n_batch = mnist.train.num_examples // batch_size
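As a quick sanity check on the batch count: the standard MNIST training split has 55,000 images (that figure is an assumption here, not read from the code), so the floor division works out as follows.

```python
num_examples = 55000   # size of the usual MNIST training split (assumption)
batch_size = 50
n_batch = num_examples // batch_size  # floor division, as in the line above
print(n_batch)  # 1100
```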

# None here means the first dimension can have any length
x = tf.placeholder(tf.float32, [None, 784])
# the correct labels
y = tf.placeholder(tf.float32, [None, 10])

# initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1))
# initialize the biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# define the RNN
def rnn(x, weights, biases):
    # inputs = [batch_size, max_time, n_inputs]
    inputs = tf.reshape(x, [-1, max_time, n_inputs])
    # define the basic LSTM cell
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    # return the logits; softmax is applied inside the loss function
    # (calling tf.nn.softmax here and then feeding the result to
    # softmax_cross_entropy_with_logits would apply softmax twice)
    results = tf.matmul(final_state[1], weights) + biases
    return results
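To make the `final_state` tuple concrete, here is a minimal NumPy sketch of a single LSTM time step. This is not TensorFlow's implementation; the gate ordering, the single fused weight matrix, and the random initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # one affine map over [input, previous hidden state] for all four gates
    z = np.concatenate([x_t, h_prev], axis=-1) @ W + b
    i, f, g, o = np.split(z, 4, axis=-1)       # input, forget, candidate, output
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # new cell state
    h = sigmoid(o) * np.tanh(c)                        # new hidden state
    return h, c

n_inputs, lstm_size = 28, 100
rng = np.random.default_rng(0)
W = rng.standard_normal((n_inputs + lstm_size, 4 * lstm_size)) * 0.1
b = np.zeros(4 * lstm_size)
h = np.zeros(lstm_size)
c = np.zeros(lstm_size)
row = rng.standard_normal(n_inputs)  # one 28-pixel image row = one time step
h, c = lstm_step(row, h, c, W, b)
print(h.shape, c.shape)  # (100,) (100,)
```

Running this step 28 times, once per image row, yields the final `(c, h)` pair that `dynamic_rnn` returns as `final_state`.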

# compute the output of the RNN
prediction = rnn(x, weights, biases)
# loss function: softmax cross-entropy
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))
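The softmax cross-entropy can be checked by hand with NumPy (a small sketch using the numerically stabilized log-softmax, not TensorFlow's internal implementation):

```python
import numpy as np

def softmax_xent(logits, labels):
    # log-softmax with the max subtracted for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # cross-entropy per sample, averaged over the batch
    return -(labels * log_probs).sum(axis=1).mean()

logits = np.array([[2.0, 0.0], [0.0, 2.0]])   # toy values
labels = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot targets
loss = softmax_xent(logits, labels)
print(loss)  # log(1 + e^-2) ~= 0.1269
```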

# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# put the results into a list of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
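The argmax/equal/mean pattern behaves like this NumPy sketch (toy scores and labels, chosen for illustration):

```python
import numpy as np

# class scores for 4 samples and their one-hot labels (toy values)
pred = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([[0, 1], [1, 0], [0, 1], [0, 1]])  # last sample is misclassified
correct = np.equal(np.argmax(labels, 1), np.argmax(pred, 1))
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75
```

Note that argmax is unaffected by softmax (it is monotonic), so the accuracy is the same whether `prediction` holds logits or probabilities.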

# initialize the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("iter " + str(epoch) + ", testing accuracy= " + str(acc))

iter 0, testing accuracy= 0.7428

iter 1, testing accuracy= 0.7918

iter 2, testing accuracy= 0.8366

iter 3, testing accuracy= 0.8964

iter 4, testing accuracy= 0.9123

iter 5, testing accuracy= 0.9263
