Problems Encountered While Debugging TensorFlow in Practice, and Their Solutions (2)

2021-08-15 19:39:00

Program source code:

# -*- encoding=utf-8 -*-
from numpy import genfromtxt
import numpy as np
import random
import sys
import csv
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn import preprocessing

reload(sys)
sys.setdefaultencoding("utf-8")

datapath = r"./bb.csv"
num_epochs1 = 10
rows = 100
turn = 2000

def read_data(file_queue):
    # Define the reader
    reader = tf.TextLineReader(skip_header_lines=1)
    key, value = reader.read(file_queue)
    # Define the decoder.
    # record_defaults fixes the shape and dtype of each column: for an m*n
    # CSV the list here is 1*n. If a column holds decimals it must default
    # to a float, i.e. [1] should become [1.0].
    defaults = [[''], ['null'], [''], [0.], [0.], [0.], [0.], [0], [""], [0], ['null'], [""]]
    # One default entry per column in the file.
    city, origin, destination, origin_lat, origin_lng, destination_lat, destination_lng, \
        distance, weature, duration, week_time, create_time = tf.decode_csv(records=value, record_defaults=defaults)
    #return tf.stack([sepallengthcm, sepalwidthcm, petallengthcm, petalwidthcm]), preprocess_op
    return distance, duration

def batch_input(filename, num_epochs):
    # Build a FIFO queue plus a QueueRunner
    file_queue = tf.train.string_input_producer(string_tensor=[filename], num_epochs=10)
    example, label = read_data(file_queue)
    min_after_dequeue = 100
    batch_size = 10
    capacity = min_after_dequeue + 3 * batch_size
    example_batch, label_batch = tf.train.shuffle_batch([example, label], batch_size=batch_size,
                                                        capacity=capacity,
                                                        min_after_dequeue=min_after_dequeue)
    return example_batch, label_batch

#examplebatch1, labelbatch1 = batch_input(datapath, num_epochs=100)

with tf.Session() as sess:
    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
    sess.run(init_op)
    examplebatch1, labelbatch1 = batch_input(datapath, num_epochs=100)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            example_batch, label_batch = sess.run([examplebatch1, labelbatch1])
            print("example_batch is:")
            print(example_batch)
    except tf.errors.OutOfRangeError:
        print('done training -- epoch limit reached')
    finally:
        coord.request_stop()
        coord.join(threads)
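As an aside, the `record_defaults` convention used in `read_data` of the listing above (one default value per column, whose Python type fixes the column's dtype, with the default substituted for empty fields) can be mimicked in plain Python. The column layout follows the `defaults` list in the listing; the helper name `decode_row` and the sample row are my own, not part of TensorFlow:

```python
import csv
import io

# One default per CSV column; the type of each default decides how the
# field is parsed (str stays str, float -> float, int -> int), mirroring
# how record_defaults fixes dtypes in tf.decode_csv.
DEFAULTS = ['', 'null', '', 0.0, 0.0, 0.0, 0.0, 0, '', 0, 'null', '']

def decode_row(line, defaults=DEFAULTS):
    fields = next(csv.reader(io.StringIO(line)))
    out = []
    for raw, default in zip(fields, defaults):
        if raw == '':                       # empty field -> use the default
            out.append(default)
        else:
            out.append(type(default)(raw))  # parse with the default's type
    return out

row = decode_row('bj,a,b,1.1,2.2,3.3,4.4,1200,sunny,600,mon,2021')
print(row[7], row[9])   # distance=1200, duration=600
```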

The program above is meant to read batches of data from a CSV file, but running it produced the following error:

File "better_nonlinear_one_input_batch1.py", line 64, in <module>
    coord.join(threads)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/coordinator.py", line 389, in join
    six.reraise(*self._exc_info_to_raise)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
    enqueue_callable()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1231, in _single_operation_run
    target_list_as_strings, status, None)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value input_producer/limit_epochs/epochs
    [[Node: input_producer/limit_epochs/CountUpTo = CountUpTo[T=DT_INT64, _class=["loc:@input_producer/limit_epochs/epochs"], limit=10, _device="/job:localhost/replica:0/task:0/device:cpu:0"](input_producer/limit_epochs/epochs)]]

Searching the web for similar errors, every answer said the same thing: add initialization, i.e. these two lines:

init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
sess.run(init_op)

But the code already contains these two lines, and the problem persisted. I was baffled for a while and even wondered whether the example programs themselves were broken, so I decided to track down the cause step by step. Here I must thank the reference article; without the program it provided, I might still be groping in the dark. Its author hit the same problem, but his program ran correctly after adding the two lines above. Digging into the details and comparing line by line, I finally located the issue. The source above contains this line:

examplebatch1, labelbatch1 = batch_input(datapath, num_epochs=100)

and its position is critical. Move it above

with tf.Session() as sess:

and the error goes away (compare the two positions of examplebatch1, labelbatch1 = batch_input(datapath, num_epochs=100) in the source above). I had originally placed it inside the with block, after sess.run(init_op), which caused this puzzling problem: batch_input is what creates string_input_producer's hidden epoch-counter local variable, so running the initializer before calling batch_input leaves that variable uninitialized.
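The root cause generalizes: in TF 1.x, an initializer op only covers the variables that exist in the graph when it runs; anything created afterwards stays uninitialized. A minimal plain-Python sketch of that failure mode, with no TensorFlow dependency (the `Var` class and helper names are mine, used purely as an analogy):

```python
# Analogy for the TF 1.x pitfall: run_init() initializes only the
# variables that already exist; a variable created afterwards is
# never initialized -- just like the epoch counter that batch_input
# creates after sess.run(init_op).
class Var(object):
    def __init__(self):
        self.value = None          # uninitialized until run_init()

registry = []

def make_var():                    # analogous to calling batch_input()
    v = Var()
    registry.append(v)
    return v

def run_init():                    # analogous to sess.run(init_op)
    for v in registry:
        v.value = 0

a = make_var()
run_init()                         # initializes a only
b = make_var()                     # created AFTER init -- left uninitialized

print(a.value)   # 0
print(b.value)   # None  <- the "uninitialized value" situation
```

Creating every graph node (here, calling `make_var`) before running the initializer is exactly the fix applied above: move `batch_input(...)` in front of `with tf.Session() as sess:`.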
