Implementing Linear Regression with a Neural Network


import tensorflow as tf
import numpy as np

# Synthetic training data (assumed here; the original snippet references x_data and
# y_data without defining them): 100 points on the line y = 0.1 * x + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = 0.1 * x_data + 0.3

w = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='w')  # randomly initialize the weight between -1 and 1
b = tf.Variable(tf.zeros([1]), name='b')  # initialize to 0; [1] is the shape
y = w * x_data + b  # the linear model
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')  # mean squared error; reduce_mean takes the average
optimizer = tf.train.GradientDescentOptimizer(0.5)  # gradient descent; a 0.5 learning rate is on the large side
train = optimizer.minimize(loss, name='train')

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print("w =", sess.run(w), "b =", sess.run(b), "loss =", sess.run(loss))

for step in range(20):
    sess.run(train)
    # print the trained w and b after each step
    print("step =", step, "w =", sess.run(w), "b =", sess.run(b), "loss =", sess.run(loss))
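To make explicit what optimizer.minimize(loss) is doing above, here is a minimal NumPy-only sketch of the same mean-squared-error gradient-descent update. It is only an illustration of the arithmetic one optimizer step performs, not how TensorFlow computes gradients internally; the synthetic data and the 0.1 / 0.3 target line are assumptions carried over from the cleaned-up snippet.

import numpy as np

# Assumed synthetic data, matching the snippet above
x_data = np.random.rand(100).astype(np.float32)
y_data = 0.1 * x_data + 0.3

w = np.random.uniform(-1.0, 1.0)  # random initial weight in [-1, 1]
b = 0.0                           # bias initialized to 0
lr = 0.5                          # same learning rate as GradientDescentOptimizer(0.5)

for step in range(20):
    y = w * x_data + b
    err = y - y_data
    grad_w = 2.0 * np.mean(err * x_data)  # d(loss)/dw for loss = mean((y - y_data)^2)
    grad_b = 2.0 * np.mean(err)           # d(loss)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should move toward 0.1 and 0.3; more steps give a closer fit

Because the data is noiseless, the loss is a simple quadratic in w and b, so repeated gradient steps drive the parameters toward the true line; 20 steps gets close, and running longer converges further.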
