Autograd: Automatic Differentiation


import torch

# Create a tensor and set requires_grad=True to track its computation history
x = torch.ones(2, 2, requires_grad=True)
print(x)

tensor([[1., 1.],
        [1., 1.]], requires_grad=True)

y = x + 2
print(y)

tensor([[3., 3.],
        [3., 3.]], grad_fn=<AddBackward0>)

print(y.grad_fn)
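
Note that y was produced by an operation, so it carries a grad_fn pointing at the function that created it (here an AddBackward0 object; the printed memory address varies between runs), whereas a user-created leaf tensor such as x has grad_fn None.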

z = y * y * 3
out = z.mean()
print(z, out)

tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)

# .requires_grad_(...) changes an existing tensor's requires_grad attribute in place; the flag defaults to False
a = torch.randn(2, 2)
a = (a * 3) / (a - 1)
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)

False
True
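
The final print shows the backward function now attached to b, something like <SumBackward0 object at 0x...> (the exact address varies per run).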

x1 = torch.ones(2, 2, requires_grad=True)
z1 = 3 * (x1 + 2) * (x1 + 2)
out = z1.mean()  # z1 = 3(x1+2)^2, out = mean(z1)
print("x1 = ", x1)
print("z1 = ", z1)

# out is a scalar, so out.backward() is equivalent to out.backward(torch.tensor(1.))
out.backward()
# x1.grad holds the gradient d(out)/dx1
print(x1.grad)

# Zero the gradient in place; x1.grad is all zeros afterwards (shown below)
x1.grad.data.zero_()

x1 =  tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
z1 = tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>)

tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])
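
The 4.5 entries follow from the chain rule: out = (1/4) * sum(z1) with z1 = 3(x1+2)^2, so d(out)/dx1 = (1/4) * 6 * (x1+2) = (3/2)(x1+2), which at x1 = 1 gives (3/2) * 3 = 4.5.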

tensor([[0., 0.],
        [0., 0.]])
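
The zeroing step matters because autograd accumulates into .grad rather than overwriting it: every backward() call adds its result to whatever is already stored there. A minimal sketch of this behavior (the tensor w is illustrative, not part of the original example):

import torch

w = torch.ones(2, 2, requires_grad=True)
for _ in range(2):
    loss = (w * w).sum()  # d(loss)/dw = 2w, i.e. 2 at w = 1
    loss.backward()       # adds the gradient into w.grad
print(w.grad)             # all entries are 4.0: 2.0 + 2.0 accumulated
w.grad.zero_()            # reset before any unrelated backward pass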

x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:  # .data.norm() is the tensor's L2 norm
    y = y * 2
print("y =", y)

gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print("grad =", x.grad)

y = tensor([1203.4269, 1136.8713,  854.0352], grad_fn=<MulBackward0>)
grad = tensor([2.0480e+02, 2.0480e+03, 2.0480e-01])
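
Because y is not a scalar, y.backward(gradients) does not build a full Jacobian; it computes the vector-Jacobian product x.grad = J^T v, where v is the gradients argument. Each pass through the loop multiplies y by 2, so y_i = 2^k * x_i and the Jacobian is diagonal with entries 2^k. In the run shown the factor is 2^11 = 2048, so grad = 2048 * (0.1, 1.0, 0.0001) = (204.8, 2048, 0.2048), exactly the printed values.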

print(x.requires_grad)
print((x ** 2).requires_grad)

# Wrapping computation in a torch.no_grad() block temporarily disables autograd tracking
with torch.no_grad():
    print((x ** 2).requires_grad)

print((x ** 2).requires_grad)

True
True
False
True
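
A typical use of torch.no_grad() is inference or evaluation, where recording the graph would only cost memory. A minimal sketch under that assumption (the names weights and inputs are illustrative):

import torch

weights = torch.randn(3, requires_grad=True)
inputs = torch.randn(3)

with torch.no_grad():
    pred = (weights * inputs).sum()  # no history is recorded inside the block

print(pred.requires_grad)  # False
print(pred.grad_fn)        # None: pred is detached from any computation graph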
