202 PyTorch Tutorial: Variable
A Variable in torch is used to build a computational graph. Unlike the static graphs of TensorFlow and Theano, however, Torch's graph is dynamic: there is no need for placeholders, and variables can be passed straight into the computation.
Dependencies
- torch
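To make "dynamic" concrete, here is a small sketch (not from the original tutorial; the loop condition is a made-up example): the graph is rebuilt on every forward pass, so ordinary Python control flow can depend on runtime values, which a static graph cannot express without special control-flow ops.

```python
import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([2.0]), requires_grad=True)

# The graph is built on the fly: how many times we square x
# depends on a runtime value of the data itself.
y = x
while y.data.norm() < 100:  # data-dependent loop
    y = y * y               # 2 -> 4 -> 16 -> 256, then stop

y.backward()
print(y)       # result of repeatedly squaring x
print(x.grad)  # gradient flows back through however many steps ran
```

With a start value of 2 the loop squares three times, so y = x^8 and the gradient is 8·x^7.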
import torch
from torch.autograd import Variable
tensor = torch.FloatTensor([[1,2],[3,4]]) # build a tensor
variable = Variable(tensor, requires_grad=True) # build a variable, usually for computing gradients
print(tensor) # [torch.FloatTensor of size 2x2]
print(variable) # [torch.FloatTensor of size 2x2]
tensor([[1., 2.],
[3., 4.]])
tensor([[1., 2.],
[3., 4.]], requires_grad=True)
So far, tensor and variable look the same.
However, the variable is part of the computational graph and participates in automatic gradient computation, while the plain tensor is not.
t_out = torch.mean(tensor*tensor) # mean of x^2
v_out = torch.mean(variable*variable) # mean of x^2
print(t_out)
print(v_out)
tensor(7.5000)
tensor(7.5000, grad_fn=&lt;MeanBackward0&gt;)
v_out.backward() # backpropagation from v_out
$$ v_{out} = \frac{1}{4} \sum_i variable_i^2 $$
The gradient w.r.t. the variable is
$$ \frac{d(v_{out})}{d(variable)} = \frac{1}{4} \cdot 2 \cdot variable = \frac{variable}{2} $$
Let's check the result PyTorch calculated for us:
variable.grad
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
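As a quick sanity check (a self-contained sketch that rebuilds the same variable as above), we can compare the autograd result against the analytic gradient variable/2:

```python
import torch
from torch.autograd import Variable

variable = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
v_out = torch.mean(variable * variable)  # mean of [1, 4, 9, 16] = 7.5
v_out.backward()

# analytic gradient: d(v_out)/d(variable) = variable / 2
assert torch.allclose(variable.grad, variable.data / 2)
print(variable.grad)
```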
variable # this is data in variable format
tensor([[1., 2.],
        [3., 4.]], requires_grad=True)
variable.data # this is data in tensor format
tensor([[1., 2.],
        [3., 4.]])
variable.data.numpy() # numpy format
array([[ 1., 2.],
[ 3., 4.]], dtype=float32)
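One caveat worth knowing (a hedged note, not from the original tutorial): the array returned by .data.numpy() shares memory with the tensor, so mutating one mutates the other, and in modern PyTorch the recommended spelling is .detach().numpy(). A short sketch:

```python
import numpy as np
import torch
from torch.autograd import Variable

variable = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)

arr = variable.data.numpy()   # shares memory with the underlying tensor
arr[0, 0] = 99.0
print(variable.data[0, 0])    # the tensor changed too

# variable.detach().numpy() is the modern, safer spelling of the same thing.
# The conversion back is also zero-copy:
t = torch.from_numpy(np.array([[1.0, 2.0]], dtype=np.float32))
print(t)
```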
Note that we called .backward() on v_out, yet it is variable whose .grad received new values. This happens because the line v_out = torch.mean(variable*variable) creates a new variable v_out and connects it to variable in the computation graph.
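Because variable stays connected in the graph, a second backward pass (through a freshly built graph) accumulates into variable.grad rather than overwriting it. A sketch of this behavior, including the usual reset with .grad.zero_():

```python
import torch
from torch.autograd import Variable

variable = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)

for _ in range(2):
    v_out = torch.mean(variable * variable)  # a new graph each iteration
    v_out.backward()                          # gradients add up in .grad

# two passes accumulated: 2 * (variable / 2) = variable
accumulated = variable.grad.clone()
print(accumulated)

variable.grad.zero_()  # reset before the next optimization step
```

This accumulation is why training loops call optimizer.zero_grad() (or zero the grads manually) before each backward pass.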
type(v_out)
torch.Tensor
type(v_out.data)
torch.Tensor
In PyTorch 0.4 and later, Variable has been merged into Tensor, so both calls report torch.Tensor rather than the old torch.autograd.variable.Variable and torch.FloatTensor.
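Since the Variable wrapper is deprecated, the whole example can be rewritten without it: tensors themselves carry requires_grad. A sketch of the modern equivalent:

```python
import torch

# requires_grad on the tensor replaces the old Variable wrapper
variable = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
v_out = (variable * variable).mean()
v_out.backward()

print(v_out)          # mean of [1, 4, 9, 16] -> 7.5
print(variable.grad)  # variable / 2
```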