_夜枫

PyTorch consists of four main packages:

1. torch: a general-purpose array library similar to NumPy; tensors can be cast to a CUDA type (torch.cuda.FloatTensor) for computation on the GPU.
2. torch.autograd: a package for building computational graphs and automatically computing gradients.
3. torch.nn: a neural network library with common layers and cost functions.
4. torch.optim: an optimization package with standard methods such as SGD.

1. Import the tools

```python
import torch                       # arrays on GPU
import torch.autograd as autograd  # build a computational graph
import torch.nn as nn              # neural net library
import torch.nn.functional as F    # most non-linearities are here
import torch.optim as optim        # optimization package
```

2. torch arrays replace numpy's ndarray -> they provide linear algebra with GPU support

```python
# 2 matrices of size 2x3 into a 3d tensor 2x2x3
d = [[[1., 2., 3.], [4., 5., 6.]], [[7., 8., 9.], [11., 12., 13.]]]
d = torch.Tensor(d)  # array from python list
print "shape of the tensor:", d.size()

# the first index is the depth
z = d[0] + d[1]
print "adding up the two matrices of the 3d tensor:", z
```

```
shape of the tensor: torch.Size([2, 2, 3])
adding up the two matrices of the 3d tensor:
  8  10  12
 15  17  19
[torch.FloatTensor of size 2x3]
```

```python
# a heavily used operation is reshaping of tensors using .view()
print d.view(2, -1)  # -1 makes torch infer the second dim
```

```
  1   2   3   4   5   6
  7   8   9  11  12  13
[torch.FloatTensor of size 2x6]
```
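The snippet above uses the old 0.x API and Python 2 print statements. As a sketch of the same operations under the current PyTorch API (assuming PyTorch ≥ 1.0):

```python
import torch

# 2 matrices of size 2x3 stacked into a 3d tensor of shape 2x2x3
d = torch.tensor([[[1., 2., 3.], [4., 5., 6.]],
                  [[7., 8., 9.], [11., 12., 13.]]])
print("shape of the tensor:", d.size())  # torch.Size([2, 2, 3])

# the first index is the depth: add the two 2x3 matrices
z = d[0] + d[1]
print("sum of the two matrices:\n", z)

# reshaping with .view(); -1 lets torch infer the second dim
print(d.view(2, -1))  # shape 2x6
```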

3. torch.autograd wraps tensors in Variables, the nodes of the computational graph ->

• Wrap a tensor with autograd.Variable() to create a node; access its value with x.data and its gradient with x.grad.
• Performing operations on Variables draws the edges of the graph.
```python
# d is a tensor, not a node; to create a node based on it:
x = autograd.Variable(d, requires_grad=True)
print "the node's data is the tensor:", x.data.size()
print "the node's gradient is empty at creation:", x.grad  # the grad is empty right now
```

```
the node's data is the tensor: torch.Size([2, 2, 3])
the node's gradient is empty at creation: None
```

```python
# do operations on the node to make a computational graph
y = x + 1
z = x + y
s = z.sum()
print s.creator
```

```
<torch.autograd._functions.reduce.Sum object at 0x7f1e59988790>
```

```python
# calculate gradients
s.backward()
print "the variable now has gradients:", x.grad
```

```
the variable now has gradients: Variable containing:
(0 ,.,.) =
  2  2  2
  2  2  2

(1 ,.,.) =
  2  2  2
  2  2  2
[torch.FloatTensor of size 2x2x3]
```
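Since PyTorch 0.4 the Variable wrapper has been merged into Tensor, and .creator became .grad_fn. A sketch of the same graph with the current API (assuming PyTorch ≥ 1.0); note the gradient is 2 everywhere because s = sum(x + (x + 1)) = sum(2x + 1):

```python
import torch

# requires_grad=True makes the tensor a graph node directly
x = torch.tensor([[[1., 2., 3.], [4., 5., 6.]],
                  [[7., 8., 9.], [11., 12., 13.]]], requires_grad=True)
print("the gradient is empty at creation:", x.grad)  # None

# operations build the graph; .grad_fn replaces the old .creator
y = x + 1
z = x + y
s = z.sum()
print("grad_fn of the sum:", s.grad_fn)

# calculate gradients: ds/dx = 2 for every element
s.backward()
print("the tensor now has gradients:\n", x.grad)
```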

4. torch.nn contains various NN layers (linear mappings of the rows of a tensor) + (nonlinearities) ->

```python
# linear transformation of a 2x5 matrix into a 2x3 matrix
linear_map = nn.Linear(5, 3)
print "using randomly initialized params:", linear_map.parameters

# data has 2 examples with 5 features and 3 targets
data = torch.randn(2, 5)  # training data
y = autograd.Variable(torch.randn(2, 3))  # target
# make a node
x = autograd.Variable(data, requires_grad=True)
# applying a transformation to a node creates a computational graph
a = linear_map(x)
z = F.relu(a)
o = F.softmax(z)
print "output of softmax as a probability distribution:", o.data.view(1, -1)

# loss function
loss_func = nn.MSELoss()  # instantiate loss function
L = loss_func(z, y)  # calculate MSE loss between output and target
print "Loss:", L
```

```
using randomly initialized params: <bound method Linear.parameters of Linear (5 -> 3)>
output of softmax as a probability distribution:
 0.2092  0.1979  0.5929  0.4343  0.3038  0.2619
[torch.FloatTensor of size 1x6]
Loss: Variable containing:
 2.9838
[torch.FloatTensor of size 1]
```
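Under the current API, softmax needs an explicit dim argument and the Variable wrapper is gone. A minimal modern sketch of the same forward pass and loss (the seed is my addition for reproducibility; the parameters are random, so the printed numbers will differ from the original):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

linear_map = nn.Linear(5, 3)  # linear transformation 5 -> 3
data = torch.randn(2, 5)      # 2 examples with 5 features
y = torch.randn(2, 3)         # 2 targets with 3 values

# applying the layer to a tensor builds the computational graph
a = linear_map(data)
z = F.relu(a)
o = F.softmax(z, dim=1)       # dim=1: each row becomes a probability distribution
print("output of softmax:", o)

loss_func = nn.MSELoss()      # instantiate loss function
L = loss_func(z, y)           # MSE loss between relu output and target
print("Loss:", L.item())
```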

• When defining a custom layer, two functions need to be implemented:
• The __init__ function must always call the parent's init first; all of the layer's parameters must then be defined here as class variables (self.x).
• The forward function passes the input through the layers and returns the output.
```python
class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()  # always call parent's init
        self.linear = nn.Linear(in_size, out_size)  # layer parameters

    def forward(self, vect):
        return F.log_softmax(self.linear(vect))  # apply log-softmax to the linear output
```
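A quick usage sketch of the classifier above with the current API (log_softmax now takes an explicit dim; the batch size 4 and sizes 10/2 are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()  # always call parent's init
        self.linear = nn.Linear(in_size, out_size)  # layer parameters

    def forward(self, vect):
        # log-probabilities over classes; dim=1 normalizes each row
        return F.log_softmax(self.linear(vect), dim=1)

model = Log_reg_classifier(10, 2)
out = model(torch.randn(4, 10))  # batch of 4 examples with 10 features
print(out.shape)                 # 4 rows of 2 log-probabilities
```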

5. torch.optim can also perform the optimization ->

```python
optimizer = optim.SGD(linear_map.parameters(), lr=1e-2)  # instantiate optimizer with model params + learning rate

# epoch loop: we run the following until convergence
optimizer.zero_grad()  # make gradients zero
L.backward(retain_variables=True)
optimizer.step()
print L
```

```
Variable containing:
 2.9838
[torch.FloatTensor of size 1]
```

```python
# define model
model = Log_reg_classifier(10, 2)
# define loss function
loss_func = nn.MSELoss()
# define optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-1)

# send data through the model in minibatches for 10 epochs
for epoch in range(10):
    for minibatch, target in data:
        model.zero_grad()  # pytorch accumulates gradients; make them zero for each minibatch
        # forward pass
        out = model(autograd.Variable(minibatch))
        # backward pass
        L = loss_func(out, target)  # calculate loss
        L.backward()  # calculate gradients
        optimizer.step()  # make an update step
```
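The loop above assumes `data` yields (minibatch, target) pairs. A runnable modern sketch of the same training loop using TensorDataset/DataLoader (the random toy dataset is purely illustrative; NLLLoss is swapped in for MSELoss here, since it is the standard pairing with a log_softmax output):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

class Log_reg_classifier(nn.Module):
    def __init__(self, in_size, out_size):
        super(Log_reg_classifier, self).__init__()
        self.linear = nn.Linear(in_size, out_size)

    def forward(self, vect):
        return F.log_softmax(self.linear(vect), dim=1)

torch.manual_seed(0)
# random toy data: 32 examples, 10 features, 2 classes
inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32,))
loader = DataLoader(TensorDataset(inputs, targets), batch_size=8)

model = Log_reg_classifier(10, 2)
loss_func = nn.NLLLoss()  # pairs with log_softmax to give cross-entropy
optimizer = optim.SGD(model.parameters(), lr=1e-1)

first = last = None
for epoch in range(10):
    for minibatch, target in loader:
        model.zero_grad()          # gradients accumulate; zero them per minibatch
        out = model(minibatch)     # forward pass
        L = loss_func(out, target) # calculate loss
        L.backward()               # backward pass: calculate gradients
        optimizer.step()           # make an update step
        if first is None:
            first = L.item()
        last = L.item()
print("first batch loss: %.4f  last batch loss: %.4f" % (first, last))
```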
