## PyTorch Notes: Getting Started, Part 1

仪山湖

```python
>>> from __future__ import print_function
>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.5555, 0.7301, 0.5655],
        [0.9998, 0.1754, 0.7808],
        [0.5512, 0.8162, 0.6148],
        [0.8618, 0.3293, 0.6236],
        [0.2787, 0.0943, 0.2074]])
```

```python
>>> x = torch.zeros(5, 3, dtype=torch.long)
```

```python
>>> x = torch.tensor([5.5, 3])
>>> print(x)
tensor([5.5000, 3.0000])
```

```python
>>> x = x.new_ones(5, 3, dtype=torch.double)
>>> print(x)
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
>>> y = torch.rand_like(x, dtype=torch.float)
>>> print(y)
tensor([[0.6934, 0.9637, 0.0594],
        [0.0863, 0.6638, 0.4728],
        [0.3416, 0.0892, 0.1761],
        [0.6831, 0.6404, 0.8307],
        [0.6254, 0.4180, 0.2174]])
```

```python
>>> print(x.size())
torch.Size([5, 3])
```

```python
>>> x = torch.rand(5, 3)
>>> y = torch.zeros(5, 3)
>>> print(x + y)
tensor([[0.8991, 0.9222, 0.2050],
        [0.2478, 0.7688, 0.4156],
        [0.4055, 0.9526, 0.2559],
        [0.9481, 0.8576, 0.4816],
        [0.0767, 0.3346, 0.0922]])
>>> print(torch.add(x, y))
tensor([[0.8991, 0.9222, 0.2050],
        [0.2478, 0.7688, 0.4156],
        [0.4055, 0.9526, 0.2559],
        [0.9481, 0.8576, 0.4816],
        [0.0767, 0.3346, 0.0922]])
>>> result = torch.empty(5, 3)
>>> torch.add(x, y, out=result)
tensor([[0.8991, 0.9222, 0.2050],
        [0.2478, 0.7688, 0.4156],
        [0.4055, 0.9526, 0.2559],
        [0.9481, 0.8576, 0.4816],
        [0.0767, 0.3346, 0.0922]])
>>> y.add_(x)  # in-place addition; since y is all zeros, every result equals x
tensor([[0.8991, 0.9222, 0.2050],
        [0.2478, 0.7688, 0.4156],
        [0.4055, 0.9526, 0.2559],
        [0.9481, 0.8576, 0.4816],
        [0.0767, 0.3346, 0.0922]])
```

```python
>>> print(y[:, 1])
tensor([0.9222, 0.7688, 0.9526, 0.8576, 0.3346])
```

The `view` function reshapes a tensor, much like NumPy's `reshape`:

```python
>>> x = torch.randn(4, 4)
>>> y = x.view(16)  # a 1-D tensor with 16 elements
>>> z = x.view(-1, 8)  # second dimension is 8, first inferred automatically: a 2x8 tensor
>>> print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
```

```python
>>> x = torch.randn(1)
>>> print(x)
tensor([0.8542])
>>> print(x.item())
0.8541867136955261
```

```python
>>> x = torch.rand(5, 3)
>>> x.numpy()
array([[0.9320856 , 0.473859  , 0.6787642 ],
       [0.14365482, 0.1112923 , 0.8280207 ],
       [0.4609589 , 0.51031697, 0.15313298],
       [0.18854082, 0.4548    , 0.49709243],
       [0.8351501 , 0.6160053 , 0.61391556]], dtype=float32)
```

```python
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
print(a)
print(b)
```
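The CPU tensor returned by `torch.from_numpy` shares memory with the source array, so mutating the NumPy array in place changes the tensor as well. A quick sketch:

```python
import numpy as np
import torch

a = np.ones(5)
b = torch.from_numpy(a)

# In-place NumPy update: the tensor sees the change because
# torch.from_numpy shares the underlying buffer with the array.
np.add(a, 1, out=a)
print(a)  # [2. 2. 2. 2. 2.]
print(b)  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
```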

```python
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # create the tensor directly on the GPU
    x = x.to(device)                       # or just use strings: .to("cuda")
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # .to can also change the dtype at the same time
```
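When no GPU is present, the block above is skipped entirely. A portable pattern (a sketch that also runs on CPU-only machines) picks the device once and moves everything to it:

```python
import torch

# Select CUDA when present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(5, 3).to(device)
y = torch.ones_like(x, device=device)  # created directly on the chosen device
z = x + y
print(z.device)
```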

```python
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    # Define the network structure
    def __init__(self):
        super(Net, self).__init__()
        # input is 1 channel, output is 6 channels, with a 3x3 convolutional kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b; 6*6 from the image dimension
        self.fc1 = nn.Linear(16 * 6 * 6, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    # Forward pass; the method must be named forward
    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

# Create a Net object
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight

# Declare a 1x1x32x32 4-D tensor as the network input
input = torch.randn(1, 1, 32, 32)
output = net(input)

target = torch.randn(10)
target = target.view(1, -1)  # make it the same shape as the output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
```

```python
net.zero_grad()     # zero the gradient buffers of all parameters
loss.backward()
```

Update the weights with SGD, using the rule:

`weight = weight - learning_rate * gradient`

This can be implemented with the following PyTorch code:

```python
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
```

```python
import torch.optim as optim

# create the optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in the training loop:
optimizer.zero_grad()   # zero the gradient buffers, since gradients accumulate
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()        # does the update
```
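Putting the pieces together, a full training iteration repeats the sequence above. The sketch below uses a tiny stand-in `nn.Linear` model for illustration (not the `Net` class defined earlier); note that `optimizer.zero_grad()` runs at the top of every iteration because gradients accumulate across `backward()` calls:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# A tiny stand-in model, loss, and optimizer for illustration.
model = nn.Linear(32, 10)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

input = torch.randn(1, 32)
target = torch.randn(1, 10)

for epoch in range(5):
    optimizer.zero_grad()            # clear accumulated gradients
    output = model(input)            # forward pass
    loss = criterion(output, target) # compute the loss
    loss.backward()                  # backpropagate
    optimizer.step()                 # apply the SGD update
    print(epoch, loss.item())
```

Each pass through the loop should drive the loss down, since the parameters move against the gradient at every step.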
