AutoDL Automatic Network Search: A PaddlePaddle Implementation

2020/04/20 15:40

Project Overview

This project uses models automatically discovered by Baidu Big Data Lab's (BDL) Hierarchical Neural Architecture Search (HiNAS) project, which designs networks via deep reinforcement learning. The system consists of two parts: an encoder for network architectures and an evaluator for network architectures.

Download and installation commands

## CPU installation command
pip install -f https://paddlepaddle.org.cn/pip/oschina/cpu paddlepaddle

## GPU installation command
pip install -f https://paddlepaddle.org.cn/pip/oschina/gpu paddlepaddle-gpu

The encoder, typically an RNN, encodes a network architecture; the evaluator then trains and evaluates the encoded architecture, obtaining metrics such as accuracy and model size, and feeds them back to the encoder. The encoder revises its output and encodes again, and the loop repeats. After a number of iterations, a finished model design emerges.

For performance reasons, the training data used during the search is usually a dataset of tens of thousands of images (e.g. CIFAR-10). Once the architecture search finishes, the model is retrained on a large-scale dataset (e.g. ImageNet) to further optimize its parameters. For the underlying principles, see the linked article "解读百度AutoDL" (an explainer on Baidu AutoDL).
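The encode-evaluate-feedback loop described above can be sketched as a toy search (purely illustrative, not the actual HiNAS code: the op vocabulary size, the hill-climbing "encoder", and the stand-in scoring function are all simplifying assumptions; real HiNAS trains an RNN controller with reinforcement learning and actually trains each candidate network):

```python
import random

NUM_OPS = 12   # size of the op vocabulary (assumption)
SEQ_LEN = 21   # length of a token sequence, matching the token list in the log below

def evaluate(tokens):
    """Stand-in for training + validation: returns a mock 'accuracy' in [0, 1]."""
    # Hypothetical scoring rule purely for illustration.
    return sum(tokens) / float(NUM_OPS * SEQ_LEN)

def propose(best_tokens):
    """Stand-in encoder: perturb the current best token sequence."""
    tokens = list(best_tokens)
    tokens[random.randrange(SEQ_LEN)] = random.randrange(NUM_OPS)
    return tokens

random.seed(0)
best = [random.randrange(NUM_OPS) for _ in range(SEQ_LEN)]
best_acc = evaluate(best)
for _ in range(200):                  # search iterations
    candidate = propose(best)         # encoder proposes an architecture
    acc = evaluate(candidate)         # evaluator trains/scores it
    if acc > best_acc:                # feedback updates the search state
        best, best_acc = candidate, acc
print(best_acc)
```

The token sequence found this way plays the same role as the `Token is ...` lines printed by the training scripts: each integer selects one op for one position in the network.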

This project trains and validates the searched model architectures on the CIFAR-10 dataset. The main directory structure is as follows:

|--root
|--|--build				# builds networks from different configurations
|--|--|--layers.py			# implementations of the various network layers
|--|--|--resnet_base.py			# architecture with residual connections
|--|--|--ops.py				# composes the layers from layers.py into ops
|--|--|--vgg_base.py			# architecture without residual connections
|--|--tokens				# model configurations stored in binary form
|--|--dataset				# the CIFAR dataset
|--|--model				# frozen models saved after training, usable for inference
|--|--test				# images to run predictions on
|--|--reader.py				# dataset reading code
|--|--train_hinas_res.py		# trains the networks with residual connections
|--|--train_hinas.py			# trains the networks without residual connections
|--|--nn_paddle.py			# training logic and model saving
|--|--infer.py				# predicts on images in test/; edit the image path in this file
In[6]
# Extract the CIFAR-10 dataset archive into dataset/cifar/
!tar xzf data/data9705/cifar-10-python.tar.gz -C dataset/cifar/
In[1]
# Install the libraries the program depends on
!pip install absl-py
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Looking in indexes: https://pypi.mirrors.ustc.edu.cn/simple/
Collecting absl-py
  Downloading https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/da/3f/9b0355080b81b15ba6a9ffcf1f5ea39e307a2778b2f2dc8694724e8abd5b/absl-py-0.7.1.tar.gz (99kB)
    100% |████████████████████████████████| 102kB 9.9MB/s ta 0:00:01
Requirement already satisfied: six in /opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages (from absl-py) (1.12.0)
Requirement already satisfied: enum34 in /opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages (from absl-py) (1.1.6)
Building wheels for collected packages: absl-py
  Building wheel for absl-py (setup.py) ... done
  Stored in directory: /home/aistudio/.cache/pip/wheels/cc/27/b8/80769636fbf30d2fddba4c6e149163c0a319ba2dfc73f6e660
Successfully built absl-py
Installing collected packages: absl-py
Successfully installed absl-py-0.7.1
 

This directory contains six image classification models, all discovered automatically by Baidu Big Data Lab's Hierarchical Neural Architecture Search (HiNAS) project; they reach 96.1% accuracy on the CIFAR-10 dataset. The six models fall into two groups: the first three have no skip links and are named HiNAS 0-2; the last three include skip links, which work like the shortcut connections in ResNet, and are named HiNAS 3-5.

 

Use train_hinas.py --model=model_id to train the HiNAS 0-2 models (no skip links), where model_id is 0, 1, or 2.

In[20]
!python train_hinas.py --model=0
learning rate: 0.100000 -> 0.000100, cosine annealing
epoch: 15
batch size: 128
L2 decay: 0.000400
Token is 7,7,2,5,2,2,8,8,2,3,2,10,8,2,9,11,9,6,4,4,10


sep_3x3 	-> shape (-1L, 64L, 32L, 32L)
sep_3x3 	-> shape (-1L, 64L, 32L, 32L)
conv_3x3 	-> shape (-1L, 128L, 16L, 16L)
============
conv_1x3_3x1 	-> shape (-1L, 128L, 16L, 16L)
conv_3x3 	-> shape (-1L, 128L, 16L, 16L)
conv_3x3 	-> shape (-1L, 128L, 16L, 16L)
maxpool_2x2 	-> shape (-1L, 256L, 8L, 8L)
============
maxpool_2x2 	-> shape (-1L, 256L, 8L, 8L)
conv_3x3 	-> shape (-1L, 256L, 8L, 8L)
dilated_2x2 	-> shape (-1L, 256L, 8L, 8L)
conv_3x3 	-> shape (-1L, 256L, 8L, 8L)
avgpool_2x2 	-> shape (-1L, 512L, 4L, 4L)
============
maxpool_2x2 	-> shape (-1L, 512L, 4L, 4L)
conv_3x3 	-> shape (-1L, 512L, 4L, 4L)
maxpool_3x3 	-> shape (-1L, 512L, 4L, 4L)
avgpool_3x3 	-> shape (-1L, 512L, 4L, 4L)
maxpool_3x3 	-> shape (-1L, 1024L, 2L, 2L)
============
sep_2x2 	-> shape (-1L, 1024L, 2L, 2L)
conv_1x2_2x1 	-> shape (-1L, 1024L, 2L, 2L)
conv_1x2_2x1 	-> shape (-1L, 1024L, 2L, 2L)
avgpool_2x2 	-> shape (-1L, 1024L, 2L, 2L)
W0809 11:16:19.514256   829 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W0809 11:16:19.518366   829 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 0, Step 0, Loss 2.482643, Acc 0.140625
Epoch 0, Step 20, Loss 2.482619, Acc 0.173828
Epoch 0, Step 40, Loss 2.200089, Acc 0.238281
Epoch 0, Step 60, Loss 2.087755, Acc 0.255469
Epoch 0, Step 80, Loss 2.011178, Acc 0.279687
Epoch 0, Step 100, Loss 1.957202, Acc 0.301953
Epoch 0, Step 120, Loss 1.931626, Acc 0.333203
Epoch 0, Step 140, Loss 1.856620, Acc 0.328516
Epoch 0, Step 160, Loss 1.811810, Acc 0.366797
Epoch 0, Step 180, Loss 1.827287, Acc 0.367188
Epoch 0, Step 200, Loss 1.816670, Acc 0.365625
Epoch 0, Step 220, Loss 1.768780, Acc 0.387500
Epoch 0, Step 240, Loss 1.767242, Acc 0.378516
Epoch 0, Step 260, Loss 1.731364, Acc 0.390234
Epoch 0, Step 280, Loss 1.708052, Acc 0.417188
Epoch 0, Step 300, Loss 1.709072, Acc 0.421484
Epoch 0, Step 320, Loss 1.632304, Acc 0.410156
Epoch 0, Step 340, Loss 1.638918, Acc 0.416016
Epoch 0, Step 360, Loss 1.555294, Acc 0.434766
Epoch 0, Step 380, Loss 1.529093, Acc 0.461719
Reading file cifar-10-batches-py/test_batch
Test with epoch 0, Loss 1.537156, Acc 0.445214
Best acc 0.445214
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 1, Step 0, Loss 1.557167, Acc 0.439205
Epoch 1, Step 20, Loss 1.497260, Acc 0.455469
Epoch 1, Step 40, Loss 1.411260, Acc 0.481641
Epoch 1, Step 60, Loss 1.394606, Acc 0.506641
Epoch 1, Step 80, Loss 1.401698, Acc 0.496875
Epoch 1, Step 100, Loss 1.397601, Acc 0.501953
Epoch 1, Step 120, Loss 1.382812, Acc 0.509375
Epoch 1, Step 140, Loss 1.349297, Acc 0.523438
Epoch 1, Step 160, Loss 1.380840, Acc 0.502734
Epoch 1, Step 180, Loss 1.303443, Acc 0.542188
Epoch 1, Step 200, Loss 1.280069, Acc 0.546094
Epoch 1, Step 220, Loss 1.315975, Acc 0.528516
Epoch 1, Step 240, Loss 1.268604, Acc 0.554688
Epoch 1, Step 260, Loss 1.274772, Acc 0.545313
Epoch 1, Step 280, Loss 1.176123, Acc 0.580859
Epoch 1, Step 300, Loss 1.226039, Acc 0.558594
Epoch 1, Step 320, Loss 1.226553, Acc 0.561328
Epoch 1, Step 340, Loss 1.189370, Acc 0.590625
Epoch 1, Step 360, Loss 1.226365, Acc 0.559766
Epoch 1, Step 380, Loss 1.207760, Acc 0.582031
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 2, Step 0, Loss 1.167169, Acc 0.594744
Epoch 2, Step 20, Loss 1.141616, Acc 0.598828
Epoch 2, Step 40, Loss 1.163119, Acc 0.596875
Epoch 2, Step 60, Loss 1.112664, Acc 0.614844
Epoch 2, Step 80, Loss 1.092213, Acc 0.612109
Epoch 2, Step 100, Loss 1.099594, Acc 0.598437
Epoch 2, Step 120, Loss 1.140377, Acc 0.598828
Epoch 2, Step 140, Loss 1.112193, Acc 0.609766
Epoch 2, Step 160, Loss 1.073125, Acc 0.625781
Epoch 2, Step 180, Loss 1.083796, Acc 0.614062
Epoch 2, Step 200, Loss 1.021422, Acc 0.638672
Epoch 2, Step 220, Loss 1.031961, Acc 0.625781
Epoch 2, Step 240, Loss 1.078565, Acc 0.620703
Epoch 2, Step 260, Loss 1.025523, Acc 0.654297
Epoch 2, Step 280, Loss 0.989794, Acc 0.657422
Epoch 2, Step 300, Loss 1.036911, Acc 0.641016
Epoch 2, Step 320, Loss 0.973976, Acc 0.664062
Epoch 2, Step 340, Loss 1.052846, Acc 0.639844
Epoch 2, Step 360, Loss 0.999093, Acc 0.660937
Epoch 2, Step 380, Loss 0.995234, Acc 0.655859
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 3, Step 0, Loss 1.013907, Acc 0.665625
Epoch 3, Step 20, Loss 1.013109, Acc 0.648438
Epoch 3, Step 40, Loss 0.969164, Acc 0.669922
Epoch 3, Step 60, Loss 0.976983, Acc 0.660547
Epoch 3, Step 80, Loss 0.947173, Acc 0.673828
Epoch 3, Step 100, Loss 0.928780, Acc 0.687891
Epoch 3, Step 120, Loss 0.925856, Acc 0.683203
Epoch 3, Step 140, Loss 0.966862, Acc 0.655469
Epoch 3, Step 160, Loss 0.928892, Acc 0.676172
Epoch 3, Step 180, Loss 0.957283, Acc 0.673047
Epoch 3, Step 200, Loss 0.899448, Acc 0.691016
Epoch 3, Step 220, Loss 0.947698, Acc 0.667188
Epoch 3, Step 240, Loss 0.921045, Acc 0.683203
Epoch 3, Step 260, Loss 0.878503, Acc 0.694531
Epoch 3, Step 280, Loss 0.897871, Acc 0.694141
Epoch 3, Step 300, Loss 0.882109, Acc 0.691797
Epoch 3, Step 320, Loss 0.921019, Acc 0.687109
Epoch 3, Step 340, Loss 0.866837, Acc 0.707422
Epoch 3, Step 360, Loss 0.879969, Acc 0.694922
Epoch 3, Step 380, Loss 0.869022, Acc 0.696875
Reading file cifar-10-batches-py/test_batch
Test with epoch 3, Loss 0.760786, Acc 0.740111
Best acc 0.740111
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 4, Step 0, Loss 0.887936, Acc 0.686080
Epoch 4, Step 20, Loss 0.862115, Acc 0.712891
Epoch 4, Step 40, Loss 0.880331, Acc 0.693359
Epoch 4, Step 60, Loss 0.904162, Acc 0.693750
Epoch 4, Step 80, Loss 0.874216, Acc 0.697656
Epoch 4, Step 100, Loss 0.839584, Acc 0.708594
Epoch 4, Step 120, Loss 0.869627, Acc 0.686719
Epoch 4, Step 140, Loss 0.851685, Acc 0.707812
Epoch 4, Step 160, Loss 0.827239, Acc 0.714453
Epoch 4, Step 180, Loss 0.849189, Acc 0.711328
Epoch 4, Step 200, Loss 0.855910, Acc 0.711328
Epoch 4, Step 220, Loss 0.807521, Acc 0.723047
Epoch 4, Step 240, Loss 0.857397, Acc 0.710156
Epoch 4, Step 260, Loss 0.814342, Acc 0.718359
Epoch 4, Step 280, Loss 0.828174, Acc 0.715625
Epoch 4, Step 300, Loss 0.787920, Acc 0.730078
Epoch 4, Step 320, Loss 0.806294, Acc 0.732031
Epoch 4, Step 340, Loss 0.837816, Acc 0.698047
Epoch 4, Step 360, Loss 0.755335, Acc 0.741406
Epoch 4, Step 380, Loss 0.789317, Acc 0.723047
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 5, Step 0, Loss 0.767572, Acc 0.737358
Epoch 5, Step 20, Loss 0.815318, Acc 0.715625
Epoch 5, Step 40, Loss 0.801015, Acc 0.731641
Epoch 5, Step 60, Loss 0.806025, Acc 0.721484
Epoch 5, Step 80, Loss 0.790205, Acc 0.725781
Epoch 5, Step 100, Loss 0.791391, Acc 0.730859
Epoch 5, Step 120, Loss 0.791687, Acc 0.723828
Epoch 5, Step 140, Loss 0.789758, Acc 0.727344
Epoch 5, Step 160, Loss 0.782047, Acc 0.727344
Epoch 5, Step 180, Loss 0.712768, Acc 0.769141
Epoch 5, Step 200, Loss 0.741269, Acc 0.742969
Epoch 5, Step 220, Loss 0.771898, Acc 0.734375
Epoch 5, Step 240, Loss 0.730375, Acc 0.750391
Epoch 5, Step 260, Loss 0.763598, Acc 0.747266
Epoch 5, Step 280, Loss 0.787794, Acc 0.730859
Epoch 5, Step 300, Loss 0.750475, Acc 0.741016
Epoch 5, Step 320, Loss 0.687740, Acc 0.761719
Epoch 5, Step 340, Loss 0.753891, Acc 0.745703
Epoch 5, Step 360, Loss 0.704342, Acc 0.749219
Epoch 5, Step 380, Loss 0.721551, Acc 0.743359
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 6, Step 0, Loss 0.735215, Acc 0.758665
Epoch 6, Step 20, Loss 0.729224, Acc 0.751172
Epoch 6, Step 40, Loss 0.751871, Acc 0.739453
Epoch 6, Step 60, Loss 0.705129, Acc 0.754687
Epoch 6, Step 80, Loss 0.702012, Acc 0.764844
Epoch 6, Step 100, Loss 0.704884, Acc 0.762891
Epoch 6, Step 120, Loss 0.671007, Acc 0.768750
Epoch 6, Step 140, Loss 0.738879, Acc 0.750000
Epoch 6, Step 160, Loss 0.697422, Acc 0.761719
Epoch 6, Step 180, Loss 0.719644, Acc 0.745313
Epoch 6, Step 200, Loss 0.712688, Acc 0.750781
Epoch 6, Step 220, Loss 0.719856, Acc 0.748047
Epoch 6, Step 240, Loss 0.684880, Acc 0.772656
Epoch 6, Step 260, Loss 0.731527, Acc 0.746875
Epoch 6, Step 280, Loss 0.689584, Acc 0.764453
Epoch 6, Step 300, Loss 0.680511, Acc 0.771875
Epoch 6, Step 320, Loss 0.722743, Acc 0.756641
Epoch 6, Step 340, Loss 0.665582, Acc 0.775781
Epoch 6, Step 360, Loss 0.673832, Acc 0.774219
Epoch 6, Step 380, Loss 0.679124, Acc 0.768750
Reading file cifar-10-batches-py/test_batch
Test with epoch 6, Loss 0.603901, Acc 0.793710
Best acc 0.793710
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 7, Step 0, Loss 0.727439, Acc 0.755114
Epoch 7, Step 20, Loss 0.653248, Acc 0.771875
Epoch 7, Step 40, Loss 0.644015, Acc 0.777344
Epoch 7, Step 60, Loss 0.638839, Acc 0.782031
Epoch 7, Step 80, Loss 0.655005, Acc 0.784375
Epoch 7, Step 100, Loss 0.685182, Acc 0.764063
Epoch 7, Step 120, Loss 0.638514, Acc 0.784766
Epoch 7, Step 140, Loss 0.678165, Acc 0.765625
Epoch 7, Step 160, Loss 0.653044, Acc 0.767187
Epoch 7, Step 180, Loss 0.696662, Acc 0.754297
Epoch 7, Step 200, Loss 0.601996, Acc 0.795313
Epoch 7, Step 220, Loss 0.627005, Acc 0.790625
Epoch 7, Step 240, Loss 0.658935, Acc 0.773438
Epoch 7, Step 260, Loss 0.682445, Acc 0.773438
Epoch 7, Step 280, Loss 0.619083, Acc 0.795313
Epoch 7, Step 300, Loss 0.586498, Acc 0.798828
Epoch 7, Step 320, Loss 0.689424, Acc 0.768359
Epoch 7, Step 340, Loss 0.627145, Acc 0.790625
Epoch 7, Step 360, Loss 0.603221, Acc 0.791797
Epoch 7, Step 380, Loss 0.637147, Acc 0.787891
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 8, Step 0, Loss 0.643534, Acc 0.784233
Epoch 8, Step 20, Loss 0.596453, Acc 0.794922
Epoch 8, Step 40, Loss 0.609944, Acc 0.796875
Epoch 8, Step 60, Loss 0.591742, Acc 0.804688
Epoch 8, Step 80, Loss 0.587857, Acc 0.800781
Epoch 8, Step 100, Loss 0.615226, Acc 0.790625
Epoch 8, Step 120, Loss 0.603721, Acc 0.793359
Epoch 8, Step 140, Loss 0.609922, Acc 0.785937
Epoch 8, Step 160, Loss 0.632628, Acc 0.789844
Epoch 8, Step 180, Loss 0.620946, Acc 0.789453
Epoch 8, Step 200, Loss 0.573387, Acc 0.805078
Epoch 8, Step 220, Loss 0.581413, Acc 0.808203
Epoch 8, Step 240, Loss 0.609865, Acc 0.789453
Epoch 8, Step 260, Loss 0.575533, Acc 0.800391
Epoch 8, Step 280, Loss 0.594663, Acc 0.797266
Epoch 8, Step 300, Loss 0.614344, Acc 0.793359
Epoch 8, Step 320, Loss 0.591097, Acc 0.803516
Epoch 8, Step 340, Loss 0.581555, Acc 0.794531
Epoch 8, Step 360, Loss 0.589037, Acc 0.800000
Epoch 8, Step 380, Loss 0.588127, Acc 0.791797
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 9, Step 0, Loss 0.520190, Acc 0.830256
Epoch 9, Step 20, Loss 0.547372, Acc 0.809766
Epoch 9, Step 40, Loss 0.593483, Acc 0.801953
Epoch 9, Step 60, Loss 0.542595, Acc 0.816016
Epoch 9, Step 80, Loss 0.547427, Acc 0.813672
Epoch 9, Step 100, Loss 0.540795, Acc 0.814453
Epoch 9, Step 120, Loss 0.576289, Acc 0.800000
Epoch 9, Step 140, Loss 0.511006, Acc 0.828906
Epoch 9, Step 160, Loss 0.541928, Acc 0.818750
Epoch 9, Step 180, Loss 0.517280, Acc 0.823828
Epoch 9, Step 200, Loss 0.555504, Acc 0.817578
Epoch 9, Step 220, Loss 0.551557, Acc 0.808594
Epoch 9, Step 240, Loss 0.520490, Acc 0.819922
Epoch 9, Step 260, Loss 0.563156, Acc 0.812109
Epoch 9, Step 280, Loss 0.542105, Acc 0.813281
Epoch 9, Step 300, Loss 0.520226, Acc 0.818750
Epoch 9, Step 320, Loss 0.513270, Acc 0.817187
Epoch 9, Step 340, Loss 0.538814, Acc 0.820703
Epoch 9, Step 360, Loss 0.511943, Acc 0.818750
Epoch 9, Step 380, Loss 0.513404, Acc 0.817969
Reading file cifar-10-batches-py/test_batch
Test with epoch 9, Loss 0.431279, Acc 0.855123
Best acc 0.855123
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 10, Step 0, Loss 0.506999, Acc 0.823011
Epoch 10, Step 20, Loss 0.490679, Acc 0.835938
Epoch 10, Step 40, Loss 0.514969, Acc 0.823047
Epoch 10, Step 60, Loss 0.478587, Acc 0.835156
Epoch 10, Step 80, Loss 0.514832, Acc 0.826953
Epoch 10, Step 100, Loss 0.500985, Acc 0.826172
Epoch 10, Step 120, Loss 0.508618, Acc 0.824609
Epoch 10, Step 140, Loss 0.476442, Acc 0.842969
Epoch 10, Step 160, Loss 0.483726, Acc 0.827734
Epoch 10, Step 180, Loss 0.522007, Acc 0.826172
Epoch 10, Step 200, Loss 0.499444, Acc 0.828125
Epoch 10, Step 220, Loss 0.484623, Acc 0.837109
Epoch 10, Step 240, Loss 0.471074, Acc 0.835547
Epoch 10, Step 260, Loss 0.503162, Acc 0.826563
Epoch 10, Step 280, Loss 0.471645, Acc 0.846094
Epoch 10, Step 300, Loss 0.450638, Acc 0.846484
Epoch 10, Step 320, Loss 0.439053, Acc 0.851953
Epoch 10, Step 340, Loss 0.482043, Acc 0.833984
Epoch 10, Step 360, Loss 0.484722, Acc 0.827344
Epoch 10, Step 380, Loss 0.494961, Acc 0.823047
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 11, Step 0, Loss 0.446823, Acc 0.852841
Epoch 11, Step 20, Loss 0.456949, Acc 0.840625
Epoch 11, Step 40, Loss 0.445955, Acc 0.842578
Epoch 11, Step 60, Loss 0.484455, Acc 0.828906
Epoch 11, Step 80, Loss 0.430885, Acc 0.857031
Epoch 11, Step 100, Loss 0.449789, Acc 0.846484
Epoch 11, Step 120, Loss 0.463586, Acc 0.841406
Epoch 11, Step 140, Loss 0.443231, Acc 0.844531
Epoch 11, Step 160, Loss 0.470603, Acc 0.842578
Epoch 11, Step 180, Loss 0.450399, Acc 0.846484
Epoch 11, Step 200, Loss 0.475154, Acc 0.837891
Epoch 11, Step 220, Loss 0.407791, Acc 0.858594
Epoch 11, Step 240, Loss 0.448022, Acc 0.847656
Epoch 11, Step 260, Loss 0.444676, Acc 0.845312
Epoch 11, Step 280, Loss 0.451156, Acc 0.843359
Epoch 11, Step 300, Loss 0.458283, Acc 0.840234
Epoch 11, Step 320, Loss 0.437371, Acc 0.851172
Epoch 11, Step 340, Loss 0.412547, Acc 0.862109
Epoch 11, Step 360, Loss 0.407413, Acc 0.859766
Epoch 11, Step 380, Loss 0.422333, Acc 0.852344
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 12, Step 0, Loss 0.440087, Acc 0.849290
Epoch 12, Step 20, Loss 0.414874, Acc 0.856250
Epoch 12, Step 40, Loss 0.425026, Acc 0.852344
Epoch 12, Step 60, Loss 0.400751, Acc 0.862500
Epoch 12, Step 80, Loss 0.421488, Acc 0.855078
Epoch 12, Step 100, Loss 0.393819, Acc 0.865234
Epoch 12, Step 120, Loss 0.396206, Acc 0.866797
Epoch 12, Step 140, Loss 0.434174, Acc 0.847656
Epoch 12, Step 160, Loss 0.416802, Acc 0.852734
Epoch 12, Step 180, Loss 0.390549, Acc 0.863281
Epoch 12, Step 200, Loss 0.424427, Acc 0.865625
Epoch 12, Step 220, Loss 0.390423, Acc 0.869141
Epoch 12, Step 240, Loss 0.407088, Acc 0.861328
Epoch 12, Step 260, Loss 0.384563, Acc 0.870703
Epoch 12, Step 280, Loss 0.397529, Acc 0.865234
Epoch 12, Step 300, Loss 0.380041, Acc 0.870313
Epoch 12, Step 320, Loss 0.381130, Acc 0.870313
Epoch 12, Step 340, Loss 0.395238, Acc 0.866797
Epoch 12, Step 360, Loss 0.394560, Acc 0.863281
Epoch 12, Step 380, Loss 0.367769, Acc 0.881250
Reading file cifar-10-batches-py/test_batch
Test with epoch 12, Loss 0.324720, Acc 0.889735
Best acc 0.889735
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 13, Step 0, Loss 0.367707, Acc 0.877131
Epoch 13, Step 20, Loss 0.395224, Acc 0.860547
Epoch 13, Step 40, Loss 0.390620, Acc 0.860547
Epoch 13, Step 60, Loss 0.368550, Acc 0.871094
Epoch 13, Step 80, Loss 0.384478, Acc 0.869922
Epoch 13, Step 100, Loss 0.377339, Acc 0.871094
Epoch 13, Step 120, Loss 0.362927, Acc 0.876172
Epoch 13, Step 140, Loss 0.389359, Acc 0.871484
Epoch 13, Step 160, Loss 0.373148, Acc 0.868750
Epoch 13, Step 180, Loss 0.374814, Acc 0.871094
Epoch 13, Step 200, Loss 0.383265, Acc 0.867188
Epoch 13, Step 220, Loss 0.394122, Acc 0.865234
Epoch 13, Step 240, Loss 0.375874, Acc 0.864453
Epoch 13, Step 260, Loss 0.344002, Acc 0.881250
Epoch 13, Step 280, Loss 0.347626, Acc 0.878906
Epoch 13, Step 300, Loss 0.344454, Acc 0.877344
Epoch 13, Step 320, Loss 0.371733, Acc 0.872266
Epoch 13, Step 340, Loss 0.333492, Acc 0.891406
Epoch 13, Step 360, Loss 0.346991, Acc 0.876172
Epoch 13, Step 380, Loss 0.366274, Acc 0.871094
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 14, Step 0, Loss 0.367616, Acc 0.876278
Epoch 14, Step 20, Loss 0.395016, Acc 0.866406
Epoch 14, Step 40, Loss 0.365654, Acc 0.875000
Epoch 14, Step 60, Loss 0.363018, Acc 0.878125
Epoch 14, Step 80, Loss 0.368992, Acc 0.878516
Epoch 14, Step 100, Loss 0.351710, Acc 0.875000
Epoch 14, Step 120, Loss 0.345059, Acc 0.883984
Epoch 14, Step 140, Loss 0.358738, Acc 0.875391
Epoch 14, Step 160, Loss 0.353570, Acc 0.881641
Epoch 14, Step 180, Loss 0.331230, Acc 0.882422
Epoch 14, Step 200, Loss 0.343643, Acc 0.884375
Epoch 14, Step 220, Loss 0.354200, Acc 0.884375
Epoch 14, Step 240, Loss 0.372267, Acc 0.871484
Epoch 14, Step 260, Loss 0.347190, Acc 0.881641
Epoch 14, Step 280, Loss 0.364246, Acc 0.871875
Epoch 14, Step 300, Loss 0.333497, Acc 0.892187
Epoch 14, Step 320, Loss 0.364476, Acc 0.873438
Epoch 14, Step 340, Loss 0.354703, Acc 0.873047
Epoch 14, Step 360, Loss 0.375735, Acc 0.869141
Epoch 14, Step 380, Loss 0.361984, Acc 0.875781
Reading file cifar-10-batches-py/test_batch
Test with epoch 14, Loss 0.311640, Acc 0.894877
Best acc 0.894877
 

Use train_hinas_res.py --model=model_id to train the HiNAS 3-5 models (with skip links). Here model_id is offset by 3: the values 0, 1, and 2 correspond to HiNAS 3, 4, and 5 respectively.

In[3]
!python train_hinas_res.py --model=0
learning rate: 0.100000 -> 0.000100, cosine annealing
epoch: 200
batch size: 128
L2 decay: 0.000400
Token is 1,10,2,3,2,10,5,1,1,11,8,2,2,1,5,2,9,3,0,9,2,2,4,3,2,2,1,2,9,5


conv_1x1 	-> shape (-1L, 64L, 32L, 32L)
avgpool_2x2 	-> shape (-1L, 64L, 32L, 32L)
------------
conv_1x1 	-> shape (-1L, 64L, 32L, 32L)
dilated_2x2 	-> shape (-1L, 64L, 32L, 32L)
------------
conv_1x1 	-> shape (-1L, 64L, 32L, 32L)
avgpool_2x2 	-> shape (-1L, 64L, 32L, 32L)
------------
conv_1x1 	-> shape (-1L, 64L, 32L, 32L)
conv_2x2 	-> shape (-1L, 64L, 32L, 32L)
------------
conv_1x1 	-> shape (-1L, 64L, 32L, 32L)
avgpool_3x3 	-> shape (-1L, 64L, 32L, 32L)
============
conv_1x1 	-> shape (-1L, 128L, 16L, 16L)
conv_3x3 	-> shape (-1L, 128L, 16L, 16L)
------------
conv_1x1 	-> shape (-1L, 128L, 16L, 16L)
conv_2x2 	-> shape (-1L, 128L, 16L, 16L)
------------
conv_1x1 	-> shape (-1L, 128L, 16L, 16L)
conv_3x3 	-> shape (-1L, 128L, 16L, 16L)
------------
conv_1x1 	-> shape (-1L, 128L, 16L, 16L)
dilated_2x2 	-> shape (-1L, 128L, 16L, 16L)
------------
conv_1x1 	-> shape (-1L, 128L, 16L, 16L)
maxpool_3x3 	-> shape (-1L, 128L, 16L, 16L)
============
conv_1x1 	-> shape (-1L, 256L, 8L, 8L)
conv_3x3 	-> shape (-1L, 256L, 8L, 8L)
------------
conv_1x1 	-> shape (-1L, 256L, 8L, 8L)
dilated_2x2 	-> shape (-1L, 256L, 8L, 8L)
------------
conv_1x1 	-> shape (-1L, 256L, 8L, 8L)
conv_3x3 	-> shape (-1L, 256L, 8L, 8L)
------------
conv_1x1 	-> shape (-1L, 256L, 8L, 8L)
conv_3x3 	-> shape (-1L, 256L, 8L, 8L)
------------
conv_1x1 	-> shape (-1L, 256L, 8L, 8L)
conv_1x3_3x1 	-> shape (-1L, 256L, 8L, 8L)
============
W0722 15:16:21.008813   133 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W0722 15:16:21.012676   133 device_context.cc:267] device: 0, cuDNN Version: 7.3.
Reading file cifar-10-batches-py/data_batch_1
Reading file cifar-10-batches-py/data_batch_2
Reading file cifar-10-batches-py/data_batch_3
Reading file cifar-10-batches-py/data_batch_4
Reading file cifar-10-batches-py/data_batch_5
Epoch 0, Step 0, Loss 2.322999, Acc 0.117188
Epoch 0, Step 20, Loss 2.117482, Acc 0.226562
Epoch 0, Step 40, Loss 1.840883, Acc 0.306250
Epoch 0, Step 60, Loss 1.738423, Acc 0.362109
Epoch 0, Step 80, Loss 1.645888, Acc 0.395312
Epoch 0, Step 100, Loss 1.585704, Acc 0.421094
Epoch 0, Step 120, Loss 1.575156, Acc 0.425781
Epoch 0, Step 140, Loss 1.480760, Acc 0.458203
Epoch 0, Step 160, Loss 1.440411, Acc 0.489453
Epoch 0, Step 180, Loss 1.412331, Acc 0.496875
Epoch 0, Step 200, Loss 1.388680, Acc 0.493750
Epoch 0, Step 220, Loss 1.342945, Acc 0.515625
Epoch 0, Step 240, Loss 1.303435, Acc 0.526172
Epoch 0, Step 260, Loss 1.274386, Acc 0.549609
Epoch 0, Step 280, Loss 1.257554, Acc 0.557031
Epoch 0, Step 300, Loss 1.232829, Acc 0.552734
Epoch 0, Step 320, Loss 1.183153, Acc 0.579297
Epoch 0, Step 340, Loss 1.189723, Acc 0.578125
Epoch 0, Step 360, Loss 1.183041, Acc 0.573438
Epoch 0, Step 380, Loss 1.137142, Acc 0.594922
^C
Traceback (most recent call last):
  File "train_hinas_res.py", line 44, in <module>
    app.run(main)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "train_hinas_res.py", line 40, in main
    model.run()
  File "/home/aistudio/nn_paddle.py", line 139, in run
    feed_order=['pixel', 'label'])
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/contrib/trainer.py", line 405, in train
    feed_order)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/contrib/trainer.py", line 483, in _train_by_executor
    self._train_by_any_executor(event_handler, exe, num_epochs, reader)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/contrib/trainer.py", line 520, in _train_by_any_executor
    event_handler(EndEpochEvent(epoch_id))
  File "/home/aistudio/nn_paddle.py", line 120, in event_handler
    reader=test_reader, feed_order=['pixel', 'label'])
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/contrib/trainer.py", line 418, in test
    self.train_func_outputs)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/fluid/contrib/trainer.py", line 532, in _test_by_executor
    for data in reader():
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/site-packages/paddle/batch.py", line 35, in batch_reader
    for instance in r:
  File "/home/aistudio/reader.py", line 109, in reader
    each_item.name for each_item in f if sub_name in each_item.name
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/tarfile.py", line 2510, in next
    tarinfo = self.tarfile.next()
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/tarfile.py", line 2350, in next
    self.fileobj.seek(self.offset - 1)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/gzip.py", line 443, in seek
    self.read(1024)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/gzip.py", line 268, in read
    self._read(readsize)
  File "/opt/conda/envs/python27-paddle120-env/lib/python2.7/gzip.py", line 319, in _read
    uncompress = self.decompress.decompress(buf)
KeyboardInterrupt
In[27]
!python infer.py
(1, 3, 32, 32)
('label_index:', 1)
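The first output line shows that infer.py feeds the network a single image batch shaped (1, 3, 32, 32). A sketch of the kind of preprocessing that produces that shape (hypothetical: the actual scaling and normalization in infer.py may differ, and `preprocess` is a name introduced here for illustration):

```python
import numpy as np

def preprocess(image_hwc):
    """Convert an HWC uint8 image into the NCHW float batch the model expects."""
    img = image_hwc.astype("float32") / 255.0   # scale to [0, 1] (assumption)
    img = img.transpose(2, 0, 1)                # HWC -> CHW
    return img[np.newaxis, ...]                 # add batch dim -> (1, 3, 32, 32)

# A dummy 32x32 RGB image stands in for a real file from the test/ directory:
dummy = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 32, 32)
```

The second output line, `label_index`, is simply the argmax over the model's 10 class scores for that batch.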
 

In addition, both train_hinas.py and train_hinas_res.py support the following parameters:

Initialization options:

  • random_flip_left_right: randomly flip images horizontally (Default: True)
  • random_flip_up_down: randomly flip images vertically (Default: False)
  • cutout: randomly occlude part of each image (Default: True)
  • standardize_image: standardize each pixel of the image (Default: True)
  • pad_and_cut_image: randomly pad the image, then crop back to the original size (Default: True)
  • shuffle_image: shuffle the order of input images during training (Default: True)
  • lr_max: learning rate at the start of training (Default: 0.1)
  • lr_min: learning rate at the end of training (Default: 0.0001)
  • batch_size: training batch size (Default: 128)
  • num_epochs: total number of training epochs (Default: 200)
  • weight_decay: L2 regularization strength during training (Default: 0.0004)
  • momentum: momentum coefficient of the momentum optimizer (Default: 0.9)
  • dropout_rate: dropout rate of the dropout layers (Default: 0.5)
  • bn_decay: decay/momentum coefficient (i.e. moving average decay) of the batch norm layers (Default: 0.9)
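The lr_max/lr_min pair drives the cosine-annealing schedule announced in the training log ("learning rate: 0.100000 -> 0.000100, cosine annealing"). A minimal sketch of that schedule (my own formulation; whether nn_paddle.py anneals per epoch or per step is an assumption):

```python
import math

def cosine_annealed_lr(epoch, num_epochs=200, lr_max=0.1, lr_min=0.0001):
    """Cosine-anneal the learning rate from lr_max down to lr_min."""
    t = epoch / float(num_epochs - 1)   # fraction of training completed, in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

print(cosine_annealed_lr(0))    # starts at lr_max (~0.1)
print(cosine_annealed_lr(100))  # roughly halfway down the cosine curve
print(cosine_annealed_lr(199))  # ends at lr_min (~0.0001)
```

Compared with a step or exponential decay, the cosine curve keeps the learning rate high for longer early on and flattens out near lr_min at the end.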

Click the link to try this project hands-on in AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/122279


>> Visit the official PaddlePaddle website to learn more
