
TensorFlow Getting Started Notes

陆幽轩
Published 2017/06/27 11:50


How TensorFlow Works

Background: switching back and forth between Python and C++ on every operation incurs significant overhead.

To do efficient numerical computing in Python, we typically use libraries like NumPy that do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language. Unfortunately, there can still be a lot of overhead from switching back to Python every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where there can be a high cost to transferring data.

Solution: use Python to define all of the operations as a graph, then run the whole graph outside Python in one go.

TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch. (Source: https://www.tensorflow.org/get_started/mnist/pros)

The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section of Getting Started With TensorFlow for more detail.
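
To make the division of labor concrete, here is a minimal sketch (assuming TensorFlow 1.x, the API used throughout these notes): Python only describes the graph, and the actual arithmetic runs when a session executes it.

import tensorflow as tf

# Build the graph in Python: nothing is computed yet, we only
# describe the operations and their dependencies.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b  # a graph node, not the number 6.0

# The computation itself happens in the C++ backend when the
# session runs the requested node.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0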

Notes on Commonly Used TensorFlow APIs

Session

TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.

Summary: the class that opens a session with the high-performance C++ computation backend.

In the getting-started tutorial, we use:

import tensorflow as tf
sess = tf.InteractiveSession()
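
tf.InteractiveSession() installs itself as the default session, which is convenient in a shell or notebook because Tensor.eval() then works without naming the session; in a standalone script the plain tf.Session is more common. A small sketch of both styles (TensorFlow 1.x assumed):

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
s = tf.reduce_sum(x)

# Style 1: explicit session, the usual pattern in scripts.
with tf.Session() as sess:
    print(sess.run(s))  # 6.0

# Style 2: InteractiveSession becomes the default session,
# so eval() can be called directly on tensors.
sess = tf.InteractiveSession()
print(s.eval())  # 6.0
sess.close()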

Tensor

The Chinese term is 张量; for a deeper explanation see the Zhihu discussion "什么是张量" (what is a tensor). In practice you can think of it as a matrix (more generally, a multi-dimensional array); it is the basic unit of data in TensorFlow.

Look at the following code:

import tensorflow as tf
# What is a Tensor? Put two graph nodes and two plain ints in a list.
ta = [0, 0, 0, 0]
ta[0] = tf.placeholder(tf.float32, [None, 784])
ta[1] = tf.zeros([5, 5], tf.float32)
print(ta)

It prints the following:

/usr/bin/python2.7 /home/maoyiwei/桌面/Tensorflow/playground/play.py
[<tf.Tensor 'Placeholder:0' shape=(?, 784) dtype=float32>, <tf.Tensor 'zeros:0' shape=(5, 5) dtype=float32>, 0, 0]

Placeholder

A placeholder can be thought of as a Tensor that will hold input data (e.g. training data) supplied at run time. Its signature is:

tf.placeholder(dtype, shape=None, name=None)

x = tf.placeholder(tf.float32, shape=(1024, 1024))
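
A placeholder contains no data until the graph is run; its value has to be supplied through feed_dict at that point. A minimal sketch (TensorFlow 1.x assumed; the names x, doubled and data are just for illustration):

import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=(None, 3))  # batch size left open
doubled = x * 2.0

with tf.Session() as sess:
    data = np.ones((2, 3), dtype=np.float32)
    # The placeholder is filled in at run time via feed_dict.
    print(sess.run(doubled, feed_dict={x: data}))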

Variables

The name is self-explanatory. In TensorFlow it means the following:

A Variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, one generally has the model parameters be Variables.

W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))

Variables must be initialized before use, as follows:

Before Variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each Variable. This can be done for all Variables at once:

sess.run(tf.global_variables_initializer())
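
Once initialized, a Variable holds a concrete value inside the session and can be read back or updated with ops such as tf.assign. A small sketch (TensorFlow 1.x assumed):

import tensorflow as tf

b = tf.Variable(tf.zeros([10]))
increment = tf.assign(b, b + 1.0)  # an op that updates the Variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(b))  # ten zeros
    sess.run(increment)
    print(sess.run(b))  # ten ones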

tf.matmul(x,W)

Matrix multiplication (x * W). See the documentation for details:

matmul(a, b) returns:

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.
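
As a quick sanity check of the formula on two small matrices (TensorFlow 1.x assumed):

import tensorflow as tf

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[5.0, 6.0],
                 [7.0, 8.0]])
product = tf.matmul(a, b)  # output[i, j] = sum_k a[i, k] * b[k, j]

with tf.Session() as sess:
    print(sess.run(product))  # [[19. 22.] [43. 50.]]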

tf.reduce_XXX

The documentation explains these as follows:

Computes the XXXX of elements across dimensions of a tensor.

The main parameters are:

reduce_mean(
    input_tensor, # the tensor to reduce
    axis=None, # the dimension(s) to reduce over
    # keep_dims=False,
    # name=None,
    # reduction_indices=None
)
  • input_tensor: The tensor to reduce. Should have numeric type.
  • axis: The dimensions to reduce. If None (the default), reduces all dimensions.

For example:

# 'x' is [[1., 2.]
#         [3., 4.]]

tf.reduce_mean(x) ==> 2.5 # no axis given: average over all elements
tf.reduce_mean(x, 0) ==> [2.,  3.] # axis=0: average along the first dimension, i.e. per column
tf.reduce_mean(x, 1) ==> [1.5,  3.5] # axis=1: average along the second dimension, i.e. per row
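
The same example can be run end to end; a minimal sketch (TensorFlow 1.x assumed):

import tensorflow as tf

x = tf.constant([[1., 2.],
                 [3., 4.]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(x)))     # 2.5
    print(sess.run(tf.reduce_mean(x, 0)))  # [2. 3.]
    print(sess.run(tf.reduce_mean(x, 1)))  # [1.5 3.5]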

Commonly used reduction APIs:

  • reduce_mean: mean
  • reduce_max: maximum
  • reduce_min: minimum
  • reduce_sum: sum

Why the name "reduce"? A Stack Overflow answer explains it as:

Reduce is just a name for a family of operations which are used to create a single object from the sequence of objects, repeatedly applying the same binary operation.

tf.nn

Activation functions, convolution functions, and so on. The comment in the source code reads:

"""## Activation Functions

The activation ops provide different types of nonlinearities for use in neural
networks.  These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`,
`softplus`, and `softsign`), continuous but not everywhere differentiable
functions (`relu`, `relu6`, and `relu_x`), and random regularization
(`dropout`).
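
As a quick illustration, applying two of these nonlinearities to a small tensor (a sketch, TensorFlow 1.x assumed):

import tensorflow as tf

logits = tf.constant([[-1.0, 0.0, 2.0]])

relu_out = tf.nn.relu(logits)        # clamps negative values to 0
softmax_out = tf.nn.softmax(logits)  # normalizes each row into probabilities

with tf.Session() as sess:
    print(sess.run(relu_out))     # [[0. 0. 2.]]
    print(sess.run(softmax_out))  # each row sums to 1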

tf.train

Training methods (optimizers that minimize a loss function). It is easier to understand by reading the code directly.

import tensorflow as tf
# load the MNIST data set (as in the TensorFlow MNIST tutorial)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# define a math model
print('make model')
# placeholder for the input images (your data)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# train it
print('train it')
# placeholder for the true labels
y_ = tf.placeholder(tf.float32, [None, 10])
# cross-entropy loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
# gradient descent with learning rate 0.55
train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    # fetch a random batch of 100 training examples
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Each key in feed_dict corresponds to one of the placeholders defined above.

Remember these two lines:

train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)

sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

A further note on sess.run: you can pass it either a training op (as above) or a Tensor. The evaluation code below passes a Tensor, in which case sess.run returns the Tensor's computed value.

# Evaluating our Model
print('start to evaluate')
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Here tf.cast converts the boolean predictions into floats.
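
To see concretely what those last two lines compute, here is the same idea on a hand-made prediction vector (a sketch, TensorFlow 1.x assumed):

import tensorflow as tf

correct = tf.constant([True, False, True, True])
as_float = tf.cast(correct, tf.float32)  # [1. 0. 1. 1.]
accuracy = tf.reduce_mean(as_float)      # fraction of correct predictions

with tf.Session() as sess:
    print(sess.run(accuracy))  # 0.75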

Extras

Another fun site worth playing with: Tinker With a Neural Network in Your Browser (the TensorFlow Playground).
