
TensorFlow Getting Started Notes

陆幽轩
Published on 2017/06/27 11:50

How TensorFlow Works

Background: switching back and forth between Python and C++ on every operation incurs significant overhead.

To do efficient numerical computing in Python, we typically use libraries like NumPy that do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language. Unfortunately, there can still be a lot of overhead from switching back to Python every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where there can be a high cost to transferring data.

Solution: use Python to define all the operations as a graph, then run the entire graph outside Python in one go.

TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch. (Source: https://www.tensorflow.org/get_started/mnist/pros)

The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section of Getting Started With TensorFlow for more detail.
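The "describe the computation first, run it later" idea can be sketched in a few lines of plain Python. This is a toy illustration, not TensorFlow's actual implementation: nodes only record which operation connects which inputs, and nothing is evaluated until run() is called.

```python
# Toy deferred-computation graph: building a Node computes nothing;
# evaluation happens only when run() walks the graph.

class Node:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        if self.op == "input":
            return feed[self]                      # value supplied at run time
        vals = [n.run(feed) for n in self.inputs]  # evaluate dependencies first
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]

x = Node("input", [])
y = Node("input", [])
z = Node("add", [Node("mul", [x, x]), y])  # z = x*x + y: described, not computed

print(z.run({x: 3, y: 4}))  # → 13
```

Feeding values at run() time mirrors how TensorFlow feeds placeholders when a session executes the graph.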

Common TensorFlow APIs

Session

TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.

In short: the class used to hold a conversation with the high-performance C++ compute backend.

In the beginner tutorial, we use:

import tensorflow as tf
sess = tf.InteractiveSession()

Tensor

In Chinese, 张量. For a deeper explanation, see the Zhihu discussion 「什么是张量」 ("What is a tensor?"). In practice you can think of it as a matrix (more generally, a multidimensional array); it is the basic unit of data in TensorFlow.

Consider the following code:

import tensorflow as tf
# What is a Tensor? Put two tensors into an ordinary Python list.
ta = [0, 0, 0, 0]
ta[0] = tf.placeholder(tf.float32, [None, 784])
ta[1] = tf.zeros([5, 5], tf.float32)
print(ta)

It prints:

/usr/bin/python2.7 /home/maoyiwei/桌面/Tensorflow/playground/play.py
[<tf.Tensor 'Placeholder:0' shape=(?, 784) dtype=float32>, <tf.Tensor 'zeros:0' shape=(5, 5) dtype=float32>, 0, 0]

Placeholder

It can be understood as a Tensor that holds input (training) data, supplied at run time. Its signature:

placeholder(dtype, shape=None, name=None)

x = tf.placeholder(tf.float32, shape=(1024, 1024))

Variables

Just what the name says. In TensorFlow it means:

A Variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, one generally has the model parameters be Variables.

W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))

Variables must be initialized before use:

Before Variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each Variable. This can be done for all Variables at once:

sess.run(tf.global_variables_initializer())

tf.matmul(x,W)

Matrix multiplication (x*W). See the documentation for details:

matmul(a, b) returns:

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.
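The formula can be checked directly in NumPy, which is used here purely as an illustration (the shapes and values are made up):

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5., 6.], [7., 8.]])

# The documented formula, written out element by element:
# output[i, j] = sum_k (a[i, k] * b[k, j])
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        out[i, j] = sum(a[i, k] * b[k, j] for k in range(2))

print(out)                     # [[19. 22.] [43. 50.]]
print(np.allclose(out, a @ b)) # True: the formula is ordinary matrix product
```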

tf.reduce_XXX

The documentation describes this family as:

Computes the XXX of elements across dimensions of a tensor.

The main parameters:

reduce_mean(
    input_tensor, # the tensor to reduce
    axis=None,    # which dimensions to reduce
    # keep_dims=False,
    # name=None,
    # reduction_indices=None
)
  • input_tensor: The tensor to reduce. Should have numeric type.
  • axis: The dimensions to reduce. If None (the default), reduces all dimensions.

For example:

# 'x' is [[1., 2.]
#         [3., 4.]]

tf.reduce_mean(x) ==> 2.5            # no axis given: average over all elements
tf.reduce_mean(x, 0) ==> [2.,  3.]   # axis 0: average down each column
tf.reduce_mean(x, 1) ==> [1.5,  3.5] # axis 1: average across each row

The commonly used members of this family:

  • reduce_mean: mean
  • reduce_max: maximum
  • reduce_min: minimum
  • reduce_sum: sum
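The reduction semantics above can be verified with NumPy, whose axis convention TensorFlow mirrors (an illustrative check, not TensorFlow code):

```python
import numpy as np

x = np.array([[1., 2.],
              [3., 4.]])

print(np.mean(x))          # 2.5: no axis, reduce over all elements
print(np.mean(x, axis=0))  # [2. 3.]: one mean per column
print(np.mean(x, axis=1))  # [1.5 3.5]: one mean per row
print(np.sum(x), np.max(x), np.min(x))  # 10.0 4.0 1.0
```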

Why the name "reduce"? An answer on Stack Overflow explains:

Reduce is just a name for a family of operations which are used to create a single object from the sequence of objects, repeatedly applying the same binary operation.

tf.nn

Activation functions, convolution functions, and the like. The comments in the source code say:

"""## Activation Functions

The activation ops provide different types of nonlinearities for use in neural
networks.  These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`,
`softplus`, and `softsign`), continuous but not everywhere differentiable
functions (`relu`, `relu6`, and `relu_x`), and random regularization
(`dropout`).
"""

tf.train

Training methods (minimizing a loss function). This is easiest to understand directly from code.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# define a math model
print('make model')
# placeholder (your input data)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# train it
print('train it')
# placeholder (the target labels)
y_ = tf.placeholder(tf.float32, [None, 10])
# cross-entropy loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
# gradient descent
train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    # grab a random batch of 100 training examples
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Each key in feed_dict corresponds to a placeholder.

Remember these two lines:

train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)

sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
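To see what these two lines actually compute, here is a plain-NumPy sketch of the same model and training loop, using a random stand-in batch rather than real MNIST. It illustrates the math only and is not TensorFlow code:

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.normal(size=(100, 784))            # stand-in batch (not real MNIST data)
y_ = np.eye(10)[rng.integers(0, 10, 100)]   # random one-hot labels
W  = np.zeros((784, 10))
b  = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

for _ in range(50):
    y = softmax(x @ W + b)                           # y = softmax(xW + b)
    loss = -np.mean(np.sum(y_ * np.log(y), axis=1))  # same cross-entropy as above
    grad = (y - y_) / len(x)                         # d(loss)/d(logits)
    W -= 0.55 * (x.T @ grad)                         # GradientDescentOptimizer(0.55)
    b -= 0.55 * grad.sum(axis=0)

print(loss)  # drops from log(10) ≈ 2.30 toward 0 as the loop runs
```

TensorFlow derives the gradient step automatically from the graph; here it is written out by hand for the softmax cross-entropy case.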

A further note on sess.run: you can pass it either a tf.train op or a tensor. The evaluation code below passes in a tensor, in which case sess.run returns the tensor's computed value.

# Evaluating our Model
print('start to evaluate')
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Here tf.cast performs type conversion.
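The evaluation steps (argmax comparison, cast, mean) can be mirrored in NumPy with a tiny hand-made example (the values are illustrative, not real model output):

```python
import numpy as np

y  = np.array([[0.1, 0.7, 0.2],    # predicted probabilities
               [0.8, 0.1, 0.1],
               [0.3, 0.3, 0.4]])
y_ = np.array([[0, 1, 0],          # one-hot ground truth
               [1, 0, 0],
               [0, 1, 0]])

correct = np.argmax(y, axis=1) == np.argmax(y_, axis=1)  # like tf.equal(tf.argmax(...))
accuracy = correct.astype(np.float32).mean()             # like tf.cast + tf.reduce_mean
print(accuracy)  # → 0.6666667, two of three predictions match
```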

Addendum

Also, a very fun site: Tinker With a Neural Network in Your Browser (the TensorFlow Playground).

© Copyright belongs to the author
