陆幽轩

# TensorFlow Introductory Notes

## How TensorFlow Works

To do efficient numerical computing in Python, we typically use libraries like NumPy that do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language. Unfortunately, there can still be a lot of overhead from switching back to Python for every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where there can be a high cost to transferring data.

TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch. (See https://www.tensorflow.org/get_started/mnist/pros.)

The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section of Getting Started With TensorFlow for more detail.

## Notes on Common TensorFlow APIs

### Session

TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.

```python
import tensorflow as tf
sess = tf.InteractiveSession()
```

### Tensor

```python
import tensorflow as tf
# What is a Tensor?
ta = [0, 0, 0, 0]
ta[0] = tf.placeholder(tf.float32, [None, 784])
ta[1] = tf.zeros([5, 5], tf.float32)
print(ta)
```

```
/usr/bin/python2.7 /home/maoyiwei/桌面/Tensorflow/playground/play.py
[<tf.Tensor 'Placeholder:0' shape=(?, 784) dtype=float32>, <tf.Tensor 'zeros:0' shape=(5, 5) dtype=float32>, 0, 0]
```

### Placeholder

`placeholder(dtype, shape=None, name=None)`

```python
x = tf.placeholder(tf.float32, shape=(1024, 1024))
```

### Variables

A `Variable` is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, one generally has the model parameters be `Variable`s.

```python
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
```

A `Variable` must be initialized before use; the steps are as follows:

Before `Variable`s can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each `Variable`. This can be done for all `Variables` at once:

```python
sess.run(tf.global_variables_initializer())
```

### tf.matmul(x,W)

`matmul(a, b)` returns:

A `Tensor` of the same type as `a` and `b` where each inner-most matrix is the product of the corresponding matrices in `a` and `b`, e.g. if all transpose or adjoint attributes are `False`:

`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, for all indices `i, j`.

### tf.reduce_XXX

Computes the XXX (mean, max, min, sum, ...) of elements across dimensions of a tensor.

```python
reduce_mean(
    input_tensor,           # the tensor to reduce
    axis=None,              # the dimensions to reduce
    # keep_dims=False,
    # name=None,
    # reduction_indices=None
)
```
• input_tensor: The tensor to reduce. Should have numeric type.
• axis: The dimensions to reduce. If `None` (the default), reduces all dimensions.

```python
# 'x' is [[1., 2.],
#         [3., 4.]]

tf.reduce_mean(x)     # ==> 2.5        no axis given: average over all elements
tf.reduce_mean(x, 0)  # ==> [2., 3.]   axis 0: average down each column
tf.reduce_mean(x, 1)  # ==> [1.5, 3.5] axis 1: average across each row
```

• reduce_mean — mean
• reduce_max — maximum
• reduce_min — minimum
• reduce_sum — sum

Reduce is just a name for a family of operations which are used to create a single object from the sequence of objects, repeatedly applying the same binary operation.

### tf.nn

``````"""## Activation Functions

The activation ops provide different types of nonlinearities for use in neural
networks.  These include smooth nonlinearities (`sigmoid`, `tanh`, `elu`,
`softplus`, and `softsign`), continuous but not everywhere differentiable
functions (`relu`, `relu6`, and `relu_x`), and random regularization
(`dropout`).
``````

### tf.train

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# define the math model
print('make model')
# placeholder for the input data
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# train it
print('train it')
# placeholder for the true labels
y_ = tf.placeholder(tf.float32, [None, 10])
# compute the cross-entropy
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
# use gradient descent
train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    # grab a random batch of 100 training examples
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```

```python
train_step = tf.train.GradientDescentOptimizer(0.55).minimize(cross_entropy)

sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```

```python
# Evaluating our model
print('start to evaluate')
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
```
