Differences Between TensorFlow 1.x and TensorFlow 2.0

Historical Background of TF1.x

TensorFlow 1.x is primarily a framework for working with static computation graphs. Nodes in the computation graph are Tensors, which hold n-dimensional arrays when the graph is run; edges in the graph represent functions that operate on those Tensors when the graph is run to actually perform useful computation.
Before TensorFlow 2.0, we had to split our work into two phases:

  1. Build a computational graph that describes the computation you want to perform. This phase does not actually perform any computation; it only builds a symbolic representation of the computation. It typically defines one or more "placeholder" objects representing the inputs of the graph.
  2. Run the computational graph, possibly many times. Each time the graph is run (e.g., for one gradient-descent step), you specify which parts of the graph you want to compute and pass a "feed_dict" dictionary that supplies concrete values for any placeholders in the graph (a minimal sketch follows this list).
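
Here is that sketch, assuming the TF1.x API; the multiply-add computation is purely illustrative:

import tensorflow as tf  # assumes TensorFlow 1.x

# Phase 1: build the graph. Nothing is computed here.
a = tf.placeholder(tf.float32)   # symbolic inputs
b = tf.placeholder(tf.float32)
c = a * b + 1.0                  # symbolic result

# Phase 2: run the graph, feeding concrete values for the placeholders.
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))  # prints 7.0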

The New Paradigm in TF2.0

With TensorFlow 2.0 we can simply adopt a more "Pythonic" functional style that directly mirrors PyTorch and NumPy operations, instead of the two-step paradigm built around computation graphs; among other things, this makes TF code far easier to debug.

The main difference between the TF 1.x and 2.0 approaches is that the 2.0 approach does not use tf.Session, tf.run, placeholder, or feed_dict.
The differences between the two versions, and how to convert code between them, are documented in detail in TensorFlow's official migration guide.

A Simple Example: the flatten Function

TF1.x

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Device on which to place Tensors; '/gpu:0' would also work if a GPU is available.
device = '/cpu:0'

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Clear the current TensorFlow graph.
    tf.reset_default_graph()

    # Stage I: Define the TensorFlow graph describing our computation.
    # In this case the computation is trivial: we just want to flatten
    # a Tensor using the flatten function defined above.

    # Our computation will have a single input, x. We don't know its
    # value yet, so we define a placeholder which will hold the value
    # when the graph is run. We then pass this placeholder Tensor to
    # the flatten function; this gives us a new Tensor which will hold
    # a flattened view of x when the graph is run. The tf.device
    # context manager tells TensorFlow whether to place these Tensors
    # on CPU or GPU.
    with tf.device(device):
        x = tf.placeholder(tf.float32)
        x_flat = flatten(x)

    # At this point we have just built the graph describing our computation,
    # but we haven't actually computed anything yet. If we print x and x_flat
    # we see that they don't hold any data; they are just TensorFlow Tensors
    # representing values that will be computed when the graph is run.
    print('x: ', type(x), x)
    print('x_flat: ', type(x_flat), x_flat)
    print()

    # We need to use a TensorFlow Session object to actually run the graph.
    with tf.Session() as sess:
        # Construct concrete values of the input data x using numpy
        x_np = np.arange(24).reshape((2, 3, 4))
        print('x_np:\n', x_np, '\n')

        # Run our computational graph to compute a concrete output value.
        # The first argument to sess.run tells TensorFlow which Tensor
        # we want it to compute the value of; the feed_dict specifies
        # values to plug into all placeholder nodes in the graph. The
        # resulting value of x_flat is returned from sess.run as a
        # numpy array.
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np, '\n')

        # We can reuse the same graph to perform the same computation
        # with different input data
        x_np = np.arange(12).reshape((2, 3, 2))
        print('x_np:\n', x_np, '\n')
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np)
test_flatten()

TF2.0

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)
    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Construct concrete values of the input data x using numpy
    x_np = np.arange(24).reshape((2, 3, 4))
    print('x_np:\n', x_np, '\n')
    # Compute a concrete output value.
    x_flat_np = flatten(x_np)
    print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()

A comparison table mapping TF1.x functions to their TF2.0 equivalents is available on Baidu Netdisk (extraction code: 5du8).
TF1.x vs. TF2.0

  • Static graphs vs. dynamic graphs
    • TF1.x's Session, feed_dict, and placeholder are removed in 2.0
    • TF1.x's make_one_shot_iterator / make_initializable_iterator are removed in 2.0 (see the iterator sketch after this list)
    • TF2.0 relies on eager mode, @tf.function, and AutoGraph
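
A hedged sketch of the iterator change (the three-element dataset is just illustrative):

import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# TensorFlow 1.x: pull values through an explicit iterator and a Session.
# iterator = dataset.make_one_shot_iterator()
# next_elem = iterator.get_next()
# with tf.Session() as sess:
#     for _ in range(3):
#         print(sess.run(next_elem))

# TensorFlow 2.0: a dataset is directly iterable in eager mode.
for elem in dataset:
    print(elem.numpy())  # 1, 2, 3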

Eager mode vs. Session

# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
  • @tf.function bridges eager mode and graph (Session-style) execution
    • good performance (runs as a graph)
    • can be exported to and imported from a SavedModel
  • E.g., AutoGraph converts Python control flow into graph ops (see the sketch after this list):
    • for/while -> tf.while_loop
    • if -> tf.cond
    • for _ in dataset -> dataset.reduce
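
A minimal sketch of @tf.function with AutoGraph, assuming TensorFlow 2.x (the function name and loop body are illustrative):

import tensorflow as tf

@tf.function  # traced into a graph on the first call
def sum_even(n):
    total = tf.constant(0)
    for i in tf.range(n):   # AutoGraph lowers this to tf.while_loop
        if i % 2 == 0:      # AutoGraph lowers this to tf.cond
            total += i
    return total

print(sum_even(tf.constant(10)).numpy())  # 0 + 2 + 4 + 6 + 8 = 20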

API Changes

  • TensorFlow now has roughly 2,000 API endpoints, about 500 of them in the root namespace
  • Some namespaces were created but do not contain all of their related APIs
    • e.g., tf.round is not under tf.math
  • Some APIs sit in the root namespace but are rarely used, e.g. tf.zeta
  • Some are frequently used yet do not sit in the root namespace, e.g. tf.manip
  • Some namespaces are nested too deeply
    • tf.saved_model.signature_constants.CLASSIFY_INPUTS -> tf.saved_model.CLASSIFY_INPUTS
  • Duplicate APIs are consolidated (see the sketch after this list)
    • tf.layers -> tf.keras.layers
    • tf.losses -> tf.keras.losses
    • tf.metrics -> tf.keras.metrics
  • Some APIs carry prefixes that should instead be subnamespaces
    • tf.string_strip -> tf.strings.strip
  • Reorganized namespaces
    • tf.debugging, tf.dtypes, tf.io, tf.quantization, etc.
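
As one concrete instance of the duplicate-API cleanup, a tf.layers call migrates to tf.keras.layers roughly like this (a hedged sketch; the layer size and input shape are arbitrary):

import tensorflow as tf  # assumes TensorFlow 2.x

# TensorFlow 1.x style (removed in 2.0):
# x = tf.placeholder(tf.float32, (None, 64))
# h = tf.layers.dense(x, 32, activation=tf.nn.relu)

# TensorFlow 2.0 style: the same layer now lives under tf.keras.layers.
layer = tf.keras.layers.Dense(32, activation='relu')
h = layer(tf.random.normal((8, 64)))  # eager call; h has shape (8, 32)
print(h.shape)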