TensorFlow 1.x Image Classification

API Overview

  • Fully connected network in TF 1.x
    • placeholder, tf.layers.dense, tf.train.AdamOptimizer
    • tf.losses.sparse_softmax_cross_entropy
    • tf.global_variables_initializer (initialize variables)
    • feed_dict (feed data into the graph)
  • Dataset
    Creating a Dataset iterator (both of these APIs are deprecated in TF 2.0):
    • Dataset.make_one_shot_iterator
    • Dataset.make_initializable_iterator
  • Custom estimator
    • tf.feature_column.input_layer
    • tf.estimator.EstimatorSpec
    • tf.metrics.accuracy
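The contract behind the two iterator APIs listed above can be illustrated with a plain-Python analogy (this is not TF code, just a sketch of the idea): a one-shot iterator can be traversed only once, while an initializable iterator can be re-initialized and traversed again.

```python
data = [0, 1, 2, 3]

# "one-shot": like a plain Python iterator, exhausted after a single pass
one_shot = iter(data)
first_pass = list(one_shot)   # [0, 1, 2, 3]
second_pass = list(one_shot)  # [] -- cannot be reused

# "initializable": re-running the initializer gives a fresh pass over the data,
# analogous to sess.run(iterator.initializer) in TF 1.x
def make_initializable():
    return iter(data)

it = make_initializable()
pass_a = list(it)
it = make_initializable()     # re-initialize
pass_b = list(it)
```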

Image Classification

Import packages

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf

from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)

Output:

1.13.1
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.3
numpy 1.18.1
pandas 1.0.1
sklearn 0.22.1
tensorflow 1.13.1
tensorflow._api.v1.keras 2.2.4-tf

Data preprocessing:

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)

Output:

(5000, 28, 28) (5000,)
(55000, 28, 28) (55000,)
(10000, 28, 28) (10000,)

print(np.max(x_train), np.min(x_train))  # output: 255 0

Standardization:

from sklearn.preprocessing import StandardScaler
# x_train: [None, 28, 28] -> [None, 784]
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)

print(np.max(x_train_scaled), np.min(x_train_scaled))  # output: 2.0231433 -0.8105136
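Because the scaler is fit on data reshaped to `(-1, 1)`, every pixel is treated as a sample of a single feature, so the transform amounts to subtracting the global pixel mean and dividing by the global pixel standard deviation. A minimal numpy sketch of the same computation, using random data in place of Fashion-MNIST:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randint(0, 256, size=(100, 28, 28)).astype(np.float32)

# global mean/std over all pixels, as StandardScaler sees them after reshape(-1, 1)
mean, std = x.mean(), x.std()
x_scaled = ((x - mean) / std).reshape(-1, 28 * 28)

# the scaled data has (approximately) zero mean and unit variance
```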

Build the computation graph:

# Network structure
hidden_units = [100, 100] # two hidden layers with 100 units each
class_num = 10 # 10 classes

x = tf.placeholder(tf.float32, [None, 28 * 28]) # float placeholder; None is the batch size
y = tf.placeholder(tf.int64, [None])

# Temporary variable threaded through the loop
input_for_next_layer = x
# Hidden layers of the network
for hidden_unit in hidden_units:
    input_for_next_layer = tf.layers.dense(input_for_next_layer, # input
                                           hidden_unit, # number of units
                                           activation=tf.nn.relu) # activation
# Output layer
logits = tf.layers.dense(input_for_next_layer, class_num)
# Loss function
# logits = last_hidden_output * W(logits) -> softmax -> prob
# 1. logits -> softmax -> prob
# 2. labels -> one_hot
# 3. calculate cross entropy
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

# Accuracy
prediction = tf.argmax(logits, 1) # index of the largest logit
correct_prediction = tf.equal(prediction, y) # vector of 0/1; 1 means correct
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # cast to float, then average

# One training step
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

print(x)
print(logits)

Output:

Tensor("Placeholder:0", shape=(?, 784), dtype=float32)
Tensor("dense_2/BiasAdd:0", shape=(?, 10), dtype=float32)
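To make the loss and accuracy definitions above concrete, here is a small numpy sketch of what `tf.losses.sparse_softmax_cross_entropy` and the `argmax`-based accuracy compute (the toy logits and labels are made up for illustration):

```python
import numpy as np

def sparse_softmax_cross_entropy(labels, logits):
    # numerically stable log-softmax per row
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # negative log-probability of the true class, averaged over the batch
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([0, 1])

loss = sparse_softmax_cross_entropy(labels, logits)

# accuracy exactly as in the graph: argmax, elementwise compare, mean
prediction = logits.argmax(axis=1)
accuracy = (prediction == labels).astype(np.float32).mean()
```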

Define the training loop:

init = tf.global_variables_initializer()
batch_size = 20
epochs = 10
train_steps_per_epoch = x_train.shape[0] // batch_size
valid_steps = x_valid.shape[0] // batch_size

def eval_with_sess(sess, x, y, accuracy, images, labels, batch_size):
    eval_steps = images.shape[0] // batch_size
    eval_accuracies = []
    for step in range(eval_steps):
        batch_data = images[step * batch_size : (step+1) * batch_size]
        batch_label = labels[step * batch_size : (step+1) * batch_size]
        accuracy_val = sess.run(accuracy,
                               feed_dict = {
                                   x: batch_data,
                                   y: batch_label
                               })
        eval_accuracies.append(accuracy_val)
    return np.mean(eval_accuracies)

# Open a session
with tf.Session() as sess:
    # initialize variables
    sess.run(init)
    for epoch in range(epochs):
        for step in range(train_steps_per_epoch):
            # slice out this step's samples and labels
            batch_data = x_train_scaled[
                step * batch_size:(step+1) * batch_size]
            batch_label = y_train[
                step * batch_size:(step+1) * batch_size]
            # run one training step on the batch
            loss_val, accuracy_val, _ = sess.run(
                [loss, accuracy, train_op],
                feed_dict = {
                    x: batch_data,
                    y: batch_label
                })
            # print training progress
            print('\r[Train] epoch: %d, step: %d, loss: %3.5f, accuracy: %2.2f' %
                  (epoch, step, loss_val, accuracy_val), end="")
        valid_accuracy = eval_with_sess(sess, x, y, accuracy,
                                        x_valid_scaled, y_valid,
                                        batch_size)
        print("\t[Valid] acc: %2.2f" % (valid_accuracy))

Output:

[Train] epoch: 0, step: 2749, loss: 0.28534, accuracy: 0.85 [Valid] acc: 0.86
[Train] epoch: 1, step: 2749, loss: 0.16660, accuracy: 0.90 [Valid] acc: 0.87
[Train] epoch: 2, step: 2749, loss: 0.13731, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 3, step: 2749, loss: 0.12373, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 4, step: 2749, loss: 0.12829, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 5, step: 2749, loss: 0.11939, accuracy: 1.00 [Valid] acc: 0.88
[Train] epoch: 6, step: 2749, loss: 0.10637, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 7, step: 2749, loss: 0.11134, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 8, step: 2749, loss: 0.09228, accuracy: 0.95 [Valid] acc: 0.89
[Train] epoch: 9, step: 2749, loss: 0.12499, accuracy: 0.95 [Valid] acc: 0.88
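A note on the numbers in the log above: with 55,000 training samples and a batch size of 20 there are 2,750 steps per epoch, so the last printed step index is 2749. And because all evaluation batches are equal-sized, averaging per-batch accuracies as `eval_with_sess` does gives the same result as the overall accuracy. A quick sketch (the per-sample correctness flags are hypothetical, standing in for model predictions):

```python
import numpy as np

batch_size = 20
train_steps_per_epoch = 55000 // batch_size   # 2750 steps, indices 0..2749

# hypothetical per-sample correctness flags for a 5000-sample validation set
rng = np.random.RandomState(42)
correct = (rng.rand(5000) > 0.12).astype(np.float32)

steps = len(correct) // batch_size
batch_accs = [correct[s * batch_size:(s + 1) * batch_size].mean()
              for s in range(steps)]

# equal-sized batches: mean of batch accuracies == overall accuracy
```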

This work is published under a CC license; reposts must credit the author and link to the original article.