
Tensorflow / Keras: model accuracy is always 0.5 during training when the input size differs from the first official tutorial


I'm a beginner with deep learning and keras/tensorflow. I followed the first tutorial on tensorflow.org: basic classification on Fashion MNIST.

In that case the input data is 60000 28x28 images, and the model is this:

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

Compiled with:

model.compile(optimizer=tf.train.AdamOptimizer(), 
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
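
For context, the tutorial then trains and evaluates the model with calls roughly like the following (the exact code is not quoted in the question, and the epoch count of 5 is an assumption):

model.fit(train_images, train_labels, epochs=5)

test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)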

At the end of training the model has the following accuracy:

10000/10000 [==============================] - 0s 21us/step
Test accuracy: 0.8769

It's ok. Now I'm trying to replicate this model with another set of data. The new input is a dataset downloaded from Kaggle.

The dataset contains images of dogs and cats of different sizes, so I wrote a simple script that loads the images, resizes them to 28x28 pixels and converts them to numpy arrays.

This is the code that does that:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.models import load_model
from PIL import Image

import os

# Helper libraries
import numpy as np

# base path dataset
base_path = './dataset/'
training_path = base_path + "training_set/"
test_path = base_path + "test_set/"

# target size for the resized images
size = 28, 28

# containers for the training and test sets
train_images = []
train_labels = []
test_images = []
test_labels = []

classes = ['dogs', 'cats']

# Walk through the subfolders in the path and convert the images to numpy arrays
def from_files_to_nparray(path):
    images = []
    labels = []
    for subfolder in os.listdir(path):
        if subfolder == '.DS_Store':
            continue

        for image_name in os.listdir(path + subfolder):
            if not image_name.endswith('.jpg'):
                continue

            img = Image.open(path + subfolder + "/" + image_name).convert("L").resize(size) # convert to grayscale and resize
            npimage = np.asarray(img)

            images.append(npimage)
            labels.append(classes.index(subfolder))

            img.close()

    # convert to numpy arrays
    images = np.asarray(images)
    labels = np.asarray(labels)

    # Normalize to [0, 1]
    images = images / 255.0 
    return (images, labels)

(train_images, train_labels) = from_files_to_nparray(training_path)
(test_images, test_labels) = from_files_to_nparray(test_path)

At the end I have these shapes:

Train images shape   :  (8000, 128, 128)
Labels images shape  :  (8000,)
Test images shape    :  (2000, 128, 128)
Test images shape    :  (2000,)

Training the same model, but with the last Dense layer made of 2 neurons (a sketch of this modified model is shown after the log below), I get this result, which should be ok:

Train images shape   :  (8000, 28, 28)
Labels images shape  :  (8000,)
Test images shape    :  (2000, 28, 28)
Test images shape    :  (2000,)


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 258       
=================================================================
Total params: 100,738
Trainable params: 100,738
Non-trainable params: 0
_________________________________________________________________
None

Epoch 1/5
2018-07-27 15:25:51.283117: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 1s 66us/step - loss: 0.6924 - acc: 0.5466
Epoch 2/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6679 - acc: 0.5822
Epoch 3/5
8000/8000 [==============================] - 0s 41us/step - loss: 0.6593 - acc: 0.6048
Epoch 4/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6545 - acc: 0.6134
Epoch 5/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6559 - acc: 0.6039
2000/2000 [==============================] - 0s 33us/step

Test accuracy:  0.592
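
For reference, the modified model described above (output layer reduced to 2 neurons for the dogs/cats classes) presumably looks like this; the question does not show the exact code:

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),      # changed to (128, 128) in the second run below
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(2, activation=tf.nn.softmax)  # 2 output classes: dogs and cats
])

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])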

Now, the problem is that if I change the input size from 28x28 to, for example, 128x128, the result is:

Train images shape   :  (8000, 128, 128)
Labels images shape  :  (8000,)
Test images shape    :  (2000, 128, 128)
Test images shape    :  (2000,)


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 16384)             0         
_________________________________________________________________
dense (Dense)                (None, 128)               2097280   
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 258       
=================================================================
Total params: 2,097,538
Trainable params: 2,097,538
Non-trainable params: 0
_________________________________________________________________
None

Epoch 1/5
2018-07-27 15:27:41.966860: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 4s 483us/step - loss: 8.0341 - acc: 0.4993
Epoch 2/5
8000/8000 [==============================] - 3s 362us/step - loss: 8.0590 - acc: 0.5000
Epoch 3/5
8000/8000 [==============================] - 3s 351us/step - loss: 8.0590 - acc: 0.5000
Epoch 4/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
Epoch 5/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
2000/2000 [==============================] - 0s 217us/step

Test accuracy:  0.5

Why? Even if I add new dense layers or increase the number of neurons, the result is the same.

What is the relationship between the input size and the model layers? Thanks!

1 Answer


The problem is that you have many more parameters to train in the second example. In the first example you have only 100k parameters, and you train them with 8k images.

In the second example you have 2000k parameters and try to train them with the same number of images. This does not work, because there is a relationship between the number of free parameters and the number of samples. There is no exact formula to compute this relationship, but as a rule of thumb you should have more samples than trainable parameters.

You can try training for more epochs to see how it behaves, but in general you need more data for more complex models.
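
To make the numbers concrete, here is a small sketch that reproduces the parameter counts shown in the two model summaries above (same Flatten -> Dense(128) -> Dense(2) architecture, only the input size changes):

# parameters of a fully connected layer = inputs * units + units (biases)
def dense_params(inputs, units):
    return inputs * units + units

# 28x28 input:   784 * 128 + 128 = 100480, plus 128 * 2 + 2 = 258 -> 100738 total
print(dense_params(28 * 28, 128) + dense_params(128, 2))     # 100738

# 128x128 input: 16384 * 128 + 128 = 2097280, plus 258           -> 2097538 total
print(dense_params(128 * 128, 128) + dense_params(128, 2))   # 2097538

With only 8000 training images, the 128x128 model has roughly 260 free parameters per training sample, far from the rule of thumb above.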
