I want to train a convolutional autoencoder on time-series data to learn features and make predictions later. I have put all the data into one dataframe with 100 million rows (1 column). Each measurement gives me 40,000 points, so I have 2,500 measurements. I know I can't feed the network all of the data at once, right? How do I change the code so that it learns step by step and then makes predictions, and how do I change the layer sizes? I took the code from https://blog.keras.io/building-autoencoders-in-keras.html and want to adapt it to my case, but I get an error with the input shape and I think there is more that needs fixing. Or would an LSTM autoencoder be better? As you can tell, I am new to this topic, and right now I don't know where to start with my problem. If you need more specific information, I will do my best to explain.
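
I assume the dataframe first has to be reshaped so that each measurement becomes one row. A minimal sketch of what I mean, assuming the dataframe is called df, has a single column, and the 40,000 points of each measurement are stored consecutively:

import numpy as np

values = df.values.reshape(2500, 40000).astype('float32')   # one row per measurement
# scaled to [0, 1] because the decoder in the blog code ends in a sigmoid
values = (values - values.min()) / (values.max() - values.min())

This is the code I have so far, adapted from the blog post: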

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
import numpy as np

input_img = Input(shape=(40000,))  # one 40,000-point measurement per sample
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2 , 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from sklearn.model_selection import train_test_split
x_train, x_test = train_test_split(input_x, test_size=0.1)  # input_x holds the measurement data (loading not shown)
# my attempt to add the extra dimensions the Conv2D layers seem to expect
x_train = [x_train, x_train]
x_train = np.array([x_train, x_train])
from keras.callbacks import TensorBoard
autoencoder.fit(x_train, x_train,
            epochs=10,
            batch_size=40000,
            shuffle=False,
            validation_data=(x_test, x_test),
            callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
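
Would something along these lines be a better starting point for 1D data? A minimal sketch of a Conv1D version, assuming the reshaped values array from above, a channel axis added with np.expand_dims, mean squared error instead of binary cross-entropy (the signals are not binary images), and layer sizes that are only a guess:

import numpy as np
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model
from sklearn.model_selection import train_test_split

# Conv1D expects (samples, timesteps, channels), so add a channel axis
x_all = np.expand_dims(values, axis=-1)              # -> (2500, 40000, 1)
x_train, x_test = train_test_split(x_all, test_size=0.1)

input_sig = Input(shape=(40000, 1))
x = Conv1D(16, 3, activation='relu', padding='same')(input_sig)
x = MaxPooling1D(2, padding='same')(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
encoded = MaxPooling1D(2, padding='same')(x)         # compressed to length 5000

x = Conv1D(8, 3, activation='relu', padding='same')(encoded)
x = UpSampling1D(2)(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)
decoded = Conv1D(1, 3, activation='linear', padding='same')(x)   # back to (40000, 1)

autoencoder = Model(input_sig, decoded)
autoencoder.compile(optimizer='adadelta', loss='mse')

# a small batch_size feeds the 2,500 measurements to the network a few at a
# time instead of all at once
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=8,
                shuffle=True,
                validation_data=(x_test, x_test))

With padding='same' everywhere, the decoder output length matches the 40,000-point input, so the reconstruction loss lines up point for point.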