I am new to Keras, TensorFlow, and Python, and I am trying to build a model for personal use / future learning. I just started with Python and came up with this code (with the help of some videos and tutorials). My problem is that my Python memory usage slowly climbs with every epoch, and even after a new model is built. Once memory is at 100%, training just stops with no error or warning. I don't know much yet, but the problem should be somewhere in the loop (if I'm not mistaken). I know about
keras.backend.clear_session()
but either it did not remove the problem or I don't know how to integrate it into my code. I have: Python 3.6.4, TensorFlow 2.0.0rc1 (CPU version), Keras 2.3.0.
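For what it's worth, this is roughly where I imagined clear_session() would go - just a sketch, and I am not sure this placement is right or that it is enough:

    from tensorflow.keras import backend as K

    for dense_layer in dense_layers:
        for layer_size in layer_sizes:
            for conv_layer in conv_layers:
                K.clear_session()  # my guess: drop the previous model/graph before building the next one
                model = Sequential()
                # ... build, compile and fit exactly as in the full code below ...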
And here is my full code:
import pandas as pd
import os
import time
import tensorflow as tf
import numpy as np
import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
EPOCHS = 25
BATCH_SIZE = 32
df = pd.read_csv("EntryData.csv", names=['1SH5', '1SHA', '1SA5', '1SAA', '1WH5', '1WHA',
                                          '2SA5', '2SAA', '2SH5', '2SHA', '2WA5', '2WAA',
                                          '3R1', '3R2', '3R3', '3R4', '3R5', '3R6',
                                          'Target'])
df_val = 14554
# split into training and validation sets by row index
validation_df = df[df.index > df_val]
df = df[df.index <= df_val]
train_x = df.drop(columns=['Target'])
train_y = df[['Target']]
validation_x = validation_df.drop(columns=['Target'])
validation_y = validation_df[['Target']]
train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
validation_x = np.asarray(validation_x)
validation_y = np.asarray(validation_y)
# reshape to (samples, timesteps, features) for the LSTM layers
train_x = train_x.reshape(train_x.shape[0], 1, train_x.shape[1])
validation_x = validation_x.reshape(validation_x.shape[0], 1, validation_x.shape[1])
dense_layers = [0, 1, 2]
layer_sizes = [32, 64, 128]
conv_layers = [1, 2, 3]
for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME = "{}-conv-{}-nodes-{}-dense-{}".format(conv_layer, layer_size,
                                                         dense_layer, int(time.time()))
            tensorboard = TensorBoard(log_dir="logs\{}".format(NAME))
            print(NAME)

            model = Sequential()
            model.add(LSTM(layer_size, input_shape=(train_x.shape[1:]),
                           return_sequences=True))
            model.add(Dropout(0.2))
            model.add(BatchNormalization())

            for l in range(conv_layer-1):
                model.add(LSTM(layer_size, return_sequences=True))
                model.add(Dropout(0.1))
                model.add(BatchNormalization())

            for l in range(dense_layer):
                model.add(Dense(layer_size, activation='relu'))
                model.add(Dropout(0.2))

            model.add(Dense(2, activation='softmax'))

            opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

            # Compile model
            model.compile(loss='sparse_categorical_crossentropy',
                          optimizer=opt,
                          metrics=['accuracy'])

            # unique file name that will include the epoch
            # and the validation acc for that epoch
            filepath = "RNN_Final.{epoch:02d}-{val_accuracy:.3f}"
            checkpoint = ModelCheckpoint("models\{}.model".format(filepath,
                                         monitor='val_acc', verbose=0, save_best_only=True,
                                         mode='max'))  # saves only the best ones

            # Train model
            history = model.fit(
                train_x, train_y,
                batch_size=BATCH_SIZE,
                epochs=EPOCHS,
                validation_data=(validation_x, validation_y),
                callbacks=[tensorboard, checkpoint])

            # Score model
            score = model.evaluate(validation_x, validation_y, verbose=2)
            print('Test loss:', score[0])
            print('Test accuracy:', score[1])

            # Save model
            model.save("models\{}".format(NAME))
I also know that it is not ideal to ask 2 questions in 1 post (I don't want to spam this site with questions that anyone with a bit of Python experience could solve in a minute), but I also have a problem with the checkpoint saving. I want to save only the best performing model (1 model per NN specification - number of nodes/layers), but at the moment it saves after every epoch. If it is not appropriate to ask this here, I can create a separate question for it.
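What I was hoping the checkpoint would do is something like this (just my reading of the ModelCheckpoint arguments, which may well be wrong):

    checkpoint = ModelCheckpoint(
        "models\{}.model".format(filepath),   # only the file name goes through format()
        monitor='val_accuracy',               # metric I want to track
        verbose=0,
        save_best_only=True,                  # save only when the monitored metric improves
        mode='max')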
Thank you very much for any help.