
Converting TensorFlow code to Keras


I'm new to CNNs and recently came across Keras. I'm trying to rewrite my TensorFlow code in Keras, but I'm confused. Here is my TensorFlow code.

# Input Data
tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size,  1, nfeatures, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))

# Variables
# layer 1
weights_l1 = tf.Variable(
    tf.truncated_normal([patch_size, patch_size, num_channels, depth], stddev=0.01), name="weights_l1")
biases_l1 = tf.Variable(tf.zeros([depth]), name="biases_l1")

# layer 2
weights_l2 = tf.Variable(
    tf.truncated_normal([(block_sizeX * block_sizeY * 8), num_hidden], stddev=0.01),name="weights_l2")
biases_l2 = tf.Variable(tf.zeros([num_hidden]),name="biases_l2")

# output layer
weights = tf.Variable(
    tf.truncated_normal([num_hidden, num_labels], stddev=0.01),name="weights")
biases = tf.Variable(tf.zeros([num_labels]),name="biases")

global_step = tf.Variable(0.0,name="globalStep") 
init_var = tf.initialize_all_variables() # operation init

# Model
def setupNN(dSet, bDropout):
    input_l1 = tf.nn.conv2d(dSet, weights_l1, [1, 2, 2, 1], padding='SAME')
    output_l1 = tf.nn.relu(input_l1 + biases_l1)

    shape = output_l1.get_shape().as_list()
    reshape = tf.reshape(output_l1, [shape[0], shape[1] * shape[2] * shape[3]])
    output_l3 = tf.nn.relu(tf.matmul(reshape, weights_l2) + biases_l2)
    return tf.matmul(output_l3, weights) + biases

Now, when converting this code to Keras, I'm confused about which of TensorFlow's parameters correspond to num_filter and (kernel_1, kernel_2). These are only excerpts from my Keras code, not the whole model.

# First layer
model.add(Convolution2D(num_filter, (kernel_1, kernel_2), input_shape = (1, nfeatures, 1), activation = 'relu'))
# Second layer
model.add(Convolution2D(num_filter, (kernel_1, kernel_2), input_shape = (1, nfeatures, 1), activation = 'relu'))

When I add kernel_initializer=**, does it take care of the weights by itself?

# Dense layer
model.add(Flatten())
model.add(Dense(128, activation = 'relu', kernel_initializer=initializers.random_normal(stddev=0.01), bias_initializer = 'zeros'))

I know I can pass batch_size through the fit() function:

model.fit(train_dataset, train_labels, batch_size=100, epochs=10, verbose=1,
          validation_split=0.2, callbacks=[EarlyStopping(monitor='val_loss', patience=3)])

1 Answer


For your first question:

num_filter is the dimensionality of the output feature space (the number of filters), and (kernel_1, kernel_2) is the filter size, i.e. the convolution window. Your TensorFlow code uses tf.nn.conv2d; if you look at the documentation, filter is its second argument, which in your case is weights_l1. Accordingly, patch_size, patch_size corresponds to the kernel size, and depth corresponds to num_filter.

For the second question: if you add kernel_initializer=**, it means the weights (variables) are initialized with the given values. The same concept applies in TensorFlow, where a weight is a variable that must be initialized with a value at the start.
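To make the mapping concrete, here is a minimal sketch of the model above written in Keras. The hyperparameter values are placeholders, not taken from your code; what matters is the correspondence: depth plays the role of num_filter, (patch_size, patch_size) is the kernel size, and the [1, 2, 2, 1] stride becomes strides=(2, 2).

```python
from tensorflow.keras import Sequential, initializers
from tensorflow.keras.layers import Conv2D, Dense, Flatten

# Placeholder hyperparameters (assumed values, not from the question)
patch_size, depth = 3, 8          # kernel size and number of filters
nfeatures, num_channels = 16, 1   # input width and input channels
num_hidden, num_labels = 128, 10  # dense layer sizes

model = Sequential([
    # Equivalent of tf.nn.conv2d(dSet, weights_l1, [1, 2, 2, 1],
    # padding='SAME') followed by tf.nn.relu(... + biases_l1).
    # Keras creates and initializes the kernel/bias variables itself.
    Conv2D(depth, (patch_size, patch_size), strides=(2, 2), padding='same',
           activation='relu',
           kernel_initializer=initializers.RandomNormal(stddev=0.01),
           bias_initializer='zeros',
           input_shape=(1, nfeatures, num_channels)),
    # Replaces the manual tf.reshape before the fully-connected layer
    Flatten(),
    # Equivalent of matmul with weights_l2 / biases_l2 + ReLU
    Dense(num_hidden, activation='relu',
          kernel_initializer=initializers.RandomNormal(stddev=0.01),
          bias_initializer='zeros'),
    # Output layer: weights / biases
    Dense(num_labels),
])
```

Note that input_shape only belongs on the first layer; subsequent layers infer their input shape automatically, so you should drop it from your second Convolution2D call.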
