How to handle padding when using the sequence_length parameter in dynamic_rnn

Posted 2020-07-20 03:54

I'm trying to use the dynamic_rnn function in TensorFlow to speed up training. After doing some reading, my understanding is that one way to speed up training is to explicitly pass a value to the sequence_length parameter of this function. After a bit more reading, and finding this SO explanation, it seems that what I need to pass is a vector (maybe defined by a tf.placeholder) that contains the length of each sequence within a batch.

Here's where I'm confused: in order to take advantage of this, should I pad each of my batches to the longest sequence within the batch instead of the longest sequence in the training set? How does TensorFlow handle the remaining zeros/pad-tokens in the shorter sequences? Also, is the main advantage here really speed, or just extra assurance that we're masking pad-tokens during training? Any help/context would be appreciated.

1 Answer
chillily · answered 2020-07-20 04:31

should I pad each of my batches to the longest sequence within the batch instead of the longest sequence in the training set?

The sequences within a batch must be aligned, i.e., they all have to have the same length. So the general answer to your question is "yes". But different batches don't have to be the same length, so you can stratify input sequences into groups of roughly the same size and pad each group accordingly. This technique is called bucketing, and you can read about it in this tutorial.
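
To give a rough idea of what bucketing looks like, here's a minimal NumPy sketch. The bucket_batches helper and its name are my own, and it assumes 1-D integer token sequences rather than the per-step feature vectors used in the example below:

import numpy as np

def bucket_batches(sequences, batch_size, pad_value=0):
    # Sort sequence indices by length so each batch groups
    # similarly-sized sequences together.
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    for start in range(0, len(order), batch_size):
        batch = [sequences[i] for i in order[start:start + batch_size]]
        seq_lengths = np.array([len(s) for s in batch])
        # Pad only to the longest sequence in *this* batch,
        # not the longest in the whole training set.
        padded = np.full((len(batch), seq_lengths.max()), pad_value)
        for row, seq in enumerate(batch):
            padded[row, :len(seq)] = seq
        yield padded, seq_lengths  # feed these as X and seq_length

If you're using the tf.data pipeline, tf.data.experimental.bucket_by_sequence_length does essentially the same thing out of the box.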

How does TensorFlow handle the remaining zeros/pad-tokens in the shorter sequences?

It's pretty intuitive. tf.nn.dynamic_rnn returns two tensors: output and states. Suppose the actual sequence length is t and the padded sequence length is T.

Then the output will contain zeros for every step i > t, and states will contain the t-th cell state, ignoring the states of the trailing padded cells.

Here's an example:

import numpy as np
import tensorflow as tf

n_steps = 2    # padded (maximum) sequence length T
n_inputs = 3   # input features per time step
n_neurons = 5  # RNN state size

# Inputs: a batch of padded sequences plus their true lengths.
X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None])

basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X,
                                    sequence_length=seq_length, dtype=tf.float32)

X_batch = np.array([
  # t = 0      t = 1
  [[0, 1, 2], [9, 8, 7]], # instance 0
  [[3, 4, 5], [0, 0, 0]], # instance 1 (second step is padding)
  [[6, 7, 8], [6, 5, 4]], # instance 2
])
seq_length_batch = np.array([2, 1, 2])  # true length of each instance

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  outputs_val, states_val = sess.run([outputs, states], feed_dict={
    X: X_batch,
    seq_length: seq_length_batch
  })
  print(outputs_val)
  print()
  print(states_val)

Note that instance 1 is padded, so outputs_val[1,1] is a zero vector and states_val[1] == outputs_val[1,0]:

[[[ 0.76686853  0.8707901  -0.79509073  0.7430128   0.63775384]
  [ 1.          0.7427926  -0.9452815  -0.93113345 -0.94975543]]

 [[ 0.9998851   0.98436266 -0.9620067   0.61259484  0.43135557]
  [ 0.          0.          0.          0.          0.        ]]

 [[ 0.99999994  0.9982034  -0.9934515   0.43735617  0.1671598 ]
  [ 0.99999785 -0.5612586  -0.57177305 -0.9255771  -0.83750355]]]

[[ 1.          0.7427926  -0.9452815  -0.93113345 -0.94975543]
 [ 0.9998851   0.98436266 -0.9620067   0.61259484  0.43135557]
 [ 0.99999785 -0.5612586  -0.57177305 -0.9255771  -0.83750355]]

Also, is the main advantage here really speed, or just extra assurance that we're masking pad-tokens during training?

Of course, batch processing is more efficient than feeding the sequences one by one. But the main advantage of specifying the length is that you get a meaningful final state out of the RNN, i.e., padded items don't affect the result tensors. You will get exactly the same result (and the same speed) if you don't set the length but select the right states manually.
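
For completeness, here's a minimal sketch of that manual alternative, assuming the outputs and seq_length tensors from the example above (the last_states name is just for illustration):

# Without sequence_length, the RNN keeps running over the pads,
# but since the RNN is causal, the output at each sequence's last
# real step is unaffected, so we can gather it by hand.
batch_size = tf.shape(outputs)[0]
indices = tf.stack([tf.range(batch_size), seq_length - 1], axis=1)
last_states = tf.gather_nd(outputs, indices)  # shape [batch_size, n_neurons]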
