Keras embedding layer masking. Why does input_dim need to be |vocabulary| + 2?

Published 2020-07-11 05:47

Question:

In the Keras docs for Embedding https://keras.io/layers/embeddings/, the explanation given for mask_zero is

mask_zero: Whether or not the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal |vocabulary| + 2).

Why does input_dim need to be 2 + number of words in vocabulary? Assuming 0 is masked and can't be used, shouldn't it just be 1 + number of words? What is the other extra entry for?

Answer 1:

I believe the docs are a bit misleading there. In the normal case you map your n input data indices [0, 1, 2, ..., n-1] to vectors, so input_dim must simply equal the number of indices you have:

input_dim = len(vocabulary_indices)

An equivalent (but slightly confusing) way to say this, and the way the docs phrase it, is to say

1 + maximum integer index occurring in the input data.

input_dim = max(vocabulary_indices) + 1

If you enable masking, the value 0 is reserved for padding and treated differently, so you shift your n vocabulary indices up by one; the layer now has to cover the indices [0, 1, 2, ..., n-1, n], and thus you need

input_dim = len(vocabulary_indices) + 1

or alternatively

input_dim = max(vocabulary_indices) + 2

The docs become especially confusing here as they say

(input_dim should equal |vocabulary| + 2)

where I would interpret |x| as the cardinality of a set (equivalent to len(x)), but the authors seem to mean

2 + maximum integer index occurring in the input data.
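
Putting this together, here is a minimal runnable sketch of the masked case (assuming TensorFlow 2.x's bundled Keras; the three-word vocabulary, output_dim, and sequence length are illustrative, not from the question):

import numpy as np
import tensorflow as tf

vocab = ["cat", "dog", "bird"]                            # n = 3 real tokens
word_to_index = {w: i + 1 for i, w in enumerate(vocab)}   # 0 reserved for padding

embedding = tf.keras.layers.Embedding(
    input_dim=len(vocab) + 1,   # rows 0..3: one per token plus the padding row 0
    output_dim=8,
    mask_zero=True,
)

# Two sequences padded to length 4 with the reserved index 0.
batch = np.array([[word_to_index["cat"], word_to_index["dog"], word_to_index["bird"], 0],
                  [word_to_index["bird"], word_to_index["cat"], 0, 0]])
print(embedding(batch).shape)         # (2, 4, 8)
print(embedding.compute_mask(batch))  # True wherever the input is not 0

With len(vocab) + 1 = 4 rows the layer covers every index that actually occurs; that agrees with max(vocabulary_indices) + 2 above (max index 2 in the original 0-based vocabulary, plus 2), but not with a literal |vocabulary| + 2.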



Answer 2:

Because input_dim is already the vocabulary size plus 1 (that is how the docs define it, quoted below), you just add another +1 for the reserved 0 and get the +2.

input_dim: int > 0. Size of the vocabulary, i.e. 1 + maximum integer index occurring in the input data.
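
Spelled out as arithmetic (a sketch of this answer's reading, with a hypothetical three-word vocabulary; the numbers are not from the docs):

vocab_size = 3                  # |vocabulary|
# First +1: the definition above already counts one past the vocabulary.
# Second +1: index 0 is given up to the mask.
input_dim = vocab_size + 2      # the docs' |vocabulary| + 2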