I am trying to learn how to build an RNN for speech recognition using TensorFlow. As a start, I wanted to try out some of the example models put up on the TensorFlow page, TF-RNN.
As per what was advised, I took some time to understand how word IDs are embedded into a dense representation (vector representation) by working through the basic version of the word2vec model code. I had an understanding of what tf.nn.embedding_lookup actually does, until I encountered the same function being used with a two-dimensional array in TF-RNN's ptb_word_lm.py, at which point it did not make sense any more.
What I thought tf.nn.embedding_lookup does:

Given a 2-d array params and a 1-d array ids, tf.nn.embedding_lookup fetches the rows of params corresponding to the indices given in ids. That is consistent with the shape of the output it returns: one row of params per entry in ids, so the result is 2-d.
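For concreteness, here is a minimal sketch of the case I thought I understood (the array values and variable names are made up for illustration):

    import numpy as np
    import tensorflow as tf

    # 5 embeddings of size 2; values chosen so rows are easy to recognize
    params = tf.constant(np.arange(10.0).reshape(5, 2))
    ids = tf.constant([3, 0, 3])  # 1-d: plain row indices

    with tf.Session() as sess:
        result = sess.run(tf.nn.embedding_lookup(params, ids))
        print(result.shape)  # (3, 2): one row of params per id
        print(result)        # rows 3, 0 and 3 of params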
What I am confused about:

When tried with the same params but a 2-d array ids, tf.nn.embedding_lookup returns a 3-d array instead of a 2-d one, and I do not understand why.
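A small sketch of the case that confuses me, using the same made-up params as above:

    import numpy as np
    import tensorflow as tf

    params = tf.constant(np.arange(10.0).reshape(5, 2))
    ids = tf.constant([[1, 2], [3, 4]])  # 2-d, shape (2, 2)

    with tf.Session() as sess:
        result = sess.run(tf.nn.embedding_lookup(params, ids))
        print(result.shape)  # (2, 2, 2): 3-d, not the 2-d I expected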
I looked up the manual for Embedding Lookup, but I still find it difficult to understand how the partitioning works and what result is returned. I recently tried a simple example with tf.nn.embedding_lookup, and it appears to return different values each time. Is this behaviour due to the randomness involved in partitioning?
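This is roughly what I tried (a sketch; the sizes and names are my own), where params is a randomly initialized variable as in word2vec_basic.py:

    import tensorflow as tf

    # params is a randomly initialized variable, like the embedding
    # matrix in word2vec_basic.py
    params = tf.Variable(tf.random_uniform([5, 2], -1.0, 1.0))
    ids = tf.constant([0, 1])

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # every fresh run of this script prints different numbers
        print(sess.run(tf.nn.embedding_lookup(params, ids)))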
Please help me understand how tf.nn.embedding_lookup works, and why it is used in both word2vec_basic.py and ptb_word_lm.py, i.e., what is the purpose of using it at all?