tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None)
I cannot understand what this function does. Is it like a lookup table, i.e. does it return the parameters corresponding to each id (in ids)?

For instance, in the skip-gram model, if we use tf.nn.embedding_lookup(embeddings, train_inputs), does it then find the corresponding embedding for each train_input?
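For example, here is a small self-contained sketch of what I expect it to do (the toy values are mine):

import tensorflow as tf

embeddings = tf.constant([[0.0, 0.1], [1.0, 1.1], [2.0, 2.1]])  # 3 "words", 2-dim embeddings
train_inputs = tf.constant([2, 0])                              # word ids to look up
embed = tf.nn.embedding_lookup(embeddings, train_inputs)

with tf.Session() as session:
    print(session.run(embed))  # [[2.  2.1]
                               #  [0.  0.1]]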
Here's an image depicting the process of embedding lookup.
Concisely, it gets the corresponding rows of an embedding layer, specified by a list of IDs, and provides them as a tensor. This is achieved through the following process.
import tensorflow as tf

lookup_ids = tf.placeholder(tf.int32, shape=[None])            # IDs of the rows to fetch
embeddings = tf.Variable(tf.random_normal([100, 10]))          # 100 embeddings of size 10
embed_lookup = tf.nn.embedding_lookup(embeddings, lookup_ids)

session = tf.Session()
session.run(tf.global_variables_initializer())
lookup = session.run(embed_lookup, feed_dict={lookup_ids: [95, 4, 14]})
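lookup is then a (3, 10) array holding rows 95, 4 and 14 of embeddings.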
When the params tensor has more than two dimensions, the ids refer only to the top (first) dimension. Maybe it's obvious to most people, but I had to run the following code to understand that:
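(The original snippet isn't preserved here; this is a minimal sketch that demonstrates the same point, with shapes and values of my own choosing.)

import tensorflow as tf

# params has shape (2, 3, 4); each id selects one (3, 4) slice along the first axis
params = tf.constant([[[ 1,  2,  3,  4], [ 5,  6,  7,  8], [ 9, 10, 11, 12]],
                      [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]])
ids = tf.constant([0, 1, 0])
result = tf.nn.embedding_lookup(params, ids)  # shape (3, 3, 4)

with tf.Session() as session:
    print(session.run(result))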
I also tried the 'div' strategy; for a single tensor it makes no difference, since partition_strategy only matters when params is a list of tensors.
Here is the output of the sketch above:
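[[[ 1  2  3  4]
  [ 5  6  7  8]
  [ 9 10 11 12]]

 [[13 14 15 16]
  [17 18 19 20]
  [21 22 23 24]]

 [[ 1  2  3  4]
  [ 5  6  7  8]
  [ 9 10 11 12]]]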