I have a Word2Vec model that was trained in Gensim. How can I use it in TensorFlow for word embeddings? I don't want to train the embeddings from scratch in TensorFlow. Can someone show me how to do this with some example code?
Let's assume you have a `dictionary` and an `inverse_dict` list, with the list index corresponding to the most common words:
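A minimal sketch of such a mapping (the four-word vocabulary here is purely illustrative):

```python
# Word -> integer id, plus the inverse list where index i holds the
# word with id i. The vocabulary is made up for this example.
dictionary = {'hello': 0, 'world': 1, 'neural': 2, 'net': 3}
inverse_dict = ['hello', 'world', 'neural', 'net']
```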
Notice how the `inverse_dict` index corresponds to the `dictionary` values. Now declare your embedding matrix and fill it with the vectors from the Gensim model:
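Something along these lines, assuming a model saved as `my_word2vec.model` (the filename is an assumption) and the Gensim convention of keeping the trained vectors under `model.wv`:

```python
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

model = Word2Vec.load('my_word2vec.model')  # your trained model

vocab_size = len(inverse_dict)
embedding_dim = model.vector_size

# Row i holds the Word2Vec vector for the word whose id in
# `dictionary` is i; words missing from the model stay all-zero.
embedding_matrix = np.zeros((vocab_size, embedding_dim), dtype=np.float32)
for i, word in enumerate(inverse_dict):
    if word in model.wv:
        embedding_matrix[i] = model.wv[word]

# A constant, since we are only using the embeddings, not retraining them.
embeddings = tf.constant(embedding_matrix)
```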
You've got your embedding matrix. Good. Now let's assume you want to train on the sample `x = ['hello', 'world']`. But raw strings won't work for our neural net, so we need to integerize first. Once that's done, we can embed our samples on the fly:
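A sketch of both steps, reusing the illustrative `dictionary` and the `embeddings` constant from above:

```python
x = ['hello', 'world']

# Integerize: map each token to its id in `dictionary`.
x_ids = [dictionary[w] for w in x]  # -> [0, 1]

# Embed on the fly: each id selects a row of the pretrained matrix.
embedded_x = tf.nn.embedding_lookup(embeddings, x_ids)
# embedded_x has shape (len(x), embedding_dim)
```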
Now `embedded_x` goes into your convolution or whatever comes next. I am also assuming you are not retraining the embeddings, but simply using them. Hope that helps.