I'm currently working with a Keras model whose first layer is an embedding layer. To visualize the relationships and similarities between words, I need a function that returns the mapping from each word in the vocabulary to its vector (e.g. 'love' - [0.21, 0.56, ..., 0.65, 0.10]).
Is there any way to do it?
You can get the word embeddings with the `get_weights()` method of the embedding layer (the weights of an embedding layer are precisely the embedding vectors):
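Here is a minimal sketch. The vocabulary, the `word_index` mapping, and the model are placeholders for illustration; substitute your own tokenizer's word index and your trained model.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# Hypothetical vocabulary and word -> index mapping; use your tokenizer's in practice.
vocab = ['love', 'hate', 'cat', 'dog']
word_index = {w: i for i, w in enumerate(vocab)}

# Hypothetical model whose first layer is an Embedding layer.
model = Sequential([Embedding(input_dim=len(vocab), output_dim=8)])
model.build(input_shape=(None, 1))  # build so the weights are created

# get_weights() returns a list; the first (and only) element is the
# embedding matrix of shape (vocab_size, embedding_dim).
embeddings = model.layers[0].get_weights()[0]

# Map each word to its embedding vector (row of the matrix).
word_vectors = {w: embeddings[i] for w, i in word_index.items()}
print(word_vectors['love'].shape)  # (8,)
```

Note that each row of the weight matrix corresponds to the word whose integer index equals the row number, so the word-to-vector mapping is just a lookup of rows by the tokenizer's indices.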