I'm currently using the Keras Tokenizer to create a word index and then matching that word index to the imported GloVe dictionary to create an embedding matrix. The problem I have is that this seems to defeat one of the advantages of using a pre-trained word embedding: when the trained model is used for predictions and runs into a new word that's not in the tokenizer's word index, that word is simply dropped from the sequence.
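For illustration, here is a minimal, made-up example of that behaviour (the sentences are hypothetical, not from my data):

from keras.preprocessing.text import Tokenizer

toy_tokenizer = Tokenizer()
toy_tokenizer.fit_on_texts(["the cat sat on the mat"])
# "dog" was never seen during fitting, so it is silently dropped:
print(toy_tokenizer.texts_to_sequences(["the dog sat on the mat"]))
# roughly [[1, 3, 4, 1, 5]] -- five indices for a six-word sentence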
import numpy as np
from keras.preprocessing.text import Tokenizer

# fit the tokenizer on the training texts
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
word_index = tokenizer.word_index
# load the GloVe embeddings into a dict: word -> vector
embeddings_index = {}
dims = 100
glove_data = 'glove.6B.' + str(dims) + 'd.txt'
with open(glove_data, encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]
        value = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = value
# create the embedding matrix; row i holds the GloVe vector for the word with index i
# (words not found in the GloVe index stay all-zeros)
embedding_matrix = np.zeros((len(word_index) + 1, dims))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector[:dims]
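As a quick sanity check (just illustrative, reusing the variables above), the coverage of the tokenizer's vocabulary can be counted like this:

covered = sum(1 for word in word_index if word in embeddings_index)
print(covered, "of", len(word_index), "tokenizer words have a GloVe vector")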
# Embedding layer initialised with the GloVe weights
from keras.layers import Embedding

embedding_layer = Embedding(embedding_matrix.shape[0],
                            embedding_matrix.shape[1],
                            weights=[embedding_matrix],
                            input_length=12)
# then to make a prediction (sequences padded to the layer's input_length)
from keras.preprocessing.sequence import pad_sequences

sequence = tokenizer.texts_to_sequences(["Test sentence"])
padded = pad_sequences(sequence, maxlen=12)
model.predict(padded)
So is there a way I can still use the tokenizer to transform sentences into an array while making use of as much of the GloVe dictionary as possible, instead of only the words that show up in my training text?
Edit: Upon further contemplation, I guess one option would be to add a text (or texts) containing the keys of the GloVe dictionary to the texts that the tokenizer is fit on. Though that might mess with some of the statistics if I want to use tf-idf. Is there a preferred way of doing this, or a different, better approach?
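For concreteness, a minimal sketch of that option (reusing texts and embeddings_index from above, and assuming texts is a plain list of strings); note that this bumps every GloVe word's count by one, which is exactly what would skew the tf-idf statistics:

glove_vocab_text = " ".join(embeddings_index.keys())
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts + [glove_vocab_text])
word_index = tokenizer.word_index  # now also covers the full GloVe vocabulary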