Restore original text from Keras’s imdb dataset
I want to restore the original IMDB review text from Keras's imdb dataset.
First, when I load Keras's imdb dataset, it returns sequences of word indices.
>>> (X_train, y_train), (X_test, y_test) = imdb.load_data()
>>> X_train[0]
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 22665, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 21631, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 31050, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
I found the imdb.get_word_index() method; it returns a word-to-index dictionary like {'create': 984, 'make': 94, ...}. To convert, I create an index-to-word dictionary.
>>> word_index = imdb.get_word_index()
>>> index_word = {v:k for k,v in word_index.items()}
Then I tried to restore the original text as follows.
>>> ' '.join(index_word.get(w) for w in X_train[5])
"the effort still been that usually makes for of finished sucking ended cbc's an because before if just though something know novel female i i slowly lot of above freshened with connect in of script their that out end his deceptively i i"
I'm not good at English, but I can tell this sentence is strange.
Why did this happen? How can I restore the original text?
To get an equivalent array of all the reviews:
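Roughly like this (a sketch: decode_review is just an illustrative helper name, and the +3 shift accounts for the three reserved indices discussed in the other answers):

word_index = imdb.get_word_index()
# shift by 3 because indices 0, 1 and 2 are reserved for padding/start/unknown
index_word = {idx + 3: word for word, idx in word_index.items()}

def decode_review(encoded_review):
    return ' '.join(index_word.get(idx, '?') for idx in encoded_review)

decoded_reviews = [decode_review(review) for review in X_train]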
This encoding will work along with the labels:
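For example (continuing the sketch above), each decoded review still lines up with its label:

for text, label in zip(decoded_reviews[:3], y_train[:3]):
    print(label, text[:60])  # label and the start of the review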
Upvote if it helps. :)
This happened because of a basic NLP data preparation step. Loads of the so-called stop words were removed from the text in order to make learning feasible. Usually most of the punctuation and the less frequent words are also removed during preprocessing. I think the only way to restore the original text is to find the closest matching texts on IMDB, using e.g. Google's browser API.

The indices are offset by 3 because 0, 1 and 2 are reserved indices for "padding", "start of sequence" and "unknown". The following should work.
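Something like this sketch, reusing X_train and word_index from the question (the <PAD>, <START> and <UNK> strings are just labels chosen here for the reserved indices):

word_index = imdb.get_word_index()
# shift every real word up by 3 so it matches the indices in X_train
word_index = {word: idx + 3 for word, idx in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2

index_word = {idx: word for word, idx in word_index.items()}
print(' '.join(index_word.get(idx, '?') for idx in X_train[0]))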
This works for me:
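For instance, this minimal sketch keeps the dictionaries from the question and simply subtracts the offset during lookup ('?' is an arbitrary placeholder for the reserved indices):

from keras.datasets import imdb

(X_train, y_train), (X_test, y_test) = imdb.load_data()
word_index = imdb.get_word_index()
index_word = {v: k for k, v in word_index.items()}

# subtract 3 because 0, 1 and 2 are reserved for padding/start/unknown
print(' '.join(index_word.get(i - 3, '?') for i in X_train[0]))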
Your example is coming out as gibberish; it's much worse than just some missing stop words.

If you re-read the docs for the start_char, oov_char, and index_from parameters of the [keras.datasets.imdb.load_data](https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification) method, they explain what is happening:

start_char: int. The start of a sequence will be marked with this character. Set to 1 because 0 is usually the padding character.

oov_char: int. Words that were cut out because of the num_words or skip_top limit will be replaced with this character.

index_from: int. Index actual words with this index and higher.

That dictionary you inverted assumes the word indices start from 1. But the indices returned by Keras have <START> and <UNKNOWN> as indexes 1 and 2. (And it assumes you will use 0 for <PADDING>.)

This works for me:
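A sketch along those lines, loading the data with an explicit index_from and shifting get_word_index() to match (the <PAD>/<START>/<UNK> names are arbitrary labels, not something the dataset provides):

import keras

INDEX_FROM = 3  # load_data()'s default word index offset

(train_x, train_y), (test_x, test_y) = keras.datasets.imdb.load_data(index_from=INDEX_FROM)

word_to_id = keras.datasets.imdb.get_word_index()
word_to_id = {k: (v + INDEX_FROM) for k, v in word_to_id.items()}
word_to_id["<PAD>"] = 0    # padding
word_to_id["<START>"] = 1  # start_char
word_to_id["<UNK>"] = 2    # oov_char

id_to_word = {v: k for k, v in word_to_id.items()}
print(' '.join(id_to_word.get(i, '?') for i in train_x[0]))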
The punctuation is missing, but that's all.