I'm exploring the different feature extraction classes that scikit-learn
provides. Reading the documentation, I did not understand very well what DictVectorizer
can be used for. Other questions come to mind as well: for example, how can DictVectorizer
be used for text classification, i.e. how does this class help handle labelled textual data? Could anybody provide a short example apart from the one I already read on the documentation page?
Answer 1:
Say your feature space consists of length, width and height, and you have 3 observations, i.e. you measure the length, width and height of 3 objects:
       length  width  height
obs.1       1      0       2
obs.2       0      1       1
obs.3       3      2       1
Another way to show this is with a list of dictionaries:
[{'height': 1, 'length': 0, 'width': 1},  # obs.2
 {'height': 2, 'length': 1, 'width': 0},  # obs.1
 {'height': 1, 'length': 3, 'width': 2}]  # obs.3
DictVectorizer goes the other way around; i.e. given the list of dictionaries, it builds the table at the top:
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer(sparse=False)
>>> d = [{'height': 1, 'length': 0, 'width': 1},
...      {'height': 2, 'length': 1, 'width': 0},
...      {'height': 1, 'length': 3, 'width': 2}]
>>> v.fit_transform(d)
array([[ 1.,  0.,  1.],   # obs.2
       [ 2.,  1.,  0.],   # obs.1
       [ 1.,  3.,  2.]])  # obs.3
# columns: height, length, width
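Regarding the text classification part of the question: each document can be turned into a dict of word counts (or any other per-document feature dict) and fed to DictVectorizer exactly like the measurements above. Below is a minimal sketch assuming a recent scikit-learn (get_feature_names_out needs >= 1.0); the toy corpus, the labels and the MultinomialNB classifier are illustrative choices of mine, not part of the documentation example.

from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labelled corpus (made up for illustration).
docs = ["the cat sat on the mat",
        "the dog ate my homework",
        "the cat chased the dog"]
labels = ["cats", "dogs", "cats"]

# One word-count dict per document; this is the kind of input DictVectorizer expects.
counts = [Counter(doc.split()) for doc in docs]

v = DictVectorizer(sparse=False)
X = v.fit_transform(counts)           # rows = documents, columns = vocabulary terms
print(v.get_feature_names_out())      # column order of the feature matrix

clf = MultinomialNB().fit(X, labels)

# A new document goes through the same vectorizer with transform(), not fit_transform();
# words not seen during fit are simply ignored.
new_doc = Counter("my cat chased a mouse".split())
print(clf.predict(v.transform([new_doc])))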