I'm aiming to build an index that breaks each document down into word n-grams (uni-, bi-, and trigrams) and then captures term-vector statistics for all of those n-grams. Is that possible with Elasticsearch?
For instance, for a document field containing "The red car drives.", I would be able to get:
red - 1 instance
car - 1 instance
drives - 1 instance
red car - 1 instance
car drives - 1 instance
red car drives - 1 instance
Thanks in advance!
Assuming you already know about the Term Vectors API, you can apply the shingle token filter at index time so that the shingles are added to the token stream as terms in their own right.
Set max_shingle_size
to at least 3 (the default is 2) so that trigrams are emitted. Unigrams are already included, because the filter's output_unigrams
option defaults to true; note that min_shingle_size
cannot be set below 2.
And since you left "the" out of the expected terms, you should also apply a stop-words filter before the shingle filter.
The analyzer settings would be something like this:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "evolutionAnalyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "custom_stop",
            "custom_shingle"
          ]
        }
      },
      "filter": {
        "custom_stop": {
          "type": "stop",
          "stopwords": "_english_",
          "enable_position_increments": "false"
        },
        "custom_shingle": {
          "type": "shingle",
          "min_shingle_size": "2",
          "max_shingle_size": "3",
          "output_unigrams": true
        }
      }
    }
  }
}
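To actually get term vectors back for those shingles, the field has to use this analyzer and (ideally) store its term vectors, so the _termvectors request doesn't have to recompute them on the fly. A sketch of the mapping, where the field name content is just an example:

{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "evolutionAnalyzer",
        "term_vector": "with_positions_offsets"
      }
    }
  }
}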
You can test the analyzer using the _analyze API endpoint.
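If you don't have a cluster handy, here is a rough local simulation in plain Python of what the chain above should emit for the example sentence. It is purely illustrative (a tiny stop-word set stands in for _english_, and the real filter interleaves unigrams and shingles by position, so the ordering will differ), but the set of tokens matches the expected output:

```python
import re

# Tiny stand-in for the _english_ stop-word list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in"}

def analyze(text, max_shingle_size=3):
    """Simulate: standard-ish tokenizer, lowercase, stop removal, shingles 1..max."""
    tokens = [t.lower() for t in re.findall(r"\w+", text)]
    tokens = [t for t in tokens if t not in STOPWORDS]
    shingles = []
    for size in range(1, max_shingle_size + 1):
        for i in range(len(tokens) - size + 1):
            shingles.append(" ".join(tokens[i:i + size]))
    return shingles

print(analyze("The red car drives."))
# ['red', 'car', 'drives', 'red car', 'car drives', 'red car drives']
```

Counting occurrences of each of these tokens gives exactly the per-term frequencies you listed in the question.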