Measuring semantic similarity between two phrases

Posted 2020-05-15 00:11

Question:

I want to measure semantic similarity between two phrases/sentences. Is there any framework that I can use directly and reliably?

I have already checked out this question, but it's pretty old and I couldn't find a really helpful answer there. There was one link, but I found it unreliable.

e.g.:
I have a phrase: felt crushed
I have several choices: force inwards, pulverized, destroyed emotionally, reshaping, etc.
I want to find the term/phrase with the highest similarity to the first one.
The answer here is: destroyed emotionally.

The bigger picture is: I want to identify which frame from FrameNet matches a given verb, based on its usage in a sentence.

Update: I found this library very useful for measuring similarity between two words, and this library for measuring semantic similarity between sentences. The ConceptNet similarity mechanism is also very good.
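
As a quick illustration of what word-level similarity looks like in practice (using NLTK's WordNet interface here as a stand-in, not necessarily the libraries linked above):

```python
# Minimal word-similarity sketch with NLTK's WordNet interface.
# One-time setup: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def word_similarity(w1, w2):
    """Best Wu-Palmer similarity over all synset pairs (0.0 if none)."""
    scores = [s1.wup_similarity(s2) or 0.0
              for s1 in wn.synsets(w1)
              for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

# Compare the scores for two candidate words against the query word.
print(word_similarity("crushed", "destroyed"))
print(word_similarity("crushed", "reshaping"))
```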

If anyone has any insights please share.

Answer 1:

This is a very complicated problem.

The main technique I can think of (before going into more complicated NLP processes) would be to apply cosine similarity (or any other vector-space metric) to each pair of phrases. On its own, though, this approach suffers badly from the vocabulary-mismatch problem: two phrases may refer to the same concept with entirely different words.
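
To see the problem concretely, here is a minimal bag-of-words cosine similarity sketch (no external libraries), applied to the question's own example:

```python
from collections import Counter
import math

def cosine_sim(phrase_a, phrase_b):
    """Cosine similarity between bag-of-words vectors of two phrases."""
    a = Counter(phrase_a.lower().split())
    b = Counter(phrase_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Vocabulary mismatch in action: no shared surface tokens, so the score
# is 0.0 even though the phrases are semantically close.
print(cosine_sim("felt crushed", "destroyed emotionally"))  # -> 0.0
```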

To solve this issue, you should transform the initial representation of each phrase into a more "conceptual" one. One option is to extend each word with its synonyms (e.g., using WordNet); another is to apply distributional semantics (DS) (http://liawww.epfl.ch/Publications/Archive/Besanconetal2001.pdf), which extends the representation of each term with the words most likely to appear alongside it.

Example: the representation of a document {"car", "race"} would be transformed into {"car", "automobile", "race"} using synonyms, while with DS it would be something like {"car", "wheel", "road", "pilot", ...}.

Obviously this transformation won't be binary: each added term will carry an associated weight.
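
A rough sketch of the synonym-expansion idea using NLTK's WordNet; the fixed 0.5 weight for added synonyms is a naive placeholder, not the DS weighting from the paper above:

```python
from nltk.corpus import wordnet as wn  # one-time: nltk.download('wordnet')

def expand(terms, syn_weight=0.5):
    """Expand a bag of terms with WordNet synonyms at a reduced weight."""
    vec = {t: 1.0 for t in terms}
    for t in terms:
        for syn in wn.synsets(t):
            for name in syn.lemma_names():
                name = name.lower().replace("_", " ")
                vec.setdefault(name, syn_weight)  # originals keep weight 1.0
    return vec

print(expand({"car", "race"}))  # includes e.g. "automobile", "auto" at 0.5
```

The expanded, weighted vectors can then be compared with the same cosine measure as before, and phrases like "felt crushed" and "destroyed emotionally" have a chance of overlapping through shared synonyms.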

I hope this helps.



Answer 2:

Maybe the cortical.io API could help with your problem. The approach is that every word is converted into a semantic fingerprint that characterizes its meaning with 16K semantic features. Phrases, sentences, or longer texts are converted into fingerprints by ORing the word fingerprints together. After this conversion into a binary vector representation, semantic distance can easily be computed using measures like Euclidean distance or cosine similarity. All the necessary conversion and comparison functions are provided by the API.
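
To make the OR-and-compare mechanics concrete, here is a sketch with made-up random binary "fingerprints". Only the vector mechanics carry over: the real cortical.io API supplies semantically meaningful fingerprints, and the function names and ~2% sparsity below are my assumptions.

```python
# Illustrative mechanics only: random stand-in fingerprints,
# NOT the actual cortical.io API.
import zlib
import numpy as np

N_BITS = 16_384    # fingerprints have ~16K positions
N_ACTIVE = 328     # assumed ~2% of bits set per word

def fake_fingerprint(word):
    """Stand-in word fingerprint: a sparse, deterministic binary vector."""
    rng = np.random.default_rng(zlib.crc32(word.encode()))
    vec = np.zeros(N_BITS, dtype=np.uint8)
    vec[rng.choice(N_BITS, size=N_ACTIVE, replace=False)] = 1
    return vec

def phrase_fingerprint(phrase):
    """OR the word fingerprints together, as described above."""
    return np.bitwise_or.reduce(
        [fake_fingerprint(w) for w in phrase.lower().split()])

def cosine(a, b):
    a, b = a.astype(float), b.astype(float)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(phrase_fingerprint("felt crushed"),
             phrase_fingerprint("destroyed emotionally")))
```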