Is there any way I can get the Universal Dependencies using Python or NLTK? I can only produce the parse tree.
Example:
Input sentence:
My dog also likes eating sausage.
Output:
Universal dependencies
nmod:poss(dog-2, My-1)
nsubj(likes-4, dog-2)
advmod(likes-4, also-3)
root(ROOT-0, likes-4)
xcomp(likes-4, eating-5)
dobj(eating-5, sausage-6)
Wordseer's stanford-corenlp-python fork is a good start, as it works with the recent CoreNLP release (3.5.2). However, it gives you raw output, which you will need to transform manually once you have the wrapper running.
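A minimal sketch of that transformation, assuming the raw output contains typed-dependency lines in the `rel(gov-i, dep-j)` format shown in the question (the `raw` string here is hand-copied example data, not actual wrapper output):

```python
import re

# Example raw dependency lines, as CoreNLP prints typed dependencies.
raw = """nmod:poss(dog-2, My-1)
nsubj(likes-4, dog-2)
advmod(likes-4, also-3)
root(ROOT-0, likes-4)
xcomp(likes-4, eating-5)
dobj(eating-5, sausage-6)"""

# Matches lines of the form rel(governor-index, dependent-index).
DEP_RE = re.compile(r'^(\S+)\((\S+)-(\d+), (\S+)-(\d+)\)$')

def parse_dependencies(text):
    """Turn textual dependency lines into (rel, (gov, i), (dep, j)) triples."""
    triples = []
    for line in text.splitlines():
        m = DEP_RE.match(line.strip())
        if m:
            rel, gov, gov_i, dep, dep_i = m.groups()
            triples.append((rel, (gov, int(gov_i)), (dep, int(dep_i))))
    return triples

for triple in parse_dependencies(raw):
    print(triple)
```

From these triples you can build whatever structure you need, e.g. a head-index lookup per token.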
If you want to work with the dependency parse in NLTK, you can reuse NLTK's DependencyGraph with a bit of effort.
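For instance, DependencyGraph accepts a Malt-TAB-style string with four whitespace-separated columns (word, POS tag, head index, relation). The tags and indices below are hand-written for the example sentence, a sketch assuming NLTK is installed:

```python
from nltk.parse import DependencyGraph

# Hand-built dependency annotation for the example sentence.
# Columns: word, POS tag, head index (0 = root), relation.
conll = """My\tPRP$\t2\tnmod:poss
dog\tNN\t4\tnsubj
also\tRB\t4\tadvmod
likes\tVBZ\t0\tROOT
eating\tVBG\t4\txcomp
sausage\tNN\t5\tdobj
.\t.\t4\tpunct
"""

dg = DependencyGraph(conll)

# triples() yields ((governor, tag), relation, (dependent, tag)) tuples.
for governor, rel, dependent in dg.triples():
    print(rel, governor, dependent)
```

Once the graph is built, you also get `dg.tree()` for a constituency-like view and per-node access via `dg.nodes`.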
Setting up CoreNLP is not that hard; see http://www.eecs.qmul.ac.uk/~dm303/stanford-dependency-parser-nltk-and-anaconda.html for more details.