Is it possible to use Stanford Parser in NLTK? (I am not talking about Stanford POS.)
Note that this answer applies to NLTK v 3.0, and not to more recent versions.
Sure, try the following in Python:
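For NLTK 3.0 the code looks roughly like this (a sketch: the jar folder and the model path are placeholders, substitute your own locations, and a working Java install is required):

```python
import os
from nltk.parse.stanford import StanfordParser

# Placeholder paths -- point these at the folder holding the two jars.
os.environ['STANFORD_PARSER'] = '/path/to/jars'
os.environ['STANFORD_MODELS'] = '/path/to/jars'

# model_path is the full path to the extracted englishPCFG.ser.gz file.
parser = StanfordParser(model_path="/location/of/englishPCFG.ser.gz")

sentences = parser.raw_parse_sents(("Hello, My name is Melroy.", "What is your name?"))
for parse in sentences:
    for tree in parse:
        print(tree)
```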
The output is a list of parse trees (nltk.tree.Tree objects), one per input sentence.
Note 1: In this example, both the parser and model jars are in the same folder.
Note 3: The englishPCFG.ser.gz file can be found inside models.jar (/edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz). Use an archive manager to 'unzip' the models.jar file.
Note 4: Be sure you are running Java 1.8 (JRE 8). Otherwise you will get: Unsupported major.minor version 52.0.
Installation
Download NLTK v3 from https://github.com/nltk/nltk and install it:
sudo python setup.py install
You can use the NLTK downloader to get Stanford Parser, using Python:
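In older NLTK releases this went through the interactive downloader (a sketch; the exact package listing depends on your NLTK version):

```python
import nltk

# Opens the NLTK downloader UI; pick the Stanford Parser package
# from the listing (availability varies by NLTK version).
nltk.download()
```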
Try my example! (Don't forget to change the jar paths and the model path to the ser.gz location.)
OR:
Download and install NLTK v3, same as above.
Download the latest version from (current version filename is stanford-parser-full-2015-01-29.zip): http://nlp.stanford.edu/software/lex-parser.shtml#Download
Extract the stanford-parser-full-20xx-xx-xx.zip.
Create a new folder ('jars' in my example) and place the extracted files stanford-parser-3.x.x-models.jar and stanford-parser.jar into it.
As shown above, you can use the environment variables (STANFORD_PARSER and STANFORD_MODELS) to point to this 'jars' folder. I'm using Linux; on Windows use a path like C:\folder\jars.
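Setting those variables from Python is a one-liner each (the 'jars' folder path below is an assumption, use your own location):

```python
import os

# Hypothetical location of the 'jars' folder -- adjust to yours.
jars = '/home/me/jars'  # on Windows e.g. r'C:\folder\jars'
os.environ['STANFORD_PARSER'] = jars
os.environ['STANFORD_MODELS'] = jars
```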
Open the stanford-parser-3.x.x-models.jar using an archive manager (e.g. 7-Zip).
Browse inside the jar file to edu/stanford/nlp/models/lexparser and extract the file called 'englishPCFG.ser.gz'. Remember the location where you extract this ser.gz file.
When creating a StanfordParser instance, you can provide the model path as parameter. This is the complete path to the model, in our case /location/of/englishPCFG.ser.gz.
Try my example! (Don't forget to change the jar paths and the model path to the ser.gz location.)
The Stanford CoreNLP software page has a list of Python wrappers:
http://nlp.stanford.edu/software/corenlp.shtml#Extensions
You can use the Stanford Parser's output to create a Tree in NLTK (nltk.tree.Tree).
Assuming the Stanford Parser gives you a file containing exactly one parse tree per sentence, this approach works, though it might not look very Pythonic.
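A sketch of that idea using nltk.tree.Tree.fromstring (the file path is a placeholder; each line of the file is assumed to hold one bracketed parse):

```python
from nltk.tree import Tree

def read_parses(path):
    # One bracketed Stanford-style parse per line -> list of nltk Trees.
    with open(path) as f:
        return [Tree.fromstring(line) for line in f if line.strip()]

# The same call works on an in-memory string:
t = Tree.fromstring("(ROOT (S (NP (PRP I)) (VP (VBP like) (NP (NN pizza)))))")
print(t.label())   # ROOT
print(t.leaves())  # ['I', 'like', 'pizza']
```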
It took me many hours, but I finally found a simple solution for Windows users. Basically it's a summarized version of an existing answer by alvas, made easy to follow (hopefully) for those who are new to Stanford NLP and are Windows users.
1) Download the module you want to use, such as NER, POS, etc. In my case I wanted to use NER, so I downloaded the module from http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
2) Unzip the file.
3) Set the environment variables (CLASSPATH and STANFORD_MODELS) to point into the unzipped folder.
4) Set the JAVAHOME environment variable to point to where you have Java installed.
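Steps 3 and 4 can also be done from inside Python before importing the tagger (all paths below are assumptions, substitute your own install locations):

```python
import os

# Hypothetical Windows paths -- adjust to where you unzipped Stanford NER
# and to where your Java executable actually lives.
os.environ['CLASSPATH'] = r'C:\stanford-ner-2015-04-20\stanford-ner.jar'
os.environ['STANFORD_MODELS'] = r'C:\stanford-ner-2015-04-20\classifiers'
os.environ['JAVAHOME'] = r'C:\Program Files\Java\jdk1.8.0_45\bin\java.exe'
```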
5) Import the module you want to use.
6) Load the pretrained model, which is in the 'classifiers' folder inside the unzipped directory. Add ".gz" at the end for the file extension. The model I wanted to use was english.all.3class.distsim.crf.ser.gz
7) Now execute the parser, and we are done!
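Putting steps 5 to 7 together, a sketch for NLTK 3.0 (the class name and file names are assumptions based on the NER download above; this needs the environment variables from steps 3 and 4 plus a working Java install):

```python
# NLTK 3.0 exposes the NER wrapper in nltk.tag.stanford;
# later NLTK versions renamed the class to StanfordNERTagger.
from nltk.tag.stanford import NERTagger

# Model file from the 'classifiers' folder, plus the Stanford NER jar.
st = NERTagger('english.all.3class.distsim.crf.ser.gz', 'stanford-ner.jar')
print(st.tag('Rami Eid is studying at Stony Brook University in NY'.split()))
```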
Note that this answer applies to NLTK v 3.0, and not to more recent versions.
Here is the Windows version of alvas's answer.
NOTES:
In lexparser.bat you need to change all the paths into absolute paths to avoid Java errors such as "class not found".
I strongly recommend this method on Windows, since I tried several of the answers on this page and all the methods connecting Python with Java failed.
I would like to hear from you if you succeed on Windows, and I hope you can tell me how you overcame these problems.
Search for "python wrapper for stanford corenlp" to get the Python version.
There is also a Python interface for the Stanford Parser:
http://projects.csail.mit.edu/spatial/Stanford_Parser