I've trained a POS tagger and a neural dependency parser with Stanford CoreNLP. I can get them to work via the command line, and now I'd like to access them via a server.
However, the documentation for the server doesn't say anything about using custom models. I checked the code and didn't find any obvious way of supplying a configuration file.
Any idea how to do this? I don't need all annotators, just the ones I trained.
Yes, the server should (in theory) support all the functionality of the regular pipeline. The properties GET parameter is translated into the Properties object you would normally pass into StanfordCoreNLP. Therefore, if you'd like the server to load a custom model, you can just call it via, e.g.:
wget \
--post-data 'the quick brown fox jumped over the lazy dog' \
'localhost:9000/?properties={"parse.model": "/path/to/model/on/server/computer", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}' -O -
Note that the server won't garbage-collect this model afterwards, though, so if you load too many different models there's a good chance you'll run into out-of-memory errors.
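If you do need several models resident at once, the usual workaround is simply to give the JVM a larger heap when starting the server; the 8g below is an arbitrary example, so size it to your models:

java -mx8g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000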