I have an instance running on Compute Engine which uses Torch to predict objects in images. I want to make a simple web interface through which a user can upload an image; the image is sent to the server (the Compute Engine instance), the objects are predicted, and the list is returned to the user.
On my Compute Engine instance (Ubuntu 14.04), this command is used to predict objects in images. (All the other setup has already been done on the instance.)
th eval.lua -model /path/to/model -image_folder /path/to/image/directory -num_images 10
I want to invoke this command from the web app, save the uploaded image into the image folder, and get back the list of objects. How do I go about it?
In past projects I have discussed and used different approaches to communicate between Google App Engine and Google Compute Engine. Generally speaking, the two usual suspects are:
- Orchestration from App Engine: In this approach the App Engine application is the active part and sends requests to a service on the compute instance. This is what Igor Artamonov already described in his comment. We used a Tomcat instance on the compute instances, which ran a full REST API to invoke commands on the instance. Possible helpers:
- When using the Google Compute API from App Engine, you can get the external IP address of a compute instance, so you know where your requests have to go.
- Polling from the compute instance: Since you know the app id of your App Engine application, you could code a simple loop on your compute instances that requests new jobs from the App Engine application. I have used this approach in combination with an orchestration that sends a shutdown command to instances that are no longer required, thereby reducing the polling load on App Engine. If new jobs were created, I would start a new compute instance, which would then poll until it received a shutdown command again.
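For the orchestration approach in your case, the service on the compute instance only needs to accept an uploaded image, drop it into a folder, and shell out to the Torch command from your question. A minimal Python sketch (the model path, the one-image-per-request flow, and returning raw stdout are assumptions; wire the `predict` function into whatever web framework you expose):

```python
# Sketch of the predictor side of the "orchestration" approach:
# the web endpoint hands predict() the uploaded bytes, and it runs
# the th eval.lua command from the question against a fresh folder.
import os
import subprocess
import tempfile

MODEL_PATH = "/path/to/model"  # placeholder path from the question


def build_command(model_path, image_dir, num_images=10):
    """Assemble the Torch invocation from the question as an argv list."""
    return ["th", "eval.lua",
            "-model", model_path,
            "-image_folder", image_dir,
            "-num_images", str(num_images)]


def predict(image_bytes, filename="upload.jpg"):
    """Save one uploaded image into its own folder and run the predictor.

    Returns eval.lua's stdout, which the web endpoint can relay back
    to the user (parsing it into a proper list is left to you).
    """
    image_dir = tempfile.mkdtemp(prefix="upload_")
    with open(os.path.join(image_dir, filename), "wb") as f:
        f.write(image_bytes)
    result = subprocess.run(build_command(MODEL_PATH, image_dir, num_images=1),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True, check=True)
    return result.stdout
```

Using a fresh temp folder per request keeps concurrent uploads from mixing, since `eval.lua` reads a whole directory.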
Both approaches work well. If you use the Compute API and know the IPs of your compute instances, you can restrict your polling endpoints and command-invocation requests to these IPs for basic security.
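The polling loop on the instance side can be sketched like this (the `/jobs` endpoint, the JSON shape, the app URL, and the `process` stub are all hypothetical; your App Engine handler defines the real contract):

```python
# Sketch of the polling approach: the compute instance repeatedly asks the
# App Engine app for work and exits when it receives a shutdown command.
import json
import time
import urllib.request

APP_URL = "https://your-app-id.appspot.com"  # hypothetical app id


def next_action(payload):
    """Decide what to do with one polling response.

    Returns 'shutdown', 'work', or 'idle'.
    """
    if payload.get("command") == "shutdown":
        return "shutdown"
    return "work" if payload.get("job") else "idle"


def process(job):
    """Placeholder: fetch the image, run th eval.lua, POST results back."""
    pass


def poll_loop(interval=30):
    while True:
        with urllib.request.urlopen(APP_URL + "/jobs") as resp:
            payload = json.loads(resp.read().decode("utf-8"))
        action = next_action(payload)
        if action == "shutdown":
            break                    # orchestrator says this instance is done
        if action == "work":
            process(payload["job"])  # handle the job, then poll again
        else:
            time.sleep(interval)     # back off when there is no work
```

The shutdown branch is what keeps the polling load bounded: once an instance is told to stop, it no longer hits the App Engine endpoint at all.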
I would try to avoid too much polling, though. Well, let me give you a quote:
Actively polling is the poor man's solution to kicking off a workflow process. (javaworld.com)
But if you shut down your compute instances when they are finished with their workload, I don't see a good reason why you shouldn't use polling. If you don't, and you scale up to a couple of compute instances, you will put load on your App Engine application without achieving anything but cost.