I'm working on a webapp to teach programming concepts. Webpages have some text about a programming concept, then let the user type JavaScript code into a text editor window to try to answer a programming problem. When the user clicks "submit", I analyse the text they've typed to see if they have solved the problem. For example, I ask them to "write a function named f that adds three to its argument".
Here's what I'm doing to analyse the user's text:
- Run JSLint on the text with strict settings, in particular without assuming browser or console functions.
- If there are any errors, show the errors and stop.
- eval(usertext);
- Loop through the conditions for passing the assignment, eval(condition). An example condition is "f(1)===4". Conditions come from a trusted source.
- Show passing/failing conditions (a rough sketch of this flow is just below).
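Roughly, the checking step looks like this (a simplified sketch; usertext is the editor contents and conditions is the list supplied by the server):

```
// Simplified sketch of the checking step described above.
var results = [];
try {
  eval(usertext);                        // e.g. defines f in this scope
  conditions.forEach(function (condition) {
    var passed = false;
    try {
      passed = eval(condition);          // e.g. "f(1)===4"
    } catch (e) { /* a throwing condition counts as a fail */ }
    results.push({ condition: condition, passed: passed });
  });
} catch (e) {
  // show the runtime error from the user's code and stop
}
```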
My questions: is this good enough to prevent security problems? What else can I do to be paranoid? Is there a better way to do what I want?
In case it is relevant: my application is on Google App Engine with a Python backend, uses jQuery, and has individual user accounts.
So, from what I can tell, if you are eval'ing a user's input only for them, this isn't a security problem. You only have a problem if their input is eval'd for other users.
Eval'ing a user's input is no worse than them viewing source, looking at HTTP headers, using Firebug to inspect JavaScript objects, etc. They already have access to everything.
That being said, if you do need to secure their code, check out Google Caja: http://code.google.com/p/google-caja/
This is a trick question. There is no secure way to eval() a user's code on your website.
It can't be done. Browsers offer no API to web pages to restrict what sort of code can be executed within a given context.
However, that might not matter. If you don't use any cookies whatsoever on your website, then executing arbitrary JavaScript may not be a problem. After all, if there is no concept of authentication, then there's no problem with forging requests. Additionally, if you can confirm that the user meant to execute the script he/she sent, then you should also be protected from attackers, e.g. if you only run script typed onto the page and never script submitted via GET or POST data, or if you include some kind of unique token with those requests to confirm that the request originated with your website.
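For instance (illustrative only; the element IDs, the runChecks helper and sessionToken are made up), the submit handler would read the code only from the on-page editor and tag any server round trip with a token embedded when the page was served:

```
// Illustrative sketch: usertext comes from the editor element on this page,
// never from location.search or posted form data, and the grading result is
// reported back with a per-session token embedded in the served page.
$('#submit').on('click', function () {
  var usertext = $('#editor').val();        // typed by this user, on this page
  var results = runChecks(usertext);        // the eval/condition loop from the question

  $.post('/record-result', {
    token: window.sessionToken,             // unique token issued with the page
    results: JSON.stringify(results)
  });
});
```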
Still, the answer to the core question is that it pretty much can't be done, and that user input can never be trusted. Sorry :/
It's not clear whether the eval() occurs on the client or the server side. For the client side:
I think it's possible to eval safely in a well-configured sandboxed iframe (https://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/).
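A minimal sketch of that approach (the element creation, message shapes and test code are illustrative): with sandbox="allow-scripts" and no allow-same-origin, the srcdoc document gets an opaque origin, so the evaluated code cannot read cookies or touch the parent page.

```
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts');
frame.style.display = 'none';
frame.srcdoc =
  '<script>' +
  'window.addEventListener("message", function (e) {' +
  '  var passed = false;' +
  '  try { eval(e.data.usertext); passed = eval(e.data.condition); } catch (err) {}' +
  '  e.source.postMessage({ passed: passed }, "*");' +
  '});' +
  '<\/script>';
document.body.appendChild(frame);

// Parent page: send the code plus one condition, listen for the verdict.
window.addEventListener('message', function (e) {
  console.log('condition passed:', e.data.passed);
});
frame.addEventListener('load', function () {
  frame.contentWindow.postMessage(
    { usertext: 'function f(x) { return x + 3; }', condition: 'f(1)===4' }, '*');
});
```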
This should be 100% safe, but needs a couple of libraries and has some limitations (no ES6 support): https://github.com/NeilFraser/JS-Interpreter
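From memory of the JS-Interpreter README, usage looks roughly like this (after loading acorn.js and interpreter.js from the project):

```
// Rough sketch: the code runs in an interpreter written in JavaScript, with no
// access to the page, the DOM or cookies unless you explicitly expose an API.
var code = 'function f(x) { return x + 3; } f(1) === 4;';
var myInterpreter = new Interpreter(code);
myInterpreter.run();                 // run to completion (can also be stepped)
console.log(myInterpreter.value);    // value of the last expression, i.e. true
```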
There are lighter alternatives that are not 100% safe, like https://github.com/commenthol/safer-eval.
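From memory of the safer-eval README (treat the exact signature as an assumption), it evaluates an expression string in a restricted context, so the user's code and a condition could be combined into one expression:

```
// Sketch from memory of safer-eval; the signature may differ between versions.
const saferEval = require('safer-eval');

// Wrap the user's code and a condition in an IIFE so it is a single expression.
const passed = saferEval(
  '(function () { function f(x) { return x + 3; } return f(1) === 4; })()');
console.log(passed); // true
```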
Alternatively, I think something similar can be implemented manually by wrapping the code in a with statement and overriding this, the globals and arguments. Although it will never be 100% safe, it may be viable in your case.
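A bare-bones sketch of that idea (the runInSandbox helper is made up): a Proxy whose has trap claims every name makes the with block shadow the real globals, though escapes via constructors and similar tricks remain possible.

```
// Illustrative only, and not 100% safe (e.g. ({}).constructor.constructor can
// still reach Function). Every free identifier inside the with block resolves
// against the proxy, so window, document, etc. appear undefined.
function runInSandbox(code, condition, sandbox) {
  var proxy = new Proxy(sandbox, {
    has: function () { return true; },               // claim every name
    get: function (target, key) {
      if (key === Symbol.unscopables) return undefined;
      return target[key];
    }
  });
  var fn = new Function('proxyScope',
    'with (proxyScope) {\n' + code + '\n;return (' + condition + ');\n}');
  return fn(proxy);
}

console.log(runInSandbox('function f(x) { return x + 3; }', 'f(1)===4', {})); // true
console.log(runInSandbox('', 'typeof window', {}));                           // "undefined"
```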