In many of the Python applications I've written, I use simple modules containing nothing but constants as config files. And because the config file is actually Python code, I can add simple logic to change variables depending on a debug level, etc.
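As a minimal sketch of the approach described above (all names here, like `DEBUG_LEVEL` and `config.py`, are illustrative):

```python
# config.py -- an ordinary Python module used as a config file.
# Because it's real Python, settings can be derived from other settings.
DEBUG_LEVEL = 2

DATABASE_URL = "sqlite:///app.db"
LOG_FILE = "app.log"

if DEBUG_LEVEL >= 2:
    LOG_VERBOSITY = "debug"
    CACHE_ENABLED = False
else:
    LOG_VERBOSITY = "warning"
    CACHE_ENABLED = True
```

The application then just does `import config` and reads `config.DATABASE_URL`, etc. The catch, as noted below, is that importing the file executes whatever code it contains.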
While this works great for internal applications, I'd be wary of releasing such applications into the wild for fear of someone, either accidentally or maliciously, adding destructive code to the file. The same holds true for using Python as an embedded scripting language.
Is there a subset of Python that is deemed "safe" for embedding? I realize that how safe it can be considered is fairly subjective. However, Java applets and Flash both have well-defined security sandboxes. Is there a version of Python with similar rules?
EDIT: I'm asking not so much because of the config file approach, but because I'm interested in adding a scripting/plugin mechanism to a newer app, and I don't want a plugin or script to be able to, say, delete files. That goes beyond the scope of what the application should be able to do.
tinypy (tinypy.org) was made to be a small, embeddable Python subset written in the style of Lua. And since Lua has a way to create a sandbox, I suspect tinypy could be hacked along the same lines. Because tinypy's code base is so small, it's pretty easy to learn and to figure out how to change things around to fit your needs.
It's a little hard to understand what you're trying to do -- not enough details.
Are you hosting the native app and allowing users to write plugins? If so, consider an OS-level solution: run the Python code as a separate process inside a jail/chroot/similar, and communicate with it over sockets.
Are you expecting your customers to host the native app and let "untrusted parties" write plugins? Is there a reason the solution above won't work? (E.g., the customer would like to deploy on weird OSs without such options...)
Are you expecting the same people to host the native app and the "untrusted script" and want to protect them from themselves? In the sense of protecting them from writing "os.remove" and having it do what they wrote? Can you explain why?
Note that sandboxing alone is often not enough without stricter constraints (maximum CPU cycles, maximum memory, memory ownership issues...). What kind of maliciousness do you want to stop? Note that here, too, OSs have wonderful capabilities (priorities, killing processes, ulimits) that not all sandboxing environments replicate -- and which are certainly less security-tested than the mechanisms in OSs. (I'd trust Linux not to have breakable ulimits before I'd trust PyPy not to enable a malicious coder to take up unbounded amounts of memory, simply because Linux has been attacked in the wild more.)
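To illustrate the OS-level route suggested above, here is a rough sketch of running a plugin as a separate process with `ulimit`-style constraints applied via the stdlib `resource` module (Unix-only; the function names and the specific limits are illustrative, and a real deployment would add the jail/chroot and socket layer on top):

```python
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child just before the plugin starts:
    # cap CPU time at 5 seconds and address space at 512 MB.
    # The numbers are illustrative, not recommendations.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

def run_plugin(script_path):
    # The untrusted script runs in its own interpreter process, so the
    # OS (not the host app) enforces the limits and can kill it.
    return subprocess.run(
        [sys.executable, script_path],
        preexec_fn=limit_resources,
        capture_output=True,
        timeout=10,
    )
```

A runaway plugin then gets killed by the kernel (CPU limit) or fails its allocations (memory limit) instead of taking the host application down with it.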
For some discussion of the issues previously met with the rexec module, see the Restricted Execution HOWTO.
I don't know exactly what security capabilities you get within the Java Virtual Machine or the .NET runtime, but you might want to consider whether running your Python code under Jython or IronPython would give you added security.
No, there is no production-ready subset of Python that is "safe". Python has had a few sandbox modules, which were deprecated due to deficiencies.
Your best bet is either to create your own parser, or to isolate the Python process with syscall hooks and a jailed account.
Some people might point you to PyPy, but it is slow and unfinished.
This sounds like what you want: Reviving Python restricted mode.
The Python interpreter has a built-in "restricted" mode, enabled by changing the __builtins__ magic variable. The article Paving the Way to Securing the Python Interpreter explains the trick in more detail. Note that to work completely, it needs a patch to the Python interpreter; I do not know whether that patch has been applied. For a pure-Python proof of concept, see his previous post, A Challenge To Break Python Security.
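The core of the __builtins__ trick can be sketched as follows. To be clear, this alone is not a real sandbox; the linked posts discuss escapes that the interpreter patch is meant to close, and the names `SAFE_BUILTINS` and `run_restricted` are illustrative:

```python
# Execute untrusted source with a whitelist of builtins instead of the
# full builtins module. Anything not whitelisted raises NameError.
SAFE_BUILTINS = {"len": len, "range": range, "print": print}

def run_restricted(source):
    env = {"__builtins__": SAFE_BUILTINS}
    exec(source, env)
    return env

env = run_restricted("total = len(range(10))")  # works: len/range whitelisted
try:
    run_restricted("open('secrets.txt')")       # open is not whitelisted
except NameError:
    print("file access blocked")
```

Historically, CPython also switched into its (long-removed) restricted execution mode when __builtins__ was replaced this way, which is what the linked articles build on.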