So far in my research, I have seen that it is unwise to set AllowUnsafeUpdates during a GET request, in order to avoid cross-site scripting. But if it is required to allow this, what is the proper way to handle the situation and mitigate any exposure?
Here is my best first guess at a reliable pattern, if you absolutely need to allow web or site updates on a GET request.
Best Practice?
protected override void OnLoad(System.EventArgs e)
{
    base.OnLoad(e);

    if (Request.HttpMethod == "POST")
    {
        SPUtility.ValidateFormDigest();
        // ValidateFormDigest() will automatically set AllowUnsafeUpdates to true
    }
    // If this is not a POST, then AllowUnsafeUpdates should be used only
    // at the point of update and reset immediately after the update finishes.
    // NOTE: Is this true? How is cross-site scripting used on GET
    // and what mitigates the vulnerability?
}
// Point of item update
using (SPSite site = new SPSite(SPContext.Current.Site.Url, SPContext.Current.Site.SystemAccount.UserToken))
{
    using (SPWeb web = site.RootWeb)
    {
        bool allowUpdates = web.AllowUnsafeUpdates; // store the original value
        web.AllowUnsafeUpdates = true;

        // ... do something and call Update() ...

        web.AllowUnsafeUpdates = allowUpdates;      // restore the original value
    }
}
Feedback on the best pattern is appreciated.
If you're performing any operation that modifies something, then anyone who can convince the user to click a link can perform that operation. For instance, assume you have a GET request to a page which lets the user add an administrator to a site, and the user clicks a link to a page which does a Response.Redirect("http://yourserver/_layouts/admin.aspx?operation=addAdministrator&username=attackerNameHere").
While a POST does not normally offer much protection against this (nothing stops someone from hosting a <form method="post" action="http://yourserver/_layouts/admin.aspx">), SharePoint has the concept of form digests, which contain information about the previous request that generated the post back (including the user's name). This significantly reduces the surface for this kind of attack.
The only time it is not a security issue to use AllowUnsafeUpdates on a GET is when you're not taking input from the user. For instance, if you have a web part that simply logs visits to a list, then no security vulnerability is exposed.
Edit: If you are going to use AllowUnsafeUpdates, there's no need to reset it to its previous value. It does not get persisted; it's just something you need to set on an SPWeb object before performing updates from a GET (or in other such cases).
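As an illustration, a visit-logging web part might look roughly like this; it is only a sketch, and the "Visits" list name is a placeholder:
using System;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;

public class VisitLoggerWebPart : WebPart
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // No user input is involved here, so enabling unsafe updates on a GET is acceptable,
        // and since the flag is not persisted there is no need to reset it.
        SPWeb web = SPContext.Current.Web;
        web.AllowUnsafeUpdates = true;

        SPListItem item = web.Lists["Visits"].Items.Add(); // "Visits" is a placeholder list
        item["Title"] = web.CurrentUser.Name;
        item.Update();
    }
}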
Another clean way to implement this is to use a combination of extension methods and anonymous delegates, like so:
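A minimal sketch of such an extension method (the RunWithUnsafeUpdates name is illustrative):
using System;
using Microsoft.SharePoint;

public static class SPWebExtensions
{
    // Runs the supplied delegate with AllowUnsafeUpdates temporarily enabled on this web.
    public static void RunWithUnsafeUpdates(this SPWeb web, Action action)
    {
        bool originalValue = web.AllowUnsafeUpdates;
        try
        {
            web.AllowUnsafeUpdates = true;
            action();
        }
        finally
        {
            // Restore the original setting even if the delegate throws.
            web.AllowUnsafeUpdates = originalValue;
        }
    }
}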
Using the above extension method, you can then perform your "unsafe update" action as follows:
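For example (the "Announcements" list is just a placeholder):
using (SPSite site = new SPSite(SPContext.Current.Site.ID))
using (SPWeb web = site.OpenWeb())
{
    web.RunWithUnsafeUpdates(delegate()
    {
        SPListItem item = web.Lists["Announcements"].Items.Add();
        item["Title"] = "Added during a GET request";
        item.Update();
    });
}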
I would slightly modify Trent's delegate to accept the web to update:
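Something along these lines (again, the names are illustrative):
using System;
using Microsoft.SharePoint;

public static class SPWebExtensions
{
    // The delegate now receives the SPWeb it should update.
    public static void RunWithUnsafeUpdates(this SPWeb web, Action<SPWeb> action)
    {
        bool originalValue = web.AllowUnsafeUpdates;
        try
        {
            web.AllowUnsafeUpdates = true;
            action(web);
        }
        finally
        {
            web.AllowUnsafeUpdates = originalValue;
        }
    }
}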
And then extend HttpContext to encapsulate verification of the form digest, with an option to elevate using the technique described here:
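A sketch of what that could look like; RunSharePointUpdate and the elevation handling are my guess at the shape, not necessarily the original code:
using System;
using System.Web;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

public static class HttpContextExtensions
{
    public static void RunSharePointUpdate(this HttpContext context, bool elevate, Action<SPWeb> action)
    {
        // On a POST, validate the form digest rather than falling back on AllowUnsafeUpdates.
        if (string.Equals(context.Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
        {
            SPUtility.ValidateFormDigest();
        }

        if (elevate)
        {
            // Re-open the site and web under the application pool account.
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                using (SPSite site = new SPSite(SPContext.Current.Site.ID))
                using (SPWeb web = site.OpenWeb(SPContext.Current.Web.ID))
                {
                    web.RunWithUnsafeUpdates(action);
                }
            });
        }
        else
        {
            SPContext.Current.Web.RunWithUnsafeUpdates(action);
        }
    }
}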
Usage:
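Hypothetical usage from within a page or web part (the "Visitor Log" list name is a placeholder):
HttpContext.Current.RunSharePointUpdate(true, delegate(SPWeb web)
{
    SPListItem item = web.Lists["Visitor Log"].Items.Add();
    item["Title"] = SPContext.Current.Web.CurrentUser.Name;
    item.Update();
});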
I use a wrapper class for handling most manipulation of SPWeb objects. It helps me remember to dispose of the web, and it eases the problems around the AllowUnsafeUpdates setting. It is a bit bloated, since I have patched on new constructors and members over time; but then again, so is the SPWeb class.
Usage:
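For instance (WebWrapper is an illustrative name, not the actual class):
using (WebWrapper wrapper = new WebWrapper(SPContext.Current.Site.Url))
{
    wrapper.AllowUnsafeUpdates();

    SPListItem item = wrapper.Web.Lists["Tasks"].Items.Add(); // "Tasks" is a placeholder list
    item["Title"] = "Created via the wrapper";
    item.Update();
}   // Dispose restores AllowUnsafeUpdates and closes the web and site.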
The class definition:
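A trimmed-down sketch of such a wrapper (the real class is described as having more constructors and members):
using System;
using Microsoft.SharePoint;

public class WebWrapper : IDisposable
{
    private readonly SPSite site;
    private readonly SPWeb web;
    private bool originalAllowUnsafeUpdates;
    private bool unsafeUpdatesChanged;

    public WebWrapper(string url)
    {
        site = new SPSite(url);
        web = site.OpenWeb();
    }

    public SPWeb Web
    {
        get { return web; }
    }

    // Enables unsafe updates, remembering the original value for Dispose().
    public void AllowUnsafeUpdates()
    {
        if (!unsafeUpdatesChanged)
        {
            originalAllowUnsafeUpdates = web.AllowUnsafeUpdates;
            web.AllowUnsafeUpdates = true;
            unsafeUpdatesChanged = true;
        }
    }

    public void Dispose()
    {
        if (unsafeUpdatesChanged)
        {
            web.AllowUnsafeUpdates = originalAllowUnsafeUpdates;
        }

        if (web != null)
        {
            web.Dispose();
        }

        if (site != null)
        {
            site.Dispose();
        }
    }
}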
For AllowUnsafeUpdates, I follow this process:
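Roughly like this (the "Page Hits" list is just a placeholder):
SPWeb web = SPContext.Current.Web;

// Enable unsafe updates only around the minimum amount of code.
web.AllowUnsafeUpdates = true;

SPListItem item = web.Lists["Page Hits"].Items.Add();
item["Title"] = DateTime.Now.ToString();
item.Update();

// Just turn it off again afterwards.
web.AllowUnsafeUpdates = false;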
I'm not sure it is worth remembering the previous value of AllowUnsafeUpdates. I would wrap the call around the minimum possible amount of code, so that nested calls to it do not occur, and then just set it back to false afterwards.