Do the access levels and modifiers (private, sealed, etc.) make your code more secure?

Published: 2019-02-16 12:55

Question:

I've seen that you can manipulate private and internal members using reflection. I've also seen it said that a 'sealed' class is more secure than one that isn't.

Are the modifiers "public, protected, internal, private, abstract, sealed, readonly" anything more than a gentleman's agreement about design and API use, one that can be broken as long as you have access to reflection? And if a hacker is already running code that calls your API, the game is already lost, right?

Is the following any more secure than any other class?

// sealed class with private members
sealed class User
{
    private string _secret = "shazam";
    public readonly decimal YourSalary;
    public string YourOffice { get; }

    private void DoPrivilegedAction()
    {
    }
}
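
For example, here is a rough sketch of what I mean (assuming full trust, and that this code is compiled into the same assembly as User, which has no access modifier and is therefore internal):

using System;
using System.Reflection;

class ReflectionPeek
{
    static void Main()
    {
        var user = new User();

        // Look up the private instance field by name and read it directly.
        FieldInfo secret = typeof(User).GetField("_secret",
            BindingFlags.Instance | BindingFlags.NonPublic);

        Console.WriteLine(secret.GetValue(user));   // prints "shazam"
    }
}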

Answer 1:

First, to answer your question: The security system is designed to protect GOOD USERS from BAD CODE; it is explicitly not designed to protect GOOD CODE from BAD USERS. Your access restrictions mitigate attacks on your users by partially trusted hostile code. They do not mitigate attacks on your code from hostile users. If the threat is hostile users getting your code, then you have a big problem. The security system does not mitigate that threat at all.

Second, to address some of the previous answers: understanding the full relationship between reflection and security requires careful attention to detail and a good understanding of the details of the CAS system. The previously posted answers which state that there is no connection between security and access because of reflection are misleading and wrong.

Yes, reflection allows you to override "visibility" restrictions (sometimes). That does not imply that there is no connection between access and security. The connection is that the right to use reflection to override access restrictions is deeply connected to the CAS system in multiple ways.

First off, in order to do so arbitrarily, code must be granted private reflection permission by the CAS system. This is typically only granted to fully trusted code, which, after all, could already do anything.
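
(As a concrete aside, and only a sketch against the legacy .NET Framework CAS model: the permission in question is ReflectionPermission with the MemberAccess flag. Fully trusted code has it implicitly; partially trusted code typically does not.)

using System.Security.Permissions;

static class PrivateReflectionGate
{
    // Demand() walks the call stack and throws a SecurityException
    // if any caller has not been granted the permission.
    public static void RequirePrivateReflection()
    {
        new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Demand();
    }
}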

Second, in the new .NET security model, if assembly A is granted a superset of the grant set of assembly B by the CAS system, then code in assembly A is allowed to use reflection to observe B's internals.

Third, things get really quite complicated when you throw dynamically generated code into the mix. An explanation of how "Skip Visibility" vs "Restricted Skip Visibility" works, and how they change the interactions between reflection, access control, and the security system in scenarios where code is being spit at runtime, would take me more time and space than I have available. See Shawn Farkas's blog if you need details.
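
(To give just a flavour of it, here is a minimal sketch, assuming the User class from the question is compiled into the same assembly; this is no substitute for the full treatment. A DynamicMethod created with restrictedSkipVisibility can emit IL that reads state the emitting code could not even name in ordinary C#.)

using System;
using System.Reflection;
using System.Reflection.Emit;

static class SkipVisibilityDemo
{
    static void Main()
    {
        FieldInfo secret = typeof(User).GetField("_secret",
            BindingFlags.Instance | BindingFlags.NonPublic);

        // restrictedSkipVisibility: the generated IL may bypass visibility
        // checks, but only for members the emitting call stack could already
        // reach under its own grant set.
        var dm = new DynamicMethod(
            "ReadSecret",
            typeof(string),
            new[] { typeof(User) },
            restrictedSkipVisibility: true);

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);        // load the User argument
        il.Emit(OpCodes.Ldfld, secret);  // read its private field
        il.Emit(OpCodes.Ret);

        var read = (Func<User, string>)dm.CreateDelegate(typeof(Func<User, string>));
        Console.WriteLine(read(new User()));   // prints "shazam"
    }
}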



Answer 2:

Access modifiers aren't about security, but about good design. Proper access levels for classes and methods drive and enforce good design principles. Ideally, reflection should only be used when the utility it provides outweighs the cost (if any) of violating best design practices. Sealing a class serves only to prevent developers from extending your class and "breaking" its functionality. There are different opinions on the utility of sealing classes, but since I do TDD and it's hard to mock a sealed class, I avoid it as much as possible.

If you want security, you need to follow coding practices that prevent the bad guys from getting in and/or protect confidential information from inspection even if a break-in occurs. Intrusion prevention, intrusion detection, encryption, auditing, etc. are some of the tools that you need to employ to secure your application. Setting up restrictive access modifiers and sealing classes has little to do with application security, IMO.



Answer 3:

No. These have nothing to do with security. Reflection breaks them all.



Answer 4:

Regarding the comments about reflection and security - consider that there are many internal types + members in mscorlib.dll that call into native Windows functions and can potentially lead to badness if a malicious application uses reflection to call into them. This isn't necessarily a problem since untrusted applications normally aren't granted these permissions by the runtime. This (and a few declarative security checks) is how the mscorlib.dll binary can expose its types to all manner of trusted and untrusted code, yet the untrusted code can't get around the public API.
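
A rough sketch of what one of those declarative checks looks like (the type and method names here are made up; the attribute is the .NET Framework CAS mechanism):

using System.Security.Permissions;

public static class NativeGateway
{
    // Declarative CAS check: every caller on the stack must have been
    // granted UnmanagedCode permission, which partially trusted code
    // normally is not; otherwise the call throws a SecurityException.
    [SecurityPermission(SecurityAction.Demand, UnmanagedCode = true)]
    public static void CallIntoWindows()
    {
        // ...a P/Invoke into a native Windows function would go here...
    }
}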

This is really just scratching the surface of the reflection + security issue, but hopefully it's enough information to lead you down the right path.



Answer 5:

I always try to lock things down to the minimal access required. Like tvanfosson stated, it's really about design more than security.

For example, I'll make an interface public and my implementations internal, and then provide a public factory class/methods to get the implementations. This pretty much forces consumers to always type against the interface, not the implementation.
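
Something along these lines (all the names here are made up for illustration):

// Public contract that consumers code against.
public interface IGreeter
{
    string Greet(string name);
}

// The implementation is internal, so it is invisible outside this assembly.
internal sealed class PoliteGreeter : IGreeter
{
    public string Greet(string name) => "Hello, " + name;
}

// The public factory is the only supported way to get an instance,
// and it hands back the interface, not the concrete type.
public static class GreeterFactory
{
    public static IGreeter Create() => new PoliteGreeter();
}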

That being said, a developer could use reflection to actually instantiate a new instance of an implementation type. There's nothing stopping him/her. However, I can rest easy knowing that I made it at least somewhat difficult to violate the design.
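
For example, against the hypothetical types above, a determined developer could still write something like this:

using System;

// Deliberately side-stepping the factory: look up the internal type by name
// and instantiate it via reflection. Nothing internal is referenced at
// compile time, so the compiler cannot object; only the design intent is violated.
Type hidden = typeof(IGreeter).Assembly.GetType("PoliteGreeter");
IGreeter greeter = (IGreeter)Activator.CreateInstance(hidden);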