Why would you make a whole class sealed/final?

Posted 2019-04-22 05:11

Question:

I understand the motivation for making individual methods of a class sealed/final, but what purpose does completely prohibiting inheritance from the class serve? While allowing certain methods to be overridden can break a class's assumptions, I can't see how allowing inheritance purely to add behavior, without overriding any existing behavior, could ever be a bad thing. If you really want to disallow overriding anything in your class, why not just make every method final but still allow inheritance for the purpose of adding behavior?

Answer 1:

An organisation or software dev team might want to enforce certain coding standards. For example, in order to improve readability, they might want attribute X of class Y to be modified only by method Z and nothing else. If the class were not final, some developer might extend class Y and add a method W, which could modify X in a different way and cause confusion when reading the code.
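As a hedged sketch of the scenario above (`Counter`, `increment`, and `value` are invented names standing in for Y, Z, and X), marking the class final guarantees there is exactly one mutation path:

```java
// Hypothetical sketch: Counter plays the role of class Y, the private
// field count plays attribute X, and increment() plays method Z.
final class Counter {
    private int count; // "X": may only change through increment()

    // "Z": the single sanctioned way to modify count
    void increment() {
        count++;
    }

    int value() {
        return count;
    }
}
// Because Counter is final, no one can extend it and add a method "W"
// that mutates count through a second, surprising path.
```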



Answer 2:

There are some good discussions about this in the "Why String is final in Java" question: Why is String class declared final in Java?

The basic summary in that case is that String is immutable, and if it could be subclassed, a subclass could make it mutable.
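A minimal, hypothetical sketch of how subclassing can break immutability (`ImmutablePoint` and `SneakyPoint` are invented names; `java.lang.String` itself is final precisely so this cannot happen to it):

```java
// Hypothetical: an immutable-looking class that is NOT final.
class ImmutablePoint {
    private final int x;
    ImmutablePoint(int x) { this.x = x; }
    int getX() { return x; }
}

// A subclass can shadow the state and override the getter, so callers
// holding an "ImmutablePoint" reference may observe changing values.
class SneakyPoint extends ImmutablePoint {
    private int mutableX;
    SneakyPoint(int x) { super(x); this.mutableX = x; }
    void setX(int x) { this.mutableX = x; }
    @Override int getX() { return mutableX; }
}
```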



Answer 3:

More generally, a class can be made final in order to preserve the invariants that make it work. I highly recommend Joshua Bloch's "Effective Java" for an excellent discussion on this and related issues.



Answer 4:

Think about it this way: you spent the last three months working on a base class for your own framework that others are going to use, and the framework's whole consistency rests on the code written in this class. Modifying this class may cause bugs, errors, and misbehavior in your framework. How do you prevent programmers from extending it and overriding one of its methods?

In addition, there are classes that are supposed to be immutable, such as the java.lang.String class in Java; extending such a class and changing its behavior could make a big mess!

Finally, marking a class or method as final can sometimes improve performance, but I have never tested this, and I see no reason on earth to make a class or method final for a tiny performance gain that is negligible compared to the time needed for everything else the program does.



Answer 5:

I think it depends.

If I have an internal application, I wouldn't use final at the class level by default; the code just gets too cluttered. In some circumstances the final modifier is even really annoying when I want to refactor safely (see the term "programming by difference"). In some cases this "final by default" mania caused big problems when I tried to refactor.

On the other hand, if I design an API or framework where the code is more extension-driven, I would use final for classes more aggressively. There, a wrong extension can be a big problem, and the final modifier is a safe way to prevent such programming mistakes.

But I am opposed to using final by default, as many do. It often has more drawbacks than advantages. To me, a default should be the sensible, more common setting.



Answer 6:

As far as I know, it's about maintaining invariants.

In an analogous situation, for example, you could make your attributes public, but then you'd lose control over their values, because clients could change them directly and break properties you'd like to maintain. So, just as you prohibit clients from accessing some (most, actually) fields directly in order to maintain an invariant, you also make your class or method final to maintain an invariant about it.

A common example I see has to do with immutability, which can be broken if a class isn't final.

The problem is not really when a user extends a class and uses it only in his own system, living with his own "mess" in isolation from everything else. The real problem is that classes, methods, attributes, and objects don't exist in isolation.

Say you have a collection of classes CC1 that collaborate towards some goal, and one of those classes, CC1_A, should "produce" only immutable objects (or satisfy any other invariant you'd like). Then you go ahead and extend CC1_A, making CC2_AX (an extended A in a second class collection). You can now use CC1 code, passing a CC2_AX where a CC1_A is required. However, CC1 was built assuming CC1_A objects are immutable, and your CC2_AX objects aren't (assume they aren't, just for the sake of the example). This can lead to serious trouble in your program. Such an issue can even happen internally in a class: methods of CC1_A probably assume CC1_A objects are immutable, so methods that CC2_AX inherits from CC1_A can fail because CC2_AX breaks the immutability property.
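A hedged Java sketch of the CC1_A / CC2_AX scenario (`Label`, `MutableLabel`, and `Banner` are invented stand-ins): code built on the immutability assumption caches a derived value once, and the mutable subclass silently invalidates it.

```java
// Plays the role of CC1_A: intended to produce immutable objects.
class Label {
    private final String text;
    Label(String text) { this.text = text; }
    String text() { return text; }
}

// Plays the role of CC2_AX: an extension that breaks the invariant.
class MutableLabel extends Label {
    private String current;
    MutableLabel(String text) { super(text); this.current = text; }
    void setText(String text) { this.current = text; }
    @Override String text() { return current; }
}

// CC1 code written against the immutability assumption: it caches
// the uppercase text at construction, expecting text() never to change.
class Banner {
    private final String cached;
    Banner(Label label) { this.cached = label.text().toUpperCase(); }
    String display() { return cached; }
}
```

If `Label` were final, passing a mutable stand-in into `Banner` would be impossible and the cached value could never go stale.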

In our systems today, we have many classes, and a lot of what we do is implicit, so we don't always think about the invariants we'd like to maintain. This doesn't make things easier.

By the way, some other invariants include things like:

  • The sort method should be stable.
  • The <such> method uses the <other such> algorithm.
  • This <class> List implementation should use a singly linked list implementation as described <some place>.

It goes on and on. Some classes are useful only because they maintain stronger invariants (for example, the use of a particular algorithm), and extension allows people to easily break that sort of thing. The reason this is all a problem comes back to these things not living in isolation.