"sealed" keyword in C#, Final in Java.
Since I almost never create any diagrams and I only use classes that are already written (from frameworks), even after years I still don't know why someone would "lock" a class so that it can never be extended/inherited.
Is it useful? Is there any harm in keeping every class open to inheritance and throwing away the possibility to "seal" it?
Sorry to ask this in 2012, when OOP is trivial, but I would like a good explanation and/or a good source for reading, because to me the feature seems useless and I can't believe it is really such a simple concept.
Yet everywhere I search, the answer is the same: "it marks a class to prevent it from being inherited by others."
1. Security - an inheriting class might gain access to internal parts of the base class, breaking encapsulation.
2. Preserving the contract - an inheriting class might break the contract provided by the base class. See the Liskov Substitution Principle.
3. Immutability - a special case of 2: an inheriting class can introduce mutable fields into an otherwise immutable class.
4. Performance - the virtual machine can aggressively optimize such classes, e.g. assume that methods are never overridden and avoid virtual calls.
5. Simplicity - implementing methods like equals() is much simpler when you don't have to worry about inheritance.

The reason is that if you allow subclassing, and place no restrictions on which methods a subclass can override, then updates to the base class that preserve its own behavior can still break subclasses. In other words, it's unsafe to inherit from a class that wasn't specifically designed to be extended (by designating specific methods as overridable and making all others final). This is called the Fragile Base Class Problem - see this paper for an example, and this paper for a more thorough analysis of the issue.

If the base class wasn't designed for inheritance and it's part of a public API (maybe a library), you're in serious trouble. You have no control over who subclasses your code, and you have no way of knowing whether a change is safe for the subclasses just by looking at the base class. The bottom line is that you either design for unlimited inheritance or disallow it completely.
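To make that concrete, here is a sketch of the classic counting-set example (my illustration, not taken from the linked papers; the name CountingHashSet is made up). The subclass silently depends on the fact that HashSet.addAll happens to be implemented on top of add - a detail the base class never promised and is free to change:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

// Tries to count insertions by overriding add() and addAll().
class CountingHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c); // internally calls add(), so elements get counted twice
    }

    public int getAddCount() {
        return addCount;
    }

    public static void main(String[] args) {
        CountingHashSet<String> s = new CountingHashSet<>();
        s.addAll(List.of("a", "b", "c"));
        System.out.println(s.getAddCount()); // prints 6, not 3
    }
}
```

Whether the count is correct depends entirely on base-class implementation details, so a later release of the base class can break the subclass without changing its own documented behavior.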
Note that it's possible to have a finite number of subclasses. That can be achieved with a private constructor and inner classes:
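A minimal sketch of that pattern in Java (the empty class bodies are just placeholders for real state and behavior):

```java
public abstract class Base {
    private Base() {} // only code inside Base can call this constructor

    // The complete, fixed set of subclasses, all nested inside Base:
    public static final class SubA extends Base { /* ... */ }
    public static final class SubB extends Base { /* ... */ }
    public static final class SubC extends Base { /* ... */ }
}

// A top-level class elsewhere cannot declare "class SubD extends Base":
// its implicit super() call has no access to the private constructor.
```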
The inner classes have access to the private constructor, but top-level classes don't, so you can only subclass it from the inside. This allows you to express the notion that "a value of type Base can be a SubA OR SubB OR SubC". There is no danger here, because Base will generally be empty (you're not really inheriting anything from Base) and all the subclasses are under your control.
You might think this is a code smell since you're aware of the types of the subclasses. But you should realize that this alternative way of implementing abstractions is complementary to interfaces. When you have a finite number of subclasses, it's easy to add new functions (each one handles a fixed number of subclasses), but hard to add new types (every existing function needs to be updated to handle the new subclass). When you use an interface or allow unlimited subclassing, it's easy to add a new type (implement a fixed number of methods), but hard to add new methods (you need to update every class). One's strength is the other's weakness. For a more detailed discussion on the topic, see On Understanding Data Abstraction, Revisited.
EDIT: My mistake; the paper I linked talks about a different duality (ADTs vs objects/interfaces). What I was actually thinking of is the Expression Problem.
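As a rough illustration of that trade-off, using the Base hierarchy sketched above (the class and method names here are made up): a new operation is just one more method of this shape, while adding a SubD would force this method, and every other one written like it, to change.

```java
final class BaseOps {
    // Easy: a new operation only has to handle the fixed set of subclasses.
    static String describe(Base value) {
        if (value instanceof Base.SubA) return "SubA";
        if (value instanceof Base.SubB) return "SubB";
        if (value instanceof Base.SubC) return "SubC";
        throw new AssertionError("unreachable: Base has no other subclasses");
    }
    // Hard: adding a new subclass SubD means revisiting describe() and
    // every other function written in this style.
}
```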
There are less severe reasons to avoid inheritance as well. Suppose you start with class A, then implement some new feature in subclass B, and later add another feature in subclass C. Then you realize you need both features, but you can't create a subclass BC that extends both B and C. It's possible to get bitten by this even if you design for inheritance and avoid the fragile base class problem. Other than the pattern I showed above, most uses of inheritance are better replaced with composition - for example, using the Strategy Pattern (or simply using higher-order functions, if your language supports them).
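For example, a rough sketch of the composition alternative (the names Feature, featureB and featureC are made up): the behavior that subclasses B and C would have added becomes small strategy objects, and "B plus C" is just a different combination passed to A rather than a new subclass.

```java
import java.util.List;

// The varying behavior is extracted into a strategy interface.
interface Feature {
    String apply(String input);
}

// A is composed with whatever features it needs instead of being subclassed.
final class A {
    private final List<Feature> features;

    A(List<Feature> features) {
        this.features = features;
    }

    String run(String input) {
        String result = input;
        for (Feature f : features) {
            result = f.apply(result);
        }
        return result;
    }
}

class Demo {
    public static void main(String[] args) {
        Feature featureB = s -> s.toUpperCase(); // what subclass B would have added
        Feature featureC = s -> s + "!";         // what subclass C would have added

        // "BC" is no longer a new subclass, just a different composition.
        A both = new A(List.of(featureB, featureC));
        System.out.println(both.run("hello"));   // prints HELLO!
    }
}
```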