I have been told at some stage at university (and have subsequently read in umpteen places) that instanceof should only be used as a 'last resort'. With this in mind, is anyone able to tell me whether the following code of mine is such a last resort? I have had a look around on Stack Overflow but cannot quite find a similar scenario - perhaps I have missed it?
private void allocateUITweenManager() {
    for (GameObject go : mGameObjects) {
        if (go instanceof GameGroup) {
            ((GameGroup) go).setUITweenManager(mUITweenManager);
        }
    }
}
where

- mGameObjects is an array of GameObject, only some of which are GameGroup instances
- GameGroup is a subclass of the abstract class GameObject
- GameGroup implements the interface UITweenable, which has the method setUITweenManager()
- GameObject does not implement the interface UITweenable
I suppose I could equally (and probably should) replace GameGroup in my code above with UITweenable - I would be asking the same question.

Is there another way of doing this that avoids the instanceof? This code cannot fail, as such (I think, right?), but given the bad press instanceof seems to get, have I committed some cardinal sin of OOP somewhere along the line that has me using instanceof here?

Thanks in advance!
I have another suggestion of a way to avoid instanceof. Unless you are using a generic factory, at the moment when you create a GameObject you know what concrete type it is. So what you can do is pass any GameGroups you create an observable object, and allow them to add listeners to it. It would work something like this:
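A minimal sketch of the idea, assuming the question's GameGroup, UITweenable and UITweenManager types (the exact listener plumbing here is illustrative, not a definitive implementation):

import java.util.ArrayList;
import java.util.List;

// Observable that tells interested objects about the current UITweenManager.
public class UITweenManagerInformer {
    private final List<UITweenable> listeners = new ArrayList<UITweenable>();
    private UITweenManager current;

    public void addListener(UITweenable listener) {
        listeners.add(listener);
        if (current != null) {
            listener.setUITweenManager(current); // bring late registrants up to date
        }
    }

    public void setCurrent(UITweenManager manager) {
        current = manager;
        for (UITweenable l : listeners) {
            l.setUITweenManager(manager);        // notify every listener of the change
        }
    }
}

public class GameGroup extends GameObject implements UITweenable {
    private UITweenManager mUITweenManager;

    // The informer is a constructor parameter, so a GameGroup cannot exist
    // without registering itself for UITweenManager updates.
    public GameGroup(UITweenManagerInformer informer) {
        informer.addListener(this);
    }

    @Override
    public void setUITweenManager(UITweenManager uiTweenManager) {
        mUITweenManager = uiTweenManager;
    }
}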
What draws me to this solution is:

- Because a UITweenManagerInformer is a constructor parameter to GameGroup, you cannot forget to pass it one, whereas with an instance method you might forget to call it.
- It makes intuitive sense to me that information an object needs (like the way a GameGroup needs knowledge of the current UITweenManager) should be passed as a constructor parameter -- I like to think of these as prerequisites for an object existing. If you don't have knowledge of the current UITweenManager, you shouldn't create a GameGroup, and this solution enforces that.
- instanceof is never used.
You could declare setUITweenManager in GameObject with an implementation that does nothing.

You could create a method that returns an iterator for all UITweenable instances in an array of GameObject instances (sketched below).

And there are other approaches that effectively hide the dispatching within some abstraction; e.g. the Visitor or Adapter patterns.
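A rough sketch of that second suggestion, assuming the question's GameObject and UITweenable types (the helper class and method names are illustrative): the instanceof test and the cast are confined to one filtering method.

import java.util.ArrayList;
import java.util.List;

public final class UITweenables {
    private UITweenables() {}

    // Filters an array of GameObject down to its UITweenable members,
    // so the instanceof check lives in exactly one place.
    public static Iterable<UITweenable> in(GameObject[] gameObjects) {
        List<UITweenable> result = new ArrayList<UITweenable>();
        for (GameObject go : gameObjects) {
            if (go instanceof UITweenable) {
                result.add((UITweenable) go);
            }
        }
        return result;
    }
}

The allocation loop then no longer mentions GameGroup at all:

for (UITweenable t : UITweenables.in(mGameObjects)) {
    t.setUITweenManager(mUITweenManager);
}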
Not really (IMO).
The worst problem with instanceof is when you start using it to test for implementation classes. And the reason that is particularly bad is that it makes it hard to add extra classes, etcetera. Here the instanceof UITweenable stuff doesn't seem to introduce that problem, because UITweenable seems to be more fundamental to the design.

When you make these sorts of judgements, it is best to understand the reasons why the (supposedly) bad construct or usage is claimed to be bad. Then you look at your specific use-case and decide whether those reasons apply, and whether the alternative you are considering is really better in your use-case.
The reason why instanceof is discouraged is that in OOP we should not examine objects' types from outside. Instead, the idiomatic way is to let objects themselves act using overridden methods. In your case, one possible solution could be to define boolean setUITweenManager(...) on GameObject and let it return true if setting the manager was possible for a particular object. However, if this pattern occurs in many places, the top-level classes can get quite polluted. Therefore sometimes instanceof is the 'lesser evil'.
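A minimal sketch of that idea, assuming the question's GameObject, GameGroup and UITweenManager types (the boolean return value is the only change):

public abstract class GameObject {
    // Default: an ordinary GameObject has no use for a UITweenManager.
    public boolean setUITweenManager(UITweenManager uiTweenManager) {
        return false;
    }
}

public class GameGroup extends GameObject {
    private UITweenManager mUITweenManager;

    @Override
    public boolean setUITweenManager(UITweenManager uiTweenManager) {
        mUITweenManager = uiTweenManager;
        return true;   // this object really does use the manager
    }
}

The allocation loop then needs neither instanceof nor a cast:

for (GameObject go : mGameObjects) {
    go.setUITweenManager(mUITweenManager);
}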
is "lesser evil".The problem with this OPP approach is that each object must "know" all its possible use cases. If you need a new feature that works on your class hierarchy, you have to add it to the classes themselves, you can't have it somewhere separate, like in a different module. This can be solved in a general way using the visitor pattern, as others suggested. The visitor pattern describes the most general way to examine objects, and becomes even more useful when combined with polymorphism.
Note that other languages (in particular functional languages) use a different principle. Instead of letting objects "know" how they perform every possible action, they declare data types that have no methods of their own; code that uses them then examines how they were constructed, using pattern matching on algebraic data types. As far as I know, the closest language to Java that has pattern matching is Scala. There is an interesting paper about how Scala implements pattern matching, which compares several possible approaches: Matching Objects With Patterns. Burak Emir, Martin Odersky, and John Williams.
In summary: in OOP you can easily extend the data types (for example by adding subclasses), but adding new functions (methods) requires making changes to many classes. With ADTs it's easy to add new functions, but extending the data types requires modifying many functions.