I keep hitting this problem when building game engines where my classes want to look like this:
interface Entity {
    draw(): void;
}

class World {
    entities: Entity[] = [];

    draw(): void {
        // Ask every entity to draw itself, in no particular order.
        for (const e of this.entities) {
            e.draw();
        }
    }
}
That's just a simplified sketch of how the drawing happens. Each entity subclass implements its own drawing, and the world loops through all the entities in no particular order and tells them to draw themselves one by one.
But with shader-based graphics, this tends to be horribly inefficient or even infeasible. Each entity type is probably going to have its own shader program, and to minimize program changes, all entities of a given type need to be drawn together. Simple entity types, like particles, may also want to aggregate their drawing in other ways, such as sharing one big vertex array. And it gets really hairy with blending and the like, where some entity types need to be rendered at particular times relative to others, or even multiple times for different passes.
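For illustration, drawing grouped by shader would look something like the sketch below (the shaderId field is an assumption I'm adding here, not something the World above has):

    // Sketch: group entities by shader program so each program is bound only once per frame.
    interface BatchableEntity {
        shaderId: number;
        draw(): void;
    }

    function drawGrouped(entities: BatchableEntity[]): void {
        const byShader = new Map<number, BatchableEntity[]>();
        for (const e of entities) {
            const bucket = byShader.get(e.shaderId);
            if (bucket) {
                bucket.push(e);
            } else {
                byShader.set(e.shaderId, [e]);
            }
        }
        for (const [, bucket] of byShader) {
            // bind the shader program for this bucket once here...
            for (const e of bucket) {
                e.draw();
            }
        }
    }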
What I normally end up with is some sort of renderer singleton for each entity class that keeps a list of all instances and draws them all at once. That isn't so bad, since it separates the drawing from the game logic. But each renderer needs to figure out which subset of entities to draw, and it needs access to several different parts of the graphics pipeline. This is where my object model tends to get messy, with lots of duplicated code, tight coupling, and other bad things.
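Roughly, each of those renderers ends up looking something like this (ParticleRenderer, register, and drawAll are just placeholder names for illustration):

    // Rough shape of a per-type renderer singleton; the names are placeholders.
    class Particle {
        constructor(public x = 0, public y = 0) {}
    }

    class ParticleRenderer {
        private static readonly instance = new ParticleRenderer();
        static get(): ParticleRenderer {
            return ParticleRenderer.instance;
        }

        private readonly particles: Particle[] = [];

        register(p: Particle): void {
            this.particles.push(p);
        }

        drawAll(): void {
            // Bind the particle shader once and fill one shared vertex array,
            // then issue a single draw call for all registered particles.
            const vertices: number[] = [];
            for (const p of this.particles) {
                vertices.push(p.x, p.y); // append this particle's vertices to the shared buffer
            }
            // ...upload `vertices` and draw them with one call...
        }
    }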
So my question is: what is a good architecture for this kind of game drawing that is efficient, versatile, and modular?
Use a two-stage approach: first loop through all entities, but instead of drawing, have them insert references to themselves into a shared draw-batch list. Then sort the list by OpenGL state and shader use, and after sorting insert state-changer objects at every state transition.
Finally, iterate through the list, executing the drawing routine of each object it references.
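A minimal sketch of that idea (DrawItem, sortKey, and flush are illustrative names, and the sort key is assumed to encode shader program and blend state):

    // Sketch of the two-stage approach; names and the sort-key encoding are assumptions.
    interface DrawItem {
        sortKey: number;   // e.g. encodes shader program id and blend/depth state
        execute(): void;   // issues the actual draw call for this entity
    }

    class DrawList {
        private items: DrawItem[] = [];

        // Stage 1: entities submit themselves instead of drawing immediately.
        submit(item: DrawItem): void {
            this.items.push(item);
        }

        // Stage 2: sort by state, then execute, changing GL state only on transitions.
        flush(): void {
            this.items.sort((a, b) => a.sortKey - b.sortKey);
            let currentKey: number | null = null;
            for (const item of this.items) {
                if (item.sortKey !== currentKey) {
                    currentKey = item.sortKey;
                    // a state-changer step goes here: bind shader, set blending, etc.
                }
                item.execute();
            }
            this.items = [];
        }
    }

Each entity's draw() then becomes a submit() call, and the world calls flush() once per frame after every entity has been visited.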
This is not an easy question to answer, since there are many ways to deal with the problem. A good idea is to look at how some game/rendering engines handle it. A good starting point would be Ogre, since it's well documented and open source.
As far as I know, it separates the vertex data from the material components (shaders) through the built-in material scripts. The renderer itself knows what mesh is to be drawn in what order and with what shader (and its passes).
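Translated into a generic sketch (this is not Ogre's actual API; the names are made up just to show the mesh/material separation):

    // Generic sketch of separating geometry from material; not Ogre's real API.
    interface Material {
        shaderName: string;
        passes: number;            // how many passes this material is rendered in
    }

    interface Mesh {
        vertexData: Float32Array;
    }

    class Renderable {
        constructor(public mesh: Mesh, public material: Material) {}
    }

    class RenderQueue {
        private readonly renderables: Renderable[] = [];

        add(r: Renderable): void {
            this.renderables.push(r);
        }

        render(): void {
            // The renderer, not the entity, decides the order: sort by material so each
            // shader (and each of its passes) is set up once per group of meshes.
            this.renderables.sort((a, b) =>
                a.material.shaderName.localeCompare(b.material.shaderName));
            for (const r of this.renderables) {
                for (let pass = 0; pass < r.material.passes; pass++) {
                    // bind r.material's shader for this pass and draw r.mesh here
                }
            }
        }
    }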
I know this answer is a bit vague, but I hope it gives you a useful hint.