There are standard libraries of shader functions, such as for Cg. But are there resources that tell you how long each one takes? I'm thinking of something similar to how you used to be able to look up how many cycles each ASM op would take.
There are no reliable resources that will tell you how long various standard shader functions take. Not even for a particular piece of hardware.
The reason for this has to do with instruction scheduling and the way modern shader architectures work. Take a simple sin function. Let's say the hardware has a dedicated unit to compute the sine of a value, so it's not manually evaluating a Taylor series or some such. However, let's also say that it takes a sequence of 4 opcodes to actually compute it. Therefore, sin would take "4 cycles".

However, all of those opcodes are scalar operations. While they're going on, you could in fact have some 3-vector dot products, or on some hardware 4-vector dot products, executing at the same time on the same processor. Therefore, if the hardware can co-issue 4-vector dot products alongside scalar operations, the number of cycles it takes to execute a sin and a matrix-vector multiply is... still 4.
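To make that concrete, here is a small sketch of such a shader, written as the GLSL string you would hand to glShaderSource() from C. The source, the uniform names, and the "4 opcodes for sin" figure are all illustrative assumptions about a hypothetical GPU, not vendor data:

```c
/* Hypothetical fragment shader illustrating scalar/vector co-issue.
 * Nothing here is specific to any real piece of hardware. */
static const char *frag_src =
    "#version 330 core\n"
    "uniform mat4  u_tint;   /* hypothetical matrix uniform */\n"
    "uniform float u_phase;  /* hypothetical scalar uniform */\n"
    "in  vec4 v_color;\n"
    "out vec4 o_color;\n"
    "void main() {\n"
    "    /* Scalar work: assume this lowers to ~4 scalar opcodes. */\n"
    "    float s = sin(u_phase);\n"
    "    /* Vector work: mat4 * vec4 is four 4-wide dot products. On hardware\n"
    "       that can co-issue one vector op per scalar op, these can overlap\n"
    "       the sin opcodes, so the pair still costs ~4 cycles; neither\n"
    "       instruction has an isolated price. */\n"
    "    vec4 tinted = u_tint * v_color;\n"
    "    o_color = tinted * s;\n"
    "}\n";
```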
So how much did the sin operation cost? If you take out the matrix multiply, nothing gets faster. If you take out the sin, nothing gets faster either. How much does it cost? You can't say, because the cost of a single operation is irrelevant; the only measurable quantity is the cost of the shader itself.

Ultimately, all you can do is build your shader reasonably and measure how it performs. Unless you have low-level debugging tools that let you inspect the underlying hardware shader assembly (and no, DX assembly isn't good enough), that's really the best you can do.
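"Measure how it performs" usually means timing the whole shader on the GPU and comparing variants. Here is a minimal sketch using OpenGL's GL_TIME_ELAPSED timer query; the draw_scene_with() helper and the program handles are hypothetical placeholders for your own rendering code, and a real measurement should average over many frames:

```c
#include <stdio.h>
#include <GL/glew.h>   /* or your GL loader of choice; assumes a current 3.3+ context */

/* Hypothetical helper: binds `program` and issues the draw calls you care about. */
extern void draw_scene_with(GLuint program);

/* Returns GPU time in nanoseconds spent drawing with `program`. */
static GLuint64 time_shader(GLuint program)
{
    GLuint   query;
    GLuint64 ns = 0;

    glGenQueries(1, &query);
    glBeginQuery(GL_TIME_ELAPSED, query);
    draw_scene_with(program);
    glEndQuery(GL_TIME_ELAPSED);

    /* Blocks until the result is available; acceptable for offline measurement. */
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
    glDeleteQueries(1, &query);
    return ns;
}

/* Compare two whole shaders, e.g. one with the sin() and one without.
 * The difference tells you what that shader pays, not what sin() "costs". */
void compare(GLuint with_sin, GLuint without_sin)
{
    printf("with sin:    %llu ns\n", (unsigned long long)time_shader(with_sin));
    printf("without sin: %llu ns\n", (unsigned long long)time_shader(without_sin));
}
```

Timing whole variants like this is consistent with the point above: you can observe that removing the sin changes (or doesn't change) the shader's total time, but you cannot back out a per-instruction cycle count from it.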