I have been reading an article on cryptography, and I thought to myself, "How does a 32-bit computer actually perform operations on a 512-bit value, or even a 64-bit value?"
Would anyone be able to point me in the right direction? Maybe I'm at a loss as to how to properly express what I want to know, but Google searches haven't been very helpful in figuring this out.
Thanks!
This is an expansion of GregS's comment.
Suppose I know all one hundred single-digit * single-digit multiplications (from 0 * 0 = 0 up to 9 * 9 = 81), and someone asks me to calculate 561 * 845. I could say, "sorry, I can't multiply numbers that large"; or, I could remember my childhood education and do this:
     561
     845 *
--------
    2805
   2244
  4488   +
========
  474045
which requires only that I can do, in any given step, a multiplication within my known range, or an addition (with carry).
Now, suppose that instead of decimal digits, each of the symbols above was a 32-bit word; and instead of me, we had a processor that can multiply two 32-bit words into a 64-bit result and add (with carry) 32-bit words. Voila: we have a system for doing arbitrarily large binary multiplications.
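To make that concrete, here is a minimal C sketch of that scheme (my own illustration, not code from any particular library): the operands are arrays of 32-bit "digits" stored least significant word first, each partial product is a 32x32 -> 64-bit multiply, and the high half is carried into the next column.

    #include <stdint.h>
    #include <stdio.h>

    /* Schoolbook multiplication of big numbers stored as arrays of 32-bit
     * "digits" (least significant word first). Each partial product fits in
     * 64 bits, just like a single-digit product fits on paper. */
    static void bigmul(const uint32_t *a, size_t alen,
                       const uint32_t *b, size_t blen,
                       uint32_t *result)          /* alen + blen words */
    {
        for (size_t i = 0; i < alen + blen; i++)
            result[i] = 0;

        for (size_t i = 0; i < alen; i++) {
            uint64_t carry = 0;
            for (size_t j = 0; j < blen; j++) {
                /* 32x32 -> 64-bit multiply, plus what is already in this
                 * column, plus the carry from the previous column. */
                uint64_t t = (uint64_t)a[i] * b[j] + result[i + j] + carry;
                result[i + j] = (uint32_t)t;   /* low 32 bits stay here   */
                carry = t >> 32;               /* high 32 bits carry over */
            }
            result[i + blen] += (uint32_t)carry;
        }
    }

    int main(void)
    {
        /* 0x100000001 * 0x200000003, each operand split into two 32-bit words */
        uint32_t a[2] = { 0x00000001u, 0x00000001u };
        uint32_t b[2] = { 0x00000003u, 0x00000002u };
        uint32_t r[4];

        bigmul(a, 2, b, 2, r);
        printf("%08x %08x %08x %08x\n", r[3], r[2], r[1], r[0]);
        return 0;
    }

With two-word operands this is overkill, but the same two loops handle a 512-bit value as sixteen 32-bit words; nothing changes except the array lengths.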
32 bits at a time. There are flags to indicate carry, overflow, etc., which allow multi-word arithmetic by means of repeated operations.
A 32-bit processor can split larger numbers across more than one register, although operating on them is slower than working within a single 32-bit register. For addition/subtraction it simply performs the arithmetic starting from the least significant register and carries the status bit over into the next most significant register. It gets a bit more complex for multiplication/division, but the main downside is still performance.
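As a sketch of the addition case (again just an illustration, not what any particular compiler emits), here is a 64-bit value held in two 32-bit words in C, where the carry flag is simulated by checking whether the low-word sum wrapped around:

    #include <stdint.h>
    #include <stdio.h>

    /* A 64-bit value held as two 32-bit words, least significant first. */
    typedef struct { uint32_t lo, hi; } u64_words;

    /* Add the low words first; if the result wrapped around, that is the
     * carry, which then gets added into the high words. */
    static u64_words add64(u64_words a, u64_words b)
    {
        u64_words r;
        r.lo = a.lo + b.lo;
        uint32_t carry = (r.lo < a.lo);   /* wrap-around means a carry occurred */
        r.hi = a.hi + b.hi + carry;
        return r;
    }

    int main(void)
    {
        u64_words a = { 0xFFFFFFFFu, 0x00000001u };  /* 0x1FFFFFFFF */
        u64_words b = { 0x00000001u, 0x00000000u };  /* 0x000000001 */
        u64_words r = add64(a, b);                   /* expect 0x200000000 */
        printf("0x%08X%08X\n", r.hi, r.lo);
        return 0;
    }

On a real 32-bit CPU the compiler typically emits an add followed by an add-with-carry instruction, using the hardware carry flag instead of the comparison.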
See http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic for more information.
Also this question: How do programming languages handle huge number arithmetic
The operations are performed in software, or by specialized hardware (e.g. for encryption). For examples of such libraries, see GMP and MPFR.
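As a small illustration of the library route (assuming GMP is installed; link with -lgmp), multiplying two numbers far wider than a machine word takes only a few calls:

    #include <gmp.h>
    #include <stdio.h>

    int main(void)
    {
        mpz_t a, b, product;

        /* Arbitrarily large decimal operands; GMP stores each value as an
         * array of machine-word "limbs" and does the multi-word arithmetic
         * internally. */
        mpz_init_set_str(a, "123456789012345678901234567890", 10);
        mpz_init_set_str(b, "987654321098765432109876543210", 10);
        mpz_init(product);

        mpz_mul(product, a, b);
        gmp_printf("%Zd\n", product);

        mpz_clears(a, b, product, NULL);
        return 0;
    }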
In a nutshell, it's roughly what Mitch said: the processor just breaks the value down and works on it piece by piece. That takes the processor more time, which is part of why word sizes have grown and why most operating systems come in 32-bit and 64-bit versions. If you are interested in this, take some classes on assembly and machine language.