When doing calculations on very large numbers, where built-in data types such as double or int64 fall short, a separate class to handle such large numbers may be needed.
Does anyone care to offer an efficient algorithm on how best to do this?
There are two solutions to your problem:
Easy way: Use an external library such as the GNU MP Bignum Library and forget about the implementation details.
Hard way: Design your own class/structure containing multiple primitive variables such as int64s, and define basic math operations for them using operator overloading (in C++) or via methods named add, subtract, multiply, shift, etc. (in Java and other OO languages).
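To make the hard way concrete, here is a minimal sketch in Java of such a class; the class name, digit representation (base-10 digits, least significant first), and the single `add` operation are illustrative choices, not a complete or efficient library (real implementations use larger bases and more operations):

```java
// Minimal big-number sketch: non-negative integers stored as
// base-10 digits, least significant digit first. Illustrative only.
class BigNum {
    private final int[] digits; // digits[0] is the least significant

    BigNum(String s) {
        digits = new int[s.length()];
        for (int i = 0; i < s.length(); i++) {
            digits[i] = s.charAt(s.length() - 1 - i) - '0';
        }
    }

    private BigNum(int[] d) { digits = d; }

    // Schoolbook addition: add digit by digit, propagating the carry.
    BigNum add(BigNum other) {
        int n = Math.max(digits.length, other.digits.length);
        int[] result = new int[n + 1];
        int carry = 0;
        for (int i = 0; i < n; i++) {
            int sum = carry
                    + (i < digits.length ? digits[i] : 0)
                    + (i < other.digits.length ? other.digits[i] : 0);
            result[i] = sum % 10;
            carry = sum / 10;
        }
        result[n] = carry;
        return new BigNum(result);
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        int i = digits.length - 1;
        while (i > 0 && digits[i] == 0) i--; // skip leading zeros
        for (; i >= 0; i--) sb.append(digits[i]);
        return sb.toString();
    }
}
```

Subtraction, multiplication, and division follow the same pattern but need more care (borrows, carries across partial products, sign handling).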
Let me know if you need any further help. I have done this a couple of times in the past.
In C#, as of .NET 4.0, use the System.Numerics.BigInteger type.
You're asking about arbitrary-precision arithmetic, a subject on which books have been written. If you just want a simple and fairly efficient BigNum library for C#, you might want to check out IntX.
Using the built-in features of a language works for me.
Java has BigInteger and BigDecimal, and Python automagically switches to a similar arbitrary-precision object if a number gets out of the range of an integer. As for other languages, though, I have no idea.
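For example, Java's BigInteger grows as needed, so a computation like a factorial that overflows a 64-bit long long before n = 25 just works (the `Factorial` class name here is illustrative):

```java
import java.math.BigInteger;

public class Factorial {
    // 21! already overflows a 64-bit long, but BigInteger grows as needed.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}
```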
I hate re-inventing the wheel.
Doing your own BigNum library is complicated, so I'd say, like jjnguy: use whatever your language offers as a library.
In .NET, reference the VisualJ DLL, as it contains the BigInteger and BigDecimal classes. You should, however, be aware of some limitations of these libraries, such as the lack of a square root method.
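When the library lacks a square root, one common workaround is to implement an integer square root yourself with Newton's method on top of the existing operations. A sketch using java.math.BigInteger (the `BigSqrt` class name is illustrative; the same iteration can be written against any BigInteger-like API):

```java
import java.math.BigInteger;

public class BigSqrt {
    // Integer square root via Newton's method: returns floor(sqrt(n)).
    static BigInteger sqrt(BigInteger n) {
        if (n.signum() < 0) throw new ArithmeticException("negative argument");
        if (n.signum() == 0) return BigInteger.ZERO;
        // Initial guess: a power of two guaranteed to be >= sqrt(n).
        BigInteger x = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
        while (true) {
            // Newton step: y = (x + n/x) / 2; the sequence decreases
            // monotonically until it reaches floor(sqrt(n)).
            BigInteger y = x.add(n.divide(x)).shiftRight(1);
            if (y.compareTo(x) >= 0) return x; // converged
            x = y;
        }
    }
}
```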