Why is there no limit on the size of numbers in Lisp?

Posted 2019-02-21 16:52

I can even evaluate (expt 32768 32768), and I get:

476170470581645852036305042887575891541065808607552399123930385521914333389668342420684974786564569494856176035326322058077805659331026192708460314150258592864177116725943603718461857357598351152301645904403697613233287231227125684710820209725157101726931323469678542580656697935045997268352998638215525166389437335543602135433229604645318478604952148193555853611059596230656

Tags: numbers lisp
3 answers
我欲成王,谁敢阻挡
#2 · 2019-02-21 17:23

Lisp automatically switches to bignum (arbitrary-precision) arithmetic when it sees this kind of thing. But there is still a limitation: make your numbers big enough, and you may need more bits to represent them than there are atoms in the known universe. Then your system memory will probably be exhausted. :)
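You can watch the switch happen at the REPL. Here is a minimal sketch in Common Lisp; the exact fixnum range is implementation-dependent, but the behaviour is standard:

;; most-positive-fixnum is the largest integer that fits in a tagged
;; machine word; one past it is transparently promoted to a bignum.
(typep most-positive-fixnum 'fixnum)       ; => T
(typep (1+ most-positive-fixnum) 'fixnum)  ; => NIL -- it is now a bignum
(+ most-positive-fixnum 1)                 ; no overflow, just a bigger integer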

冷血范
#3 · 2019-02-21 17:29

You may find some clues by turning the question around: Why have a limitation on the size of numbers?

There are some practical reasons for limiting the size of numbers. The representation of numbers in certain other programming languages is tied closely to the hardware architecture, with the size of numbers limited by the number of bits in the processor's registers.

Fortunately, in Lisp you can usually think on a more abstract level, which frees the programmer from such low-level details. The trade-off is that arbitrary-precision arithmetic is typically slower than arithmetic on numbers that fit in the processor's registers.
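If you do need register-speed arithmetic in Common Lisp, you can opt back in with type declarations when you know the values fit. A sketch (the name sum-fixnums is just illustrative, and the declarations shift overflow responsibility onto you):

;; Generic + must check at run time whether its arguments are fixnums
;; or bignums; declaring fixnum lets the compiler emit plain machine
;; arithmetic for this function.
(defun sum-fixnums (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))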

PS: Also check out how elegantly Lisp handles fractions. Not turning fractions into floating-point numbers allows precise arithmetic. For example: (+ 1/3 2/7) => 13/21
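A quick sketch of the contrast (the exact printed float digits depend on the implementation's default float format):

(+ 1/3 2/7)               ; => 13/21, an exact rational
(+ (/ 1.0 3) (/ 2.0 7))   ; => approximately 0.619, already rounded
(float 13/21)             ; convert to a float only when you finally need one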

趁早两清
#4 · 2019-02-21 17:37

Here is another perspective.

One reason for wanting arbitrary-precision integers is that a Lisp implementation with efficient, unboxed integers but no arbitrary-precision math is crippled compared to other languages on the same platform.

Emacs Lisp packs integers into a single word along with a type tag, and because it didn't have bignum arithmetic (it may have gained it since, but it didn't at one point), integers were limited to something like 28 bits on a 32-bit platform. That is crippled compared to C.

32 bits is crippled, but 28 is extra crippled. It makes interoperability with other programs hard, for instance when reading binary structures that contain 32-bit integers.
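For instance, here is a sketch in Common Lisp of pulling a 32-bit unsigned value out of a binary stream (the name read-u32 is made up, and the stream is assumed to be opened with :element-type '(unsigned-byte 8)). On an implementation whose fixnums are narrower than 32 bits, the result simply becomes a bignum instead of wrapping around:

;; Read four bytes (big-endian) and combine them into one unsigned
;; 32-bit integer.  Values too large for a fixnum come back as bignums
;; rather than being truncated.
(defun read-u32 (stream)
  (let ((b0 (read-byte stream))
        (b1 (read-byte stream))
        (b2 (read-byte stream))
        (b3 (read-byte stream)))
    (logior (ash b0 24) (ash b1 16) (ash b2 8) b3)))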

For example, the GNU Emacs newsreader broke (on 32-bit machines) when connecting to servers whose article numbers overflowed 28 bits. So it's worth having bignums just to get to 32 bits.

This is not why bignums were introduced into Lisp, of course. According to the paper The Evolution of Lisp, bignums were first added to MacLisp in 1970 or 1971 because some users doing symbolic math with Macsyma needed them.

But if you're implementing a Lisp with type-tagged integers, you will feel the pain and want to implement bignums just to get around the bits you've lost to the type tag.
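A toy sketch of why the tag eats into the range; the 2-bit tag and the names below are invented for illustration and don't describe any particular implementation:

;; Suppose the low 2 bits of each word are a type tag and 00 means
;; "fixnum".  Then only word-size minus 2 bits are left for the value.
(defconstant +tag-bits+ 2)
(defconstant +fixnum-tag+ 0)

(defun box-fixnum (n)
  "Encode the small integer N into a tagged word."
  (logior (ash n +tag-bits+) +fixnum-tag+))

(defun unbox-fixnum (word)
  "Decode a tagged word back into the integer it represents."
  (ash word (- +tag-bits+)))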

You could solve this problem by having fixed 32-bit integers that are heap-allocated, alongside unboxed ones that are 31, 30, or 28 bits wide (whatever your tag size leaves you). But that is very little payoff for the complexity: with that scheme you already have to handle all the combinations in your math routines (unboxed-unboxed, unboxed-boxed, boxed-unboxed, and so on), and with a bunch more effort you can do bignums.

Go bignum or go home, know what I mean? :)

Think bignum, be bignum!

It takes a bignum man to admit he is fixnum.

Walk (the code, expanding macros) softly, and carry a big num!

The more bignum they are, the harder they expt.
