I just don't know: is there any technical reason for this? Is it more difficult to implement a compiler for a language with weak typing? What is it?
I'm guessing that languages with dynamic (duck) typing employ lazy evaluation, which is favored by lazy programmers, and lazy programmers don't like to write compilers ;-)
Languages with weak typing can be compiled; for example, Perl 5 and most versions of Lisp are compiled languages. However, the performance benefits of compilation are often lost, because much of the work the language runtime has to perform is determining what type a dynamic variable actually has at a particular moment.
Take for example the following code in Perl:
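(The snippet itself did not survive extraction; here is a minimal sketch of the kind of Perl code the answer describes. The variable `$x` and the `print` call come from the surrounding text; the specific values assigned are assumptions.)

```perl
use strict;
use warnings;

my $x = 5;           # $x currently holds an integer
$x = 'hello';        # now it holds a string
$x = [ 1, 2, 3 ];    # now an array reference
$x = 5.4;            # now a floating-point number

print $x, "\n";      # the runtime must check what $x holds before printing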
It is obviously quite difficult for the compiler to determine what type $x really has at a given point. At the time of the print statement, work has to be done at runtime to figure that out. In a statically typed language the type is fully known at compile time, so runtime performance can be better.
There are basically two reasons to use static typing over duck typing: static error checking at compile time, and performance.
If you have an interpreted language, then there's no compile time for static error checking to take place. There goes one advantage. Furthermore, if you already have the overhead of the interpreter, then the language is already not going to be used for anything performance critical, so the performance argument becomes irrelevant. This explains why statically typed interpreted languages are rare.
Going the other way, duck typing can be emulated to a large degree in statically typed languages, without totally giving up the benefits of static typing, via mechanisms such as generics/templates, common interfaces, or variant ("any") types.
This explains why there are few dynamically typed, compiled languages.
The reason you do early binding (static typing) is performance. With early binding, the compiler finds the location of the method at compile time, so at run time the program already knows where it lives.
However, with late binding, the runtime has to go searching for a method matching the name the client code called. And with many, many method calls in a program, that lookup is what makes dynamic languages 'slow'.
But sure, you could create a statically compiled language that does late binding, which would negate many of the advantages of static compilation.
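A rough Perl illustration of this lookup cost (the package and method names here are made up for the example): the method is identified by a string that is only known at run time, so the call cannot be resolved at compile time and the runtime must search the package's symbol table.

```perl
use strict;
use warnings;

package Dog;
sub new   { return bless {}, shift }
sub speak { return "Woof" }

package main;

my $obj = Dog->new;

# The method name arrives as data at run time, so Perl has to
# look it up in the package before it can make the call.
my $name   = 'speak';
my $method = $obj->can($name) or die "no such method: $name";
print $obj->$method(), "\n";    # prints "Woof"
```

In a statically typed, early-bound language the equivalent call would compile down to a jump to a known address (or a fixed vtable slot), with no name lookup at all.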
A guess:
In a compiled language, one system (the compiler) gets to see all the code, which is what strong type checking requires. Interpreters generally see only a small part of the program at a time, and so can't do that sort of cross-checking.
But this isn't a hard and fast rule - it would be quite possible to make a strongly typed interpreted language, but that would go against the sort of "loose" general feel of interpreted languages.
Some languages are designed to run as fast as possible along the normal path, at the cost of terrible performance in exceptional conditions; that trade-off pushes them toward very strong typing. Others are meant to balance the two, accepting some additional runtime processing.
At times, there's way more in play than just typing. Take ActionScript, for instance. Version 3.0 introduced stronger typing, but then again ECMAScript lets you modify classes as you see fit at runtime, and ActionScript has support for dynamic classes. Very neat, but the fact that they state dynamic classes should not be used in "standard" builds means it's a no-no for when you need to play it safe.