Taking out all of the obvious caveats related to benchmarks and benchmark comparison, is there any study (an array of well documented and unbiased tests) that compares the average execution speed of the two mentioned languages? Thanks
The best comparison that I am aware of is The Computer Language Benchmarks Game.
It compares speed, memory use and source code size for (currently) 10 benchmarks across a large number of programming languages. The implementations of the benchmarks are user-submitted and there are continuous improvements, so the standings shift around somewhat.
For Java vs C#, the comparison is currently OpenJDK vs C# on .NET Core.
The results are close, but .NET Core is slightly faster on most benchmarks.
This could invite a flamewar, but hey, it's my opinion...
First of all, if your site runs too slow, you'll need better hardware. Or more hardware, with load balancing added to your site. If you're Google, you'll end up with huge server farms of thousands of machines that seem to provide lightning-fast performance, even if the sites themselves are developed in some outdated language.
Most languages have been optimized to get the best out of their target hardware, and each will outperform the others in some specific environment with a specific setup. Comparing languages won't make much sense because there are thousands of techniques to optimize them even further. Besides, how do you measure performance to begin with?
Let's say that you look at execution speed. Language X might perform some task twice as fast as language Y. However, language Y is better optimized for running multiple threads and could serve 10 times more users than language X in the same amount of time. Combine the two and Y would be much faster in a client/server environment.
But then install X on an optimized system, with an operating system that X likes a lot, additional hardware, a gadzillion bytes of memory and disk space, and a dozen or so CPUs, and X will beat Y again.
So, what's the value of knowing the execution speed of languages? Or even of comparing languages? How do we know that the ones who created the report weren't biased? How are we sure that they used the most optimal settings for every language? Did they even write the most optimal code to be tested? And how do you compare the end results anyway? Execution time per user, or total execution time?
Back to languages X and Y. X runs a task in 2 seconds but supports only 10 threads at a time, thus 10 users. Y needs 6 seconds but serves up to 50 threads at a time. Per user, X is faster. But look at throughput: in 6 seconds, X has processed 30 users (10 every 2 seconds), while Y has processed 50. Thus Y outperforms X when you have lots of users, while X outperforms Y with a small number of users (or threads). It would be interesting to see reports mentioning this, right?
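The X-vs-Y arithmetic above can be sketched as a tiny throughput model. Everything here is hypothetical (the class name, the batch times, and the thread counts are just the made-up numbers from the example, not measurements of any real language):

```java
// Hypothetical latency-vs-throughput model for the X vs Y example above.
// X: 2 s per task, 10 concurrent threads. Y: 6 s per task, 50 threads.
public class ThroughputModel {

    // Users fully served after `seconds`, given per-task time and thread
    // count. Counts only completed batches (integer division on purpose).
    static int usersServed(int seconds, int taskSeconds, int threads) {
        return (seconds / taskSeconds) * threads;
    }

    public static void main(String[] args) {
        int x = usersServed(6, 2, 10); // X: 3 batches of 10 -> 30 users
        int y = usersServed(6, 6, 50); // Y: 1 batch of 50  -> 50 users
        System.out.println("After 6 s: X served " + x + ", Y served " + y);
        // X wins per-request latency (2 s vs 6 s);
        // Y wins total throughput once there are enough concurrent users.
    }
}
```

The point of the sketch is only that "fastest" depends on which of the two numbers (latency or throughput) your workload actually cares about.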
Here's a nice recent study on the subject:
Both languages are evolving in terms of performance. At least in 2013, Microsoft's own Joe Duffy blogged: