How does .NET/Mono performance compare to JVM?

Published 2019-05-21 07:22

I've been wondering this for a while. Please give quantitative data to support your answer.

Related: Is there a significant difference between Windows, Mac, and Linux JVM performance?

3 Answers

forever°为你锁心
Answer #2 · 2019-05-21 07:50

Check out The Computer Language Shootout. They compare numerous languages and VMs, including Mono and the JVM.

我想做一个坏孩纸
Answer #3 · 2019-05-21 07:50

If you are looking at startup speed and memory and CPU utilization on desktops, this might help, since it uses the latest releases as of July 2010.

Explosion°爆炸
Answer #4 · 2019-05-21 07:54

Shudo has published comparisons and released source code for microbenchmarks such as Linpack, SciMark, etc.

Sample results for linpack:

[Image: Linpack benchmark results chart]
(source: shudo.net)

But the last update was over 5 years ago, apparently using .NET v1.0 or v1.1, and mostly JVM v1.4. That means it is several releases out of date on both Java and .NET. You could get the source and generate your own results.


I just did this: downloaded linpack.java and linpack.cs, compiled them, and ran them. I used Java v1.6.0.11 from Sun and C# 3.0 (the 3.5 compiler) from Microsoft, both on Windows Vista.

For a Linpack problem size of 2000, I got 17.6s for the Java version and 17.78s for the C# version.

Then I ran it again and got 18.14s for Java and 17.31s for C#.
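A single-trial timing like the ones above can be taken with `System.nanoTime()`. This is only a sketch with a stand-in numeric workload, not Shudo's actual Linpack harness:

```java
// Minimal single-trial timing sketch (stand-in workload, not Linpack).
public class SingleTrial {
    // Placeholder numeric kernel: sum of square roots.
    static double work(int n) {
        double acc = 0.0;
        for (int i = 1; i <= n; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        double result = work(2_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Note: process startup, JIT compilation, and cache warm-up are
        // all folded into this single number.
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
    }
}
```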


What are you measuring?

This illustrates some challenges in performance measurement and testing.

  • First:
    a single trial is not enough to draw meaningful conclusions. Usually you should measure many trials and then average the results.

  • Second:
    just what are you measuring? If you run a single trial of solving a single problem, then the cost of starting the process is included in the time, as well as JIT time, and any cost to fill any buffers. This may or may not be what you really want to measure.

    In many cases it is the steady-state performance you want to measure. For example, in a server process, you start it once and it runs for months. The startup cost is therefore negligible, and what you want to measure and optimize for is request throughput at a given minimum average response time. Or in a "fat client", what you want is the time required to do video processing, and you don't want to measure process startup costs.

  • Third:
    what is the workload? Linpack and Scimark might be interesting if you do lots of floating point math. But what if you don't? What if you do lots of XML shredding, or string parsing, or integer math, or database interaction, or HTML page generation. What if your code does lots of thread management, or uses thread-synchronization primitives? What about communications and IO? What if a key portion of a transaction is encryption, or digital signature creation and verification? These benchmarks won't tell you anything about those other scenarios. For that reason you might call them micro-benchmarks.

    You need a benchmark that correctly models what you want to evaluate.
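The first two points above can be sketched as a small harness: warm the code up so JIT compilation settles into a steady state, then run several timed trials and average them. The workload here is a hypothetical placeholder, not one of the benchmark kernels discussed above:

```java
// Hedged sketch of a benchmark harness: warm-up phase for the JIT,
// then multiple timed trials averaged together.
public class Bench {
    // Placeholder workload: harmonic sum.
    static double workload(int n) {
        double acc = 0.0;
        for (int i = 1; i <= n; i++) acc += 1.0 / i;
        return acc;
    }

    // Returns the mean elapsed time in milliseconds over `trials` runs.
    static double measure(int n, int warmups, int trials) {
        // Untimed warm-up runs so JIT compilation happens before measurement.
        for (int i = 0; i < warmups; i++) workload(n);
        double totalMs = 0.0;
        for (int i = 0; i < trials; i++) {
            long t0 = System.nanoTime();
            workload(n);
            totalMs += (System.nanoTime() - t0) / 1e6;
        }
        return totalMs / trials;
    }

    public static void main(String[] args) {
        System.out.printf("mean over 10 trials: %.3f ms%n",
                          measure(5_000_000, 3, 10));
    }
}
```

Whether the warm-up runs should be discarded depends on what you are measuring: discard them for steady-state (server) scenarios, keep them if startup cost matters to you.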


See also:
Trivial mathematical problems as language benchmarks
